* [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD
@ 2017-06-16  5:40 Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 01/38] eal: add support for 24 40 and 48 bit operations Shreyansh Jain
                   ` (38 more replies)
  0 siblings, 39 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Series based on net-next/master (bc16bce)

Introduction
============

The RFC for this series was posted earlier; see [R3].

This patch series adds NXP's QorIQ-Layerscape DPAA architecture based
bus driver, mempool driver and PMD. This version of the driver supports
the NXP LS1043A/LS1023A and LS1046A/LS1026A families of network SoCs [R1].

DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
components designed for high-speed network packet processing. The
architecture provides the infrastructure to support simplified sharing
of networking interfaces and accelerators, both by multiple CPU cores
and by the accelerators themselves.

This patchset introduces the following:
1. DPAA Bus (drivers/bus/dpaa)
 The core of the DPAA bus is built around 3 main hardware blocks: QMan,
 the Queue Manager; BMan, the Buffer Manager; and FMan, the Frame
 Manager. The patches introduce the layers necessary to expose these
 DPAA hardware blocks to the RTE framework.

2. DPAA Mempool (drivers/mempool/dpaa)
 The BMan, or Buffer Manager, block of DPAA provides a hardware-offloaded
 mempool. These patches add a driver to manage the BMan block, supporting
 mempool creation and deletion as well as buffer acquire and release
 through the RTE mempool APIs (see the usage sketch after this list).

3. DPAA PMD (drivers/net/dpaa)
 The Poll Mode Driver for DPAA NIC Interfaces.
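
A short usage sketch for the mempool driver (illustrative only; it
assumes the driver registers its mempool ops under the name "dpaa",
and the pool sizes are arbitrary):

    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("pkt_pool", 8192,
                                  RTE_MBUF_DEFAULT_BUF_SIZE, 256,
                                  sizeof(struct rte_pktmbuf_pool_private),
                                  rte_socket_id(), 0);
    /* route alloc/free through the BMan-backed handlers */
    rte_mempool_set_ops_byname(mp, "dpaa", NULL);
    rte_pktmbuf_pool_init(mp, NULL);
    /* buffers populated here are acquired/released via BMan */
    rte_mempool_populate_default(mp);
    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);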

Changes from RFC
================
 - Patch restructuring and checkpatch fixes
   (a few checkpatch warnings remain; I have ignored them for now)
 - Fixes for issues observed during internal testing

Patch Layout
============

01: Add EAL support for 24, 40 and 48 bit operations
02~17: Add DPAA Bus support and features, incrementally
18: Add Documentation
19~21: Add DPAA Mempool support
22~38: Add PMD and its various features, incrementally

Dependency
==========

This patch series depends on:

[D1] Patch: http://dpdk.org/dev/patchwork/patch/24478/
     This patch adds a macro for bus logging to the RTE logging framework

References
==========

[R1] http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-layerscape-arm-processors:QORIQ-ARM
[R2] http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf
[R3] RFC: http://dpdk.org/ml/archives/dev/2017-May/066675.html

Hemant Agrawal (2):
  eal: add support for 24 40 and 48 bit operations
  bus/dpaa: add compatibility and helper macros

Shreyansh Jain (36):
  config: add NXP DPAA SoC build configuration
  bus/dpaa: introduce NXP DPAA Bus driver skeleton
  bus/dpaa: add OF parser for device scanning
  bus/dpaa: introducing FMan configurations
  bus/dpaa: add FMan hardware operations
  bus/dpaa: enable DPAA IOCTL portal driver
  bus/dpaa: add layer for interrupt emulation using pthread
  bus/dpaa: add routines for managing a RB tree
  bus/dpaa: add QMAN interface driver
  bus/dpaa: add QMan driver core routines
  bus/dpaa: add BMAN driver core
  bus/dpaa: add support for FMAN frame queue lookup
  bus/dpaa: add BMan hardware interfaces
  bus/dpaa: add fman flow control threshold setting
  bus/dpaa: integrate DPAA Bus with hardware blocks
  doc: add NXP DPAA PMD documentation
  mempool/dpaa: add support for NXP DPAA Mempool
  maintainers: claim ownership of DPAA Mempool driver
  drivers: enable compilation of DPAA Mempool driver
  net/dpaa: add NXP DPAA PMD driver skeleton
  config: enable NXP DPAA PMD compilation
  net/dpaa: add support for Tx and Rx queue setup
  net/dpaa: add support for MTU update
  net/dpaa: add support for jumbo frames
  net/dpaa: add support for link status update
  net/dpaa: add support for device info
  net/dpaa: add support for promiscuous toggle
  net/dpaa: add support for multicast toggle
  net/dpaa: add support for basic stats
  net/dpaa: add support for MAC address update
  net/dpaa: add support for flow control
  net/dpaa: add support for hashed RSS
  net/dpaa: add support for packet type parsing
  net/dpaa: add support for checksum offload
  net/dpaa: add support for Scattered Rx
  net/dpaa: add packet dump for debugging

 MAINTAINERS                                        |    9 +
 config/common_base                                 |    5 +
 config/defconfig_arm64-dpaa-linuxapp-gcc           |   62 +
 doc/guides/nics/dpaa.rst                           |  360 +++
 doc/guides/nics/features/dpaa.ini                  |   23 +
 doc/guides/nics/index.rst                          |    1 +
 drivers/bus/Makefile                               |    3 +
 drivers/bus/dpaa/Makefile                          |   84 +
 drivers/bus/dpaa/base/fman/fman.c                  |  537 +++++
 drivers/bus/dpaa/base/fman/fman_hw.c               |  634 +++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c          |  205 ++
 drivers/bus/dpaa/base/fman/of.c                    |  576 +++++
 drivers/bus/dpaa/base/qbman/bman.c                 |  394 +++
 drivers/bus/dpaa/base/qbman/bman.h                 |  550 +++++
 drivers/bus/dpaa/base/qbman/bman_driver.c          |  323 +++
 drivers/bus/dpaa/base/qbman/bman_priv.h            |  125 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c           |  104 +
 drivers/bus/dpaa/base/qbman/dpaa_sys.c             |  136 ++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h             |   65 +
 drivers/bus/dpaa/base/qbman/process.c              |  331 +++
 drivers/bus/dpaa/base/qbman/qman.c                 | 2497 ++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h                 |  888 +++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c          |  288 +++
 drivers/bus/dpaa/base/qbman/qman_priv.h            |  314 +++
 drivers/bus/dpaa/dpaa_bus.c                        |  414 ++++
 drivers/bus/dpaa/include/compat.h                  |  330 +++
 drivers/bus/dpaa/include/dpaa_bits.h               |   65 +
 drivers/bus/dpaa/include/dpaa_list.h               |  101 +
 drivers/bus/dpaa/include/dpaa_rbtree.h             |  143 ++
 drivers/bus/dpaa/include/fman.h                    |  474 ++++
 drivers/bus/dpaa/include/fsl_bman.h                |  375 +++
 drivers/bus/dpaa/include/fsl_fman.h                |  189 ++
 drivers/bus/dpaa/include/fsl_fman_crc64.h          |  263 +++
 drivers/bus/dpaa/include/fsl_qman.h                | 2038 ++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h                 |  107 +
 drivers/bus/dpaa/include/netcfg.h                  |   96 +
 drivers/bus/dpaa/include/of.h                      |  191 ++
 drivers/bus/dpaa/include/process.h                 |  107 +
 drivers/bus/dpaa/rte_bus_dpaa_version.map          |   46 +
 drivers/bus/dpaa/rte_dpaa_bus.h                    |  169 ++
 drivers/bus/dpaa/rte_dpaa_logs.h                   |   92 +
 drivers/mempool/Makefile                           |    2 +
 drivers/mempool/dpaa/Makefile                      |   65 +
 drivers/mempool/dpaa/dpaa_mempool.c                |  265 +++
 drivers/mempool/dpaa/dpaa_mempool.h                |   78 +
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map  |    6 +
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c           |    2 +-
 drivers/net/Makefile                               |    2 +
 drivers/net/dpaa/Makefile                          |   68 +
 drivers/net/dpaa/dpaa_ethdev.c                     |  883 +++++++
 drivers/net/dpaa/dpaa_ethdev.h                     |  144 ++
 drivers/net/dpaa/dpaa_rxtx.c                       |  697 ++++++
 drivers/net/dpaa/dpaa_rxtx.h                       |  256 ++
 drivers/net/dpaa/rte_pmd_dpaa_version.map          |    4 +
 .../common/include/generic/rte_byteorder.h         |   78 +
 mk/machine/dpaa/rte.vars.mk                        |   61 +
 mk/rte.app.mk                                      |    6 +
 57 files changed, 16330 insertions(+), 1 deletion(-)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h
 create mode 100644 drivers/bus/dpaa/include/of.h
 create mode 100644 drivers/bus/dpaa/include/process.h
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map
 create mode 100644 mk/machine/dpaa/rte.vars.mk

-- 
2.7.4

* [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  8:57   ` Bruce Richardson
  2017-10-02 10:16   ` Avi Kivity
  2017-06-16  5:40 ` [PATCH 02/38] config: add NXP DPAA SoC build configuration Shreyansh Jain
                   ` (37 subsequent siblings)
  38 siblings, 2 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Byte swap and LE<=>BE conversions for 24, 40 and 48 bit widths
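
For example, on a little-endian CPU a 48-bit value held in the low
bytes of a uint64_t is converted as follows (values are illustrative):

    uint64_t v  = 0x0000112233445566ULL;
    uint64_t be = rte_cpu_to_be_48(v);  /* == 0x0000665544332211ULL */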

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 .../common/include/generic/rte_byteorder.h         | 78 ++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/lib/librte_eal/common/include/generic/rte_byteorder.h b/lib/librte_eal/common/include/generic/rte_byteorder.h
index e00bccb..8903ff6 100644
--- a/lib/librte_eal/common/include/generic/rte_byteorder.h
+++ b/lib/librte_eal/common/include/generic/rte_byteorder.h
@@ -122,6 +122,84 @@ rte_constant_bswap64(uint64_t x)
 		((x & 0xff00000000000000ULL) >> 56);
 }
 
+/*
+ * An internal function to swap bytes of a 48-bit value.
+ */
+static inline uint64_t
+rte_constant_bswap48(uint64_t x)
+{
+	return  ((x & 0x0000000000ffULL) << 40) |
+		((x & 0x00000000ff00ULL) << 24) |
+		((x & 0x000000ff0000ULL) <<  8) |
+		((x & 0x0000ff000000ULL) >>  8) |
+		((x & 0x00ff00000000ULL) >> 24) |
+		((x & 0xff0000000000ULL) >> 40);
+}
+
+/*
+ * An internal function to swap bytes of a 40-bit value.
+ */
+static inline uint64_t
+rte_constant_bswap40(uint64_t x)
+{
+	return  ((x & 0x00000000ffULL) << 32) |
+		((x & 0x000000ff00ULL) << 16) |
+		((x & 0x0000ff0000ULL)) |
+		((x & 0x00ff000000ULL) >> 16) |
+		((x & 0xff00000000ULL) >> 32);
+}
+
+/*
+ * An internal function to swap bytes of a 24-bit value.
+ */
+static inline uint32_t
+rte_constant_bswap24(uint32_t x)
+{
+	return  ((x & 0x0000ffULL) << 16) |
+		((x & 0x00ff00ULL)) |
+		((x & 0xff0000ULL) >> 16);
+}
+
+#define rte_bswap24 rte_constant_bswap24
+#define rte_bswap40 rte_constant_bswap40
+#define rte_bswap48 rte_constant_bswap48
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+
+#define rte_cpu_to_le_24(x) (x)
+#define rte_cpu_to_le_40(x) (x)
+#define rte_cpu_to_le_48(x) (x)
+
+#define rte_cpu_to_be_24(x) rte_bswap24(x)
+#define rte_cpu_to_be_40(x) rte_bswap40(x)
+#define rte_cpu_to_be_48(x) rte_bswap48(x)
+
+#define rte_le_to_cpu_24(x) (x)
+#define rte_le_to_cpu_40(x) (x)
+#define rte_le_to_cpu_48(x) (x)
+
+#define rte_be_to_cpu_24(x) rte_bswap24(x)
+#define rte_be_to_cpu_40(x) rte_bswap40(x)
+#define rte_be_to_cpu_48(x) rte_bswap48(x)
+
+#else /* RTE_BIG_ENDIAN */
+
+#define rte_cpu_to_le_24(x) rte_bswap24(x)
+#define rte_cpu_to_le_40(x) rte_bswap40(x)
+#define rte_cpu_to_le_48(x) rte_bswap48(x)
+
+#define rte_cpu_to_be_24(x) (x)
+#define rte_cpu_to_be_40(x) (x)
+#define rte_cpu_to_be_48(x) (x)
+
+#define rte_le_to_cpu_24(x) rte_bswap24(x)
+#define rte_le_to_cpu_40(x) rte_bswap40(x)
+#define rte_le_to_cpu_48(x) rte_bswap48(x)
+
+#define rte_be_to_cpu_24(x) (x)
+#define rte_be_to_cpu_40(x) (x)
+#define rte_be_to_cpu_48(x) (x)
+#endif
 
 #ifdef __DOXYGEN__
 
-- 
2.7.4

* [PATCH 02/38] config: add NXP DPAA SoC build configuration
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 01/38] eal: add support for 24 40 and 48 bit operations Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 03/38] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
                   ` (36 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch adds a skeleton build configuration for the DPAA platform.
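
With this in place, a build for the DPAA target can be configured and
cross-compiled in the usual way (the toolchain prefix is an example):

    make config T=arm64-dpaa-linuxapp-gcc
    make CROSS=aarch64-linux-gnu-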

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc | 38 ++++++++++++++++++++
 mk/machine/dpaa/rte.vars.mk              | 61 ++++++++++++++++++++++++++++++++
 2 files changed, 99 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 mk/machine/dpaa/rte.vars.mk

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
new file mode 100644
index 0000000..19ac998
--- /dev/null
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -0,0 +1,38 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 Freescale Semiconductor, Inc.
+#   Copyright 2017 NXP.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+#include "defconfig_arm64-armv8a-linuxapp-gcc"
+
+# NXP (Freescale) - SoC Architecture with FMAN, QMAN & BMAN support
+CONFIG_RTE_MACHINE="dpaa"
+CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
diff --git a/mk/machine/dpaa/rte.vars.mk b/mk/machine/dpaa/rte.vars.mk
new file mode 100644
index 0000000..b24cedf
--- /dev/null
+++ b/mk/machine/dpaa/rte.vars.mk
@@ -0,0 +1,61 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright 2017 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+#   - can define ARCH variable (overridden by cmdline value)
+#   - can define CROSS variable (overridden by cmdline value)
+#   - define MACHINE_CFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+#   - can define CPU_CFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+MACHINE_CFLAGS += -march=armv8-a+crc
+
+ifdef CONFIG_RTE_ARCH_ARM_TUNE
+MACHINE_CFLAGS += -mtune=$(CONFIG_RTE_ARCH_ARM_TUNE:"%"=%)
+endif
-- 
2.7.4

* [PATCH 03/38] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 01/38] eal: add support for 24 40 and 48 bit operations Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 02/38] config: add NXP DPAA SoC build configuration Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 04/38] bus/dpaa: add compatibility and helper macros Shreyansh Jain
                   ` (35 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC
  To: dev; +Cc: ferruh.yigit, hemant.agrawal
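
Add the DPAA bus skeleton: bus registration with EAL, the driver
register/unregister API, device-driver matching and a probe stub,
along with the logging macros shared by the DPAA drivers.

A PMD is expected to hook into this bus roughly as follows (a sketch
only; the probe/remove callback names are hypothetical):

    static int dpaa_dev_probe(struct rte_dpaa_driver *drv,
                              struct rte_dpaa_device *dev);
    static int dpaa_dev_remove(struct rte_dpaa_device *dev);

    static struct rte_dpaa_driver rte_dpaa_pmd = {
        .driver_type = FSL_DPAA_ETH,
        .probe = dpaa_dev_probe,
        .remove = dpaa_dev_remove,
    };

    RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);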

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 MAINTAINERS                               |   5 +
 config/common_base                        |   3 +
 config/defconfig_arm64-dpaa-linuxapp-gcc  |   9 ++
 drivers/bus/Makefile                      |   3 +
 drivers/bus/dpaa/Makefile                 |  63 +++++++++++
 drivers/bus/dpaa/dpaa_bus.c               | 178 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |   7 ++
 drivers/bus/dpaa/rte_dpaa_bus.h           | 163 +++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_dpaa_logs.h          |  92 +++++++++++++++
 9 files changed, 523 insertions(+)
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h

diff --git a/MAINTAINERS b/MAINTAINERS
index f6095ef..803c2af 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -388,6 +388,11 @@ F: drivers/net/nfp/
 F: doc/guides/nics/nfp.rst
 F: doc/guides/nics/features/nfp.ini
 
+NXP dpaa
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+M: Shreyansh Jain <shreyansh.jain@nxp.com>
+F: drivers/bus/dpaa/
+
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/config/common_base b/config/common_base
index f6aafd1..56bd27c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -301,6 +301,9 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_MBOX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=n
+
 #
 # Compile NXP DPAA2 FSL-MC Bus
 #
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 19ac998..a189ad2 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -36,3 +36,12 @@
 # NXP (Freescale) - SoC Architecture with FMAN, QMAN & BMAN support
 CONFIG_RTE_MACHINE="dpaa"
 CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
+
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=y
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_TX=n
+
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 1e5b281..2dad392 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -33,6 +33,9 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mbuf librte_mempool librte_ring librte_ether
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += dpaa
+DEPDIRS-dpaa = $(core-libs)
+
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 DEPDIRS-fslmc = $(core-libs)
 
diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
new file mode 100644
index 0000000..f44f3c4
--- /dev/null
+++ b/drivers/bus/dpaa/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 NXP.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_BUS_DPAA=$(RTE_SDK)/drivers/bus/dpaa
+
+#
+# library name
+#
+LIB = librte_bus_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+
+# versioning export map
+EXPORT_MAP := rte_bus_dpaa_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	dpaa_bus.c
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
new file mode 100644
index 0000000..1c4627d
--- /dev/null
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -0,0 +1,178 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_bus.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+
+struct rte_dpaa_bus rte_dpaa_bus;
+
+
+static inline void
+dpaa_add_to_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static inline void
+dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+}
+static int
+rte_dpaa_bus_scan(void)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/* register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
+{
+	RTE_VERIFY(driver);
+
+	PMD_INIT_FUNC_TRACE();
+
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = &rte_dpaa_bus;
+}
+
+/* un-register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
+{
+	struct rte_dpaa_bus *dpaa_bus;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_bus = driver->dpaa_bus;
+
+	TAILQ_REMOVE(&dpaa_bus->driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = NULL;
+}
+
+static int
+rte_dpaa_device_match(struct rte_dpaa_driver *drv,
+		      struct rte_dpaa_device *dev)
+{
+	int ret = -1;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!drv || !dev) {
+		PMD_DRV_LOG(DEBUG, "Invalid drv or dev received.");
+		return ret;
+	}
+
+	if (drv->driver_type == dev->id.device_type) {
+		DPAA_BUS_LOG(INFO, "Device: %s matches for driver: %s",
+			    dev->name, drv->driver.name);
+		ret = 0; /* Found a match */
+	}
+
+	return ret;
+}
+
+static int
+rte_dpaa_bus_probe(void)
+{
+	int ret = -1;
+	struct rte_dpaa_device *dev;
+	struct rte_dpaa_driver *drv;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For each registered driver, and device, call the driver->probe */
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		TAILQ_FOREACH(drv, &rte_dpaa_bus.driver_list, next) {
+			ret = rte_dpaa_device_match(drv, dev);
+			if (ret)
+				continue;
+
+			if (!drv->probe)
+				continue;
+
+			ret = drv->probe(drv, dev);
+			if (ret)
+				DPAA_BUS_LOG(ERR, "Unable to probe");
+			break;
+		}
+	}
+	return 0;
+}
+
+struct rte_dpaa_bus rte_dpaa_bus = {
+	.bus = {
+		.scan = rte_dpaa_bus_scan,
+		.probe = rte_dpaa_bus_probe,
+	},
+	.device_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.device_list),
+	.driver_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.driver_list),
+	.device_count = 0,
+};
+
+RTE_REGISTER_BUS(FSL_DPAA_BUS_NAME, rte_dpaa_bus.bus);
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
new file mode 100644
index 0000000..8c1ea65
--- /dev/null
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -0,0 +1,7 @@
+DPDK_17.08 {
+	global:
+
+	rte_dpaa_driver_register;
+	rte_dpaa_driver_unregister;
+
+};
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
new file mode 100644
index 0000000..55e6793
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -0,0 +1,163 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __RTE_DPAA_BUS_H__
+#define __RTE_DPAA_BUS_H__
+
+#include <rte_bus.h>
+#include <rte_mempool.h>
+
+#define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
+
+#define DEV_TO_DPAA_DEVICE(ptr)	\
+		container_of(ptr, struct rte_dpaa_device, device)
+
+struct rte_dpaa_device;
+struct rte_dpaa_driver;
+
+/* DPAA Device and Driver lists for DPAA bus */
+TAILQ_HEAD(rte_dpaa_device_list, rte_dpaa_device);
+TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
+
+enum rte_dpaa_type {
+	FSL_DPAA_ETH = 1,
+	FSL_DPAA_CRYPTO,
+};
+
+struct rte_dpaa_bus {
+	struct rte_bus bus;
+	struct rte_dpaa_device_list device_list;
+	struct rte_dpaa_driver_list driver_list;
+	int device_count;
+};
+
+struct dpaa_device_id {
+	uint8_t fman_id; /**< Fman interface ID, for ETH type device */
+	uint8_t mac_id; /**< Fman MAC interface ID, for ETH type device */
+	uint16_t dev_id; /**< Device Identifier from DPDK */
+	enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
+};
+
+struct rte_dpaa_device {
+	TAILQ_ENTRY(rte_dpaa_device) next;
+	struct rte_device device;
+	struct rte_eth_dev *eth_dev;
+	struct rte_cryptodev *crypto_dev;
+	struct rte_dpaa_driver *driver;
+	struct dpaa_device_id id;
+	char name[RTE_ETH_NAME_MAX_LEN];
+};
+
+typedef int (*rte_dpaa_probe_t)(struct rte_dpaa_driver *dpaa_drv,
+				struct rte_dpaa_device *dpaa_dev);
+typedef int (*rte_dpaa_remove_t)(struct rte_dpaa_device *dpaa_dev);
+
+struct rte_dpaa_driver {
+	TAILQ_ENTRY(rte_dpaa_driver) next;
+	struct rte_driver driver;
+	struct rte_dpaa_bus *dpaa_bus;
+	enum rte_dpaa_type driver_type;
+	rte_dpaa_probe_t probe;
+	rte_dpaa_remove_t remove;
+};
+
+struct dpaa_portal {
+	uint32_t bman_idx; /**< BMAN Portal ID*/
+	uint32_t qman_idx; /**< QMAN Portal ID*/
+	uint64_t tid;/**< Parent Thread id for this portal */
+};
+
+/* TODO - this is costly, need to write a fast conversion routine */
+static inline void *rte_dpaa_mem_ptov(phys_addr_t paddr)
+{
+	const struct rte_memseg *memseg = rte_eal_get_physmem_layout();
+	int i;
+
+	for (i = 0; i < RTE_MAX_MEMSEG && memseg[i].addr != NULL; i++) {
+		if (paddr >= memseg[i].phys_addr && paddr <
+			memseg[i].phys_addr + memseg[i].len)
+			return (uint8_t *)(memseg[i].addr) +
+			       (paddr - memseg[i].phys_addr);
+	}
+
+	return NULL;
+}
+
+/**
+ * Register a DPAA driver.
+ *
+ * @param driver
+ *   A pointer to a rte_dpaa_driver structure describing the driver
+ *   to be registered.
+ */
+void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
+
+/**
+ * Unregister a DPAA driver.
+ *
+ * @param driver
+ *	A pointer to a rte_dpaa_driver structure describing the driver
+ *	to be unregistered.
+ */
+void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
+
+/**
+ * Initialize a DPAA portal
+ *
+ * @param arg
+ *	Per thread ID
+ *
+ * @return
+ *	0 in case of success, error otherwise
+ */
+int rte_dpaa_portal_init(void *arg);
+
+/**
+ * Cleanup a DPAA Portal
+ */
+void dpaa_portal_finish(void *arg);
+
+/** Helper for DPAA device registration from driver (eth, crypto) instance */
+#define RTE_PMD_REGISTER_DPAA(nm, dpaa_drv) \
+RTE_INIT(dpaainitfn_ ##nm); \
+static void dpaainitfn_ ##nm(void) \
+{\
+	(dpaa_drv).driver.name = RTE_STR(nm);\
+	rte_dpaa_driver_register(&dpaa_drv); \
+} \
+RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_DPAA_BUS_H__ */
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
new file mode 100644
index 0000000..c33b0c2
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -0,0 +1,92 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA_LOGS_H_
+#define _DPAA_LOGS_H_
+
+#include <rte_log.h>
+
+#define DPAA_BUS_LOG(level, fmt, args...) \
+	RTE_LOG(level, BUS, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_BUS
+#define DPAA_BUS_WARN(cond, fmt, args...) \
+	do {\
+		if (cond) \
+			DPAA_BUS_LOG(DEBUG, "WARN: " fmt, ##args); \
+	} while (0)
+#else
+#define DPAA_BUS_WARN(cond, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _DPAA_LOGS_H_ */
-- 
2.7.4

* [PATCH 04/38] bus/dpaa: add compatibility and helper macros
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (2 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 03/38] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 05/38] bus/dpaa: add OF parser for device scanning Shreyansh Jain
                   ` (34 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Linked list, bit operations and compatibility macros.
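
The list helpers follow the familiar kernel idiom; for example
(hypothetical element type, sketch only):

    struct my_obj {
        int v;
        struct list_head node;
    };

    COMPAT_LIST_HEAD(my_objs);          /* statically initialized head */
    struct my_obj *i, *j;

    /* elements are queued with list_add_tail(&obj->node, &my_objs) */
    list_for_each_entry_safe(i, j, &my_objs, node)
        list_del(&i->node);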

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/dpaa/include/compat.h    | 330 +++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/dpaa_bits.h |  65 +++++++
 drivers/bus/dpaa/include/dpaa_list.h | 101 +++++++++++
 3 files changed, 496 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h

diff --git a/drivers/bus/dpaa/include/compat.h b/drivers/bus/dpaa/include/compat.h
new file mode 100644
index 0000000..ce6136e
--- /dev/null
+++ b/drivers/bus/dpaa/include/compat.h
@@ -0,0 +1,330 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __COMPAT_H
+#define __COMPAT_H
+
+#include <sched.h>
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+#include <stdint.h>
+#include <stdlib.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <pthread.h>
+#include <linux/types.h>
+#include <stdbool.h>
+#include <ctype.h>
+#include <malloc.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <limits.h>
+#include <assert.h>
+#include <dirent.h>
+#include <inttypes.h>
+#include <error.h>
+#include <rte_byteorder.h>
+#include <rte_atomic.h>
+#include <rte_spinlock.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+
+/* The following definitions are primarily to allow the single-source driver
+ * interfaces to be included by arbitrary program code. Ie. for interfaces that
+ * are also available in kernel-space, these definitions provide compatibility
+ * with certain attributes and types used in those interfaces.
+ */
+
+/* Required compiler attributes */
+#define __maybe_unused	__rte_unused
+#define __always_unused	__rte_unused
+#define __packed	__rte_packed
+#define noinline	__attribute__((noinline))
+
+#define L1_CACHE_BYTES 64
+#define ____cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
+#define __stringify_1(x) #x
+#define __stringify(x)	__stringify_1(x)
+
+#ifdef ARRAY_SIZE
+#undef ARRAY_SIZE
+#endif
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
+
+/* Debugging */
+#define prflush(fmt, args...) \
+	do { \
+		printf(fmt, ##args); \
+		fflush(stdout); \
+	} while (0)
+
+#define pr_crit(fmt, args...)	 prflush("CRIT:" fmt, ##args)
+#define pr_err(fmt, args...)	 prflush("ERR:" fmt, ##args)
+#define pr_warn(fmt, args...)	 prflush("WARN:" fmt, ##args)
+#define pr_info(fmt, args...)	 prflush(fmt, ##args)
+
+#define ASSERT(x) do {\
+	if (!(x)) \
+		rte_panic("DPAA: %s\n", __stringify(x)); \
+} while (0)
+#define BUG_ON(x) ASSERT(!(x))
+
+/* Required types */
+typedef uint8_t		u8;
+typedef uint16_t	u16;
+typedef uint32_t	u32;
+typedef uint64_t	u64;
+typedef uint64_t	dma_addr_t;
+typedef cpu_set_t	cpumask_t;
+typedef uint32_t	phandle;
+typedef uint32_t	gfp_t;
+typedef uint32_t	irqreturn_t;
+
+#define IRQ_HANDLED	0
+#define request_irq	qbman_request_irq
+#define free_irq	qbman_free_irq
+
+#define __iomem
+#define GFP_KERNEL	0
+#define __raw_readb(p)	(*(const volatile unsigned char *)(p))
+#define __raw_readl(p)	(*(const volatile unsigned int *)(p))
+#define __raw_writel(v, p) {*(volatile unsigned int *)(p) = (v); }
+
+/* SMP stuff */
+#define DEFINE_PER_CPU(t, x)	__thread t per_cpu__##x
+#define get_cpu_var(x)		per_cpu__##x
+/* to be used as an upper-limit only */
+#define NR_CPUS			64
+
+/* Waitqueue stuff */
+typedef struct { }		wait_queue_head_t;
+#define DECLARE_WAIT_QUEUE_HEAD(x) int dummy_##x __always_unused
+#define wake_up(x)		do { } while (0)
+
+/* I/O operations */
+static inline u32 in_be32(volatile void *__p)
+{
+	volatile u32 *p = __p;
+	return rte_be_to_cpu_32(*p);
+}
+
+static inline void out_be32(volatile void *__p, u32 val)
+{
+	volatile u32 *p = __p;
+	*p = rte_cpu_to_be_32(val);
+}
+
+#define dcbt_ro(p) __builtin_prefetch(p, 0)
+#define dcbt_rw(p) __builtin_prefetch(p, 1)
+
+#define dcbz(p) { asm volatile("dc zva, %0" : : "r" (p) : "memory"); }
+#define dcbz_64(p) dcbz(p)
+#define hwsync() rte_rmb()
+#define lwsync() rte_wmb()
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+#define dccivac(p) { asm volatile("dc civac, %0" : : "r"(p) : "memory"); }
+
+#define dcbit_ro(p) \
+	do { \
+		dccivac(p);						\
+		asm volatile("prfm pldl1keep, [%0, #64]" : : "r" (p));	\
+	} while (0)
+
+#define barrier() { asm volatile ("" : : : "memory"); }
+#define cpu_relax barrier
+
+static inline uint64_t mfatb(void)
+{
+	uint64_t ret, ret_new, timeout = 200;
+
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret));
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	while (ret != ret_new && timeout--) {
+		ret = ret_new;
+		asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	}
+	BUG_ON(!timeout && (ret != ret_new));
+	return ret * 64;
+}
+
+/* Spin for a few cycles without bothering the bus */
+static inline void cpu_spin(int cycles)
+{
+	uint64_t now = mfatb();
+
+	while (mfatb() < (now + cycles))
+		;
+}
+
+/* Qman/Bman API inlines and macros; */
+#ifdef lower_32_bits
+#undef lower_32_bits
+#endif
+#define lower_32_bits(x) ((u32)(x))
+
+#ifdef upper_32_bits
+#undef upper_32_bits
+#endif
+#define upper_32_bits(x) ((u32)(((x) >> 16) >> 16))
+
+#define cpu_to_be64(d) rte_cpu_to_be_64(d)
+#define cpu_to_be32(d) rte_cpu_to_be_32(d)
+#define cpu_to_be16(d) rte_cpu_to_be_16(d)
+
+#define be64_to_cpu(d) rte_be_to_cpu_64(d)
+#define be32_to_cpu(d) rte_be_to_cpu_32(d)
+#define be16_to_cpu(d) rte_be_to_cpu_16(d)
+
+#define cpu_to_be48(x) rte_cpu_to_be_48(x)
+#define be48_to_cpu(x) rte_be_to_cpu_48(x)
+
+#define cpu_to_be40(x) rte_cpu_to_be_40(x)
+#define be40_to_cpu(x) rte_be_to_cpu_40(x)
+
+#define cpu_to_be24(x) rte_cpu_to_be_24(x)
+#define be24_to_cpu(x) rte_be_to_cpu_24(x)
+
+/* When copying aligned words or shorts, try to avoid memcpy() */
+/* memcpy() stuff - when you know alignments in advance */
+#define CONFIG_TRY_BETTER_MEMCPY
+
+#ifdef CONFIG_TRY_BETTER_MEMCPY
+static inline void copy_words(void *dest, const void *src, size_t sz)
+{
+	u32 *__dest = dest;
+	const u32 *__src = src;
+	size_t __sz = sz >> 2;
+
+	BUG_ON((unsigned long)dest & 0x3);
+	BUG_ON((unsigned long)src & 0x3);
+	BUG_ON(sz & 0x3);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_shorts(void *dest, const void *src, size_t sz)
+{
+	u16 *__dest = dest;
+	const u16 *__src = src;
+	size_t __sz = sz >> 1;
+
+	BUG_ON((unsigned long)dest & 0x1);
+	BUG_ON((unsigned long)src & 0x1);
+	BUG_ON(sz & 0x1);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_bytes(void *dest, const void *src, size_t sz)
+{
+	u8 *__dest = dest;
+	const u8 *__src = src;
+
+	while (sz--)
+		*(__dest++) = *(__src++);
+}
+#else
+#define copy_words memcpy
+#define copy_shorts memcpy
+#define copy_bytes memcpy
+#endif
+
+/* Allocator stuff */
+#define kmalloc(sz, t)	malloc(sz)
+#define vmalloc(sz)	malloc(sz)
+#define kfree(p)	{ if (p) free(p); }
+static inline void *kzalloc(size_t sz, gfp_t __foo __rte_unused)
+{
+	void *ptr = malloc(sz);
+
+	if (ptr)
+		memset(ptr, 0, sz);
+	return ptr;
+}
+
+static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
+{
+	void *p;
+
+	if (posix_memalign(&p, 4096, 4096))
+		return 0;
+	memset(p, 0, 4096);
+	return (unsigned long)p;
+}
+
+/* Spinlock stuff */
+#define spinlock_t		rte_spinlock_t
+#define __SPIN_LOCK_UNLOCKED(x)	RTE_SPINLOCK_INITIALIZER
+#define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+#define spin_lock_init(x)	rte_spinlock_init(x)
+#define spin_lock_destroy(x)
+#define spin_lock(x)		rte_spinlock_lock(x)
+#define spin_unlock(x)		rte_spinlock_unlock(x)
+#define spin_lock_irq(x)	spin_lock(x)
+#define spin_unlock_irq(x)	spin_unlock(x)
+#define spin_lock_irqsave(x, f) spin_lock_irq(x)
+#define spin_unlock_irqrestore(x, f) spin_unlock_irq(x)
+
+#define atomic_t                rte_atomic32_t
+#define atomic_read(v)          rte_atomic32_read(v)
+#define atomic_set(v, i)        rte_atomic32_set(v, i)
+
+#define atomic_inc(v)           rte_atomic32_add(v, 1)
+#define atomic_dec(v)           rte_atomic32_sub(v, 1)
+
+#define atomic_inc_and_test(v)  rte_atomic32_inc_and_test(v)
+#define atomic_dec_and_test(v)  rte_atomic32_dec_and_test(v)
+
+#define atomic_inc_return(v)    rte_atomic32_add_return(v, 1)
+#define atomic_dec_return(v)    rte_atomic32_sub_return(v, 1)
+#define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
+
+#include <dpaa_list.h>
+#include <dpaa_bits.h>
+
+#endif /* __COMPAT_H */
diff --git a/drivers/bus/dpaa/include/dpaa_bits.h b/drivers/bus/dpaa/include/dpaa_bits.h
new file mode 100644
index 0000000..e29019b
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_bits.h
@@ -0,0 +1,65 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_BITS_H
+#define __DPAA_BITS_H
+
+/* Bitfield stuff. */
+#define BITS_PER_ULONG	(sizeof(unsigned long) << 3)
+#define SHIFT_PER_ULONG	(((1 << 5) == BITS_PER_ULONG) ? 5 : 6)
+#define BITS_MASK(idx)	(1UL << ((idx) & (BITS_PER_ULONG - 1)))
+#define BITS_IDX(idx)	((idx) >> SHIFT_PER_ULONG)
+
+static inline void dpaa_set_bits(unsigned long mask,
+				 volatile unsigned long *p)
+{
+	*p |= mask;
+}
+
+static inline void dpaa_set_bit(int idx, volatile unsigned long *bits)
+{
+	dpaa_set_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+static inline void dpaa_clear_bits(unsigned long mask,
+				   volatile unsigned long *p)
+{
+	*p &= ~mask;
+}
+
+static inline void dpaa_clear_bit(int idx,
+				  volatile unsigned long *bits)
+{
+	dpaa_clear_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+#endif /* __DPAA_BITS_H */
diff --git a/drivers/bus/dpaa/include/dpaa_list.h b/drivers/bus/dpaa/include/dpaa_list.h
new file mode 100644
index 0000000..7ad0f14
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_list.h
@@ -0,0 +1,101 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_LIST_H
+#define __DPAA_LIST_H
+
+/****************/
+/* Linked-lists */
+/****************/
+
+struct list_head {
+	struct list_head *prev;
+	struct list_head *next;
+};
+
+#define COMPAT_LIST_HEAD(n) \
+struct list_head n = { \
+	.prev = &n, \
+	.next = &n \
+}
+
+#define INIT_LIST_HEAD(p) \
+do { \
+	struct list_head *__p298 = (p); \
+	__p298->next = __p298; \
+	__p298->prev = __p298->next; \
+} while (0)
+#define list_entry(node, type, member) \
+	(type *)((void *)node - offsetof(type, member))
+#define list_empty(p) \
+({ \
+	const struct list_head *__p298 = (p); \
+	((__p298->next == __p298) && (__p298->prev == __p298)); \
+})
+#define list_add(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->next = __l298->next; \
+	__p298->prev = __l298; \
+	__l298->next->prev = __p298; \
+	__l298->next = __p298; \
+} while (0)
+#define list_add_tail(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->prev = __l298->prev; \
+	__p298->next = __l298; \
+	__l298->prev->next = __p298; \
+	__l298->prev = __p298; \
+} while (0)
+#define list_for_each(i, l)				\
+	for (i = (l)->next; i != (l); i = i->next)
+#define list_for_each_safe(i, j, l)			\
+	for (i = (l)->next, j = i->next; i != (l);	\
+	     i = j, j = i->next)
+#define list_for_each_entry(i, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name); &i->name != (l); \
+		i = list_entry(i->name.next, typeof(*i), name))
+#define list_for_each_entry_safe(i, j, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name), \
+		j = list_entry(i->name.next, typeof(*j), name); \
+		&i->name != (l); \
+		i = j, j = list_entry(j->name.next, typeof(*j), name))
+#define list_del(i) \
+do { \
+	(i)->next->prev = (i)->prev; \
+	(i)->prev->next = (i)->next; \
+} while (0)
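+
+/*
+ * Usage sketch (the struct and consume() are hypothetical): embed a
+ * list_head in the containing structure and iterate with
+ * list_for_each_entry():
+ *
+ *	struct foo { int x; struct list_head node; };
+ *	COMPAT_LIST_HEAD(foo_list);
+ *	struct foo *f;
+ *
+ *	list_for_each_entry(f, &foo_list, node)
+ *		consume(f->x);
+ */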
+
+#endif /* __DPAA_LIST_H */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 05/38] bus/dpaa: add OF parser for device scanning
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (3 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 04/38] bus/dpaa: add compatibility and helper macros Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 06/38] bus/dpaa: introducing FMan configurations Shreyansh Jain
                   ` (33 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This layer is used by the Bus driver's scan function. Devices are
parsed using the OF parser and added to the DPAA device list.
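
The following sketch (an illustration only, not part of the patch)
shows how a consumer is expected to drive this layer; the
"fsl,dpa-ethernet-init" compatible string and the printf() are just
example choices:

	if (of_init() == 0) {
		const struct device_node *dev_node;
		size_t len;

		for_each_compatible_node(dev_node, NULL,
					 "fsl,dpa-ethernet-init") {
			if (!of_device_is_available(dev_node))
				continue;
			if (of_get_property(dev_node, "fsl,fman-mac", &len))
				printf("usable i/f: %s\n",
				       dev_node->full_name);
		}
		of_finish();
	}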

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile       |   7 +
 drivers/bus/dpaa/base/fman/of.c | 576 ++++++++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/of.h   | 191 +++++++++++++
 3 files changed, 774 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/include/of.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index f44f3c4..cc685d1 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -45,7 +45,12 @@ CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 endif
 
+CFLAGS +=-Wno-pointer-arith
+CFLAGS +=-Wno-cast-qual
+CFLAGS += -D _GNU_SOURCE
+
 CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_BUS_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -59,5 +64,7 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/of.c \
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/fman/of.c b/drivers/bus/dpaa/base/fman/of.c
new file mode 100644
index 0000000..6cc3987
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/of.c
@@ -0,0 +1,576 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <of.h>
+#include <rte_dpaa_logs.h>
+
+static int alive;
+static struct dt_dir root_dir;
+static const char *base_dir;
+static COMPAT_LIST_HEAD(linear);
+
+static int
+of_open_dir(const char *relative_path, struct dirent ***d)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = scandir(full_path, d, 0, versionsort);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open directory %s",
+			     full_path);
+	return ret;
+}
+
+static void
+of_close_dir(struct dirent **d, int num)
+{
+	while (num--)
+		free(d[num]);
+	free(d);
+}
+
+static int
+of_open_file(const char *relative_path)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = open(full_path, O_RDONLY);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open directory %s",
+			     full_path);
+	return ret;
+}
+
+static void
+process_file(struct dirent *dent, struct dt_dir *parent)
+{
+	int fd;
+	struct dt_file *f = malloc(sizeof(*f));
+
+	if (!f) {
+		DPAA_BUS_LOG(DEBUG, "Unable to allocate memory for file node");
+		return;
+	}
+	f->node.is_file = 1;
+	snprintf(f->node.node.name, NAME_MAX, "%s", dent->d_name);
+	snprintf(f->node.node.full_name, PATH_MAX, "%s/%s",
+		 parent->node.node.full_name, dent->d_name);
+	f->parent = parent;
+	fd = of_open_file(f->node.node.full_name);
+	if (fd < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to open file node");
+		free(f);
+		return;
+	}
+	f->len = read(fd, f->buf, OF_FILE_BUF_MAX);
+	close(fd);
+	if (f->len < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to read file node");
+		free(f);
+		return;
+	}
+	list_add_tail(&f->node.list, &parent->files);
+}
+
+static const struct dt_dir *
+node2dir(const struct device_node *n)
+{
+	struct dt_node *dn = container_of((struct device_node *)n,
+					  struct dt_node, node);
+	const struct dt_dir *d = container_of(dn, struct dt_dir, node);
+
+	assert(!dn->is_file);
+	return d;
+}
+
+/* process_dir() calls iterate_dir(), but the latter will also call the former
+ * when recursing into sub-directories, so a predeclaration is needed.
+ */
+static int process_dir(const char *relative_path, struct dt_dir *dt);
+
+static int
+iterate_dir(struct dirent **d, int num, struct dt_dir *dt)
+{
+	int loop;
+	/* Iterate the directory contents */
+	for (loop = 0; loop < num; loop++) {
+		struct dt_dir *subdir;
+		int ret;
+		/* Ignore dot files of all types (especially "..") */
+		if (d[loop]->d_name[0] == '.')
+			continue;
+		switch (d[loop]->d_type) {
+		case DT_REG:
+			process_file(d[loop], dt);
+			break;
+		case DT_DIR:
+			subdir = malloc(sizeof(*subdir));
+			if (!subdir) {
+				perror("malloc");
+				return -ENOMEM;
+			}
+			snprintf(subdir->node.node.name, NAME_MAX, "%s",
+				 d[loop]->d_name);
+			snprintf(subdir->node.node.full_name, PATH_MAX,
+				 "%s/%s", dt->node.node.full_name,
+				 d[loop]->d_name);
+			subdir->parent = dt;
+			ret = process_dir(subdir->node.node.full_name, subdir);
+			if (ret)
+				return ret;
+			list_add_tail(&subdir->node.list, &dt->subdirs);
+			break;
+		default:
+			DPAA_BUS_LOG(DEBUG, "Ignoring invalid dt entry %s/%s",
+				     dt->node.node.full_name, d[loop]->d_name);
+		}
+	}
+	return 0;
+}
+
+static int
+process_dir(const char *relative_path, struct dt_dir *dt)
+{
+	struct dirent **d;
+	int ret, num;
+
+	dt->node.is_file = 0;
+	INIT_LIST_HEAD(&dt->subdirs);
+	INIT_LIST_HEAD(&dt->files);
+	ret = of_open_dir(relative_path, &d);
+	if (ret < 0)
+		return ret;
+	num = ret;
+	ret = iterate_dir(d, num, dt);
+	of_close_dir(d, num);
+	return (ret < 0) ? ret : 0;
+}
+
+static void
+linear_dir(struct dt_dir *d)
+{
+	struct dt_file *f;
+	struct dt_dir *dd;
+
+	d->compatible = NULL;
+	d->status = NULL;
+	d->lphandle = NULL;
+	d->a_cells = NULL;
+	d->s_cells = NULL;
+	d->reg = NULL;
+	list_for_each_entry(f, &d->files, node.list) {
+		if (!strcmp(f->node.node.name, "compatible")) {
+			if (d->compatible)
+				DPAA_BUS_LOG(DEBUG, "Duplicate compatible in"
+					     " %s", d->node.node.full_name);
+			d->compatible = f;
+		} else if (!strcmp(f->node.node.name, "status")) {
+			if (d->status)
+				DPAA_BUS_LOG(DEBUG, "Duplicate status in %s",
+					     d->node.node.full_name);
+			d->status = f;
+		} else if (!strcmp(f->node.node.name, "linux,phandle")) {
+			if (d->lphandle)
+				DPAA_BUS_LOG(DEBUG, "Duplicate lphandle in %s",
+					     d->node.node.full_name);
+			d->lphandle = f;
+		} else if (!strcmp(f->node.node.name, "#address-cells")) {
+			if (d->a_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate a_cells in %s",
+					     d->node.node.full_name);
+			d->a_cells = f;
+		} else if (!strcmp(f->node.node.name, "#size-cells")) {
+			if (d->s_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate s_cells in %s",
+					     d->node.node.full_name);
+			d->s_cells = f;
+		} else if (!strcmp(f->node.node.name, "reg")) {
+			if (d->reg)
+				DPAA_BUS_LOG(DEBUG, "Duplicate reg in %s",
+					     d->node.node.full_name);
+			d->reg = f;
+		}
+	}
+
+	list_for_each_entry(dd, &d->subdirs, node.list) {
+		list_add_tail(&dd->linear, &linear);
+		linear_dir(dd);
+	}
+}
+
+int
+of_init_path(const char *dt_path)
+{
+	int ret;
+
+	base_dir = dt_path;
+
+	/* This needs to be singleton initialization */
+	DPAA_BUS_WARN(alive, "Double-init of device-tree driver!");
+
+	/* Prepare root node (the remaining fields are set in process_dir()) */
+	root_dir.node.node.name[0] = '\0';
+	root_dir.node.node.full_name[0] = '\0';
+	INIT_LIST_HEAD(&root_dir.node.list);
+	root_dir.parent = NULL;
+
+	/* Kick things off... */
+	ret = process_dir("", &root_dir);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to parse device tree");
+		return ret;
+	}
+
+	/* Now make a flat, linear list of directories */
+	linear_dir(&root_dir);
+	alive = 1;
+	return 0;
+}
+
+static void
+destroy_dir(struct dt_dir *d)
+{
+	struct dt_file *f, *tmpf;
+	struct dt_dir *dd, *tmpd;
+
+	list_for_each_entry_safe(f, tmpf, &d->files, node.list) {
+		list_del(&f->node.list);
+		free(f);
+	}
+	list_for_each_entry_safe(dd, tmpd, &d->subdirs, node.list) {
+		destroy_dir(dd);
+		list_del(&dd->node.list);
+		free(dd);
+	}
+}
+
+void
+of_finish(void)
+{
+	DPAA_BUS_WARN(!alive, "Double-finish of device-tree driver!");
+
+	destroy_dir(&root_dir);
+	INIT_LIST_HEAD(&linear);
+	alive = 0;
+}
+
+static const struct dt_dir *
+next_linear(const struct dt_dir *f)
+{
+	if (f->linear.next == &linear)
+		return NULL;
+	return list_entry(f->linear.next, struct dt_dir, linear);
+}
+
+static int
+check_compatible(const struct dt_file *f, const char *compatible)
+{
+	const char *c = (char *)f->buf;
+	unsigned int len, remains = f->len;
+
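+	/* The "compatible" property is a list of NUL-terminated strings;
+	 * compare each in turn until a match or the buffer is exhausted.
+	 */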
+	while (remains) {
+		len = strlen(c);
+		if (!strcmp(c, compatible))
+			return 1;
+
+		if (remains < len + 1)
+			break;
+
+		c += (len + 1);
+		remains -= (len + 1);
+	}
+	return 0;
+}
+
+const struct device_node *
+of_find_compatible_node(const struct device_node *from,
+			const char *type __always_unused,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+
+	if (list_empty(&linear))
+		return NULL;
+	if (!from)
+		d = list_entry(linear.next, struct dt_dir, linear);
+	else
+		d = node2dir(from);
+	for (d = next_linear(d); d && (!d->compatible ||
+				       !check_compatible(d->compatible,
+				       compatible));
+			d = next_linear(d))
+		;
+	if (d)
+		return &d->node.node;
+	return NULL;
+}
+
+const void *
+of_get_property(const struct device_node *from, const char *name,
+		size_t *lenp)
+{
+	const struct dt_dir *d;
+	const struct dt_file *f;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+
+	d = node2dir(from);
+	list_for_each_entry(f, &d->files, node.list)
+		if (!strcmp(f->node.node.name, name)) {
+			if (lenp)
+				*lenp = f->len;
+			return f->buf;
+		}
+	return NULL;
+}
+
+bool
+of_device_is_available(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+	d = node2dir(dev_node);
+	if (!d->status)
+		return true;
+	if (!strcmp((char *)d->status->buf, "okay"))
+		return true;
+	if (!strcmp((char *)d->status->buf, "ok"))
+		return true;
+	return false;
+}
+
+const struct device_node *
+of_find_node_by_phandle(phandle ph)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+	list_for_each_entry(d, &linear, linear)
+		if (d->lphandle && (d->lphandle->len == 4) &&
+		    !memcmp(d->lphandle->buf, &ph, 4))
+			return &d->node.node;
+	return NULL;
+}
+
+const struct device_node *
+of_get_parent(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	d = node2dir(dev_node);
+	if (!d->parent)
+		return NULL;
+	return &d->parent->node.node;
+}
+
+const struct device_node *
+of_get_next_child(const struct device_node *dev_node,
+		  const struct device_node *prev)
+{
+	const struct dt_dir *p, *c;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	p = node2dir(dev_node);
+	if (prev) {
+		c = node2dir(prev);
+		DPAA_BUS_WARN((c->parent != p), "Parent/child mismatch");
+		if (c->parent != p)
+			return NULL;
+		if (c->node.list.next == &p->subdirs)
+			/* prev was the last child */
+			return NULL;
+		c = list_entry(c->node.list.next, struct dt_dir, node.list);
+		return &c->node.node;
+	}
+	/* Return first child */
+	if (list_empty(&p->subdirs))
+		return NULL;
+	c = list_entry(p->subdirs.next, struct dt_dir, node.list);
+	return &c->node.node;
+}
+
+uint32_t
+of_n_addr_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised");
+	if (!dev_node)
+		return OF_DEFAULT_NA;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->a_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->a_cells->buf[0];
+			assert(d->a_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NA;
+}
+
+uint32_t
+of_n_size_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		return OF_DEFAULT_NS;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->s_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->s_cells->buf[0];
+			assert(d->s_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NS;
+}
+
+const uint32_t *
+of_get_address(const struct device_node *dev_node, size_t idx,
+	       uint64_t *size, uint32_t *flags __rte_unused)
+{
+	const struct dt_dir *d;
+	const unsigned char *buf;
+	uint32_t na = of_n_addr_cells(dev_node);
+	uint32_t ns = of_n_size_cells(dev_node);
+
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (!d->reg)
+		return NULL;
+	assert(d->reg->len % ((na + ns) * 4) == 0);
+	assert(d->reg->len / ((na + ns) * 4) > (unsigned int) idx);
+	buf = (const unsigned char *)&d->reg->buf[0];
+	buf += (na + ns) * idx * 4;
+	if (size)
+		for (*size = 0; ns > 0; ns--, na++)
+			*size = (*size << 32) +
+				(((uint32_t)buf[4 * na] << 24) |
+				((uint32_t)buf[4 * na + 1] << 16) |
+				((uint32_t)buf[4 * na + 2] << 8) |
+				(uint32_t)buf[4 * na + 3]);
+	return (const uint32_t *)buf;
+}
+
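+/* Translate a local "reg" address into a physical address by walking up
+ * the tree and adding each parent's "ranges" offset. An empty "ranges"
+ * means a 1:1 mapping; a missing "ranges" makes translation fail.
+ */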
+uint64_t
+of_translate_address(const struct device_node *dev_node,
+		     const uint32_t *addr)
+{
+	uint64_t phys_addr, tmp_addr;
+	const struct device_node *parent;
+	const uint32_t *ranges;
+	size_t rlen;
+	uint32_t na, pna;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+	assert(dev_node != NULL);
+
+	na = of_n_addr_cells(dev_node);
+	phys_addr = of_read_number(addr, na);
+
+	dev_node = of_get_parent(dev_node);
+	if (!dev_node)
+		return 0;
+	else if (node2dir(dev_node) == &root_dir)
+		return phys_addr;
+
+	do {
+		pna = of_n_addr_cells(dev_node);
+		parent = of_get_parent(dev_node);
+		if (!parent)
+			return 0;
+
+		ranges = of_get_property(dev_node, "ranges", &rlen);
+		/* "ranges" property is missing. Translation breaks */
+		if (!ranges)
+			return 0;
+		/* "ranges" property is empty. Do 1:1 translation */
+		else if (rlen == 0)
+			continue;
+		else
+			tmp_addr = of_read_number(ranges + na, pna);
+
+		na = pna;
+		dev_node = parent;
+		phys_addr += tmp_addr;
+	} while (node2dir(parent) != &root_dir);
+
+	return phys_addr;
+}
+
+bool
+of_device_is_compatible(const struct device_node *dev_node,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (d->compatible && check_compatible(d->compatible, compatible))
+		return true;
+	return false;
+}
diff --git a/drivers/bus/dpaa/include/of.h b/drivers/bus/dpaa/include/of.h
new file mode 100644
index 0000000..e422a53
--- /dev/null
+++ b/drivers/bus/dpaa/include/of.h
@@ -0,0 +1,191 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor, Inc.
+ * Copyright 2017 NXP.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __OF_H
+#define	__OF_H
+
+#include <compat.h>
+
+#ifndef OF_INIT_DEFAULT_PATH
+#define OF_INIT_DEFAULT_PATH "/proc/device-tree"
+#endif
+
+#define OF_DEFAULT_NA 1
+#define OF_DEFAULT_NS 1
+
+#define OF_FILE_BUF_MAX 256
+
+/**
+ * Layout of Device Tree:
+ * dt_dir
+ *  |- dt_dir
+ *  |   |- dt_dir
+ *  |   |  |- dt_dir
+ *  |   |  |  |- dt_file
+ *  |   |  |  ``- dt_file
+ *  |   |  ``- dt_file
+ *  |   ``- dt_file
+ *  ``- dt_file
+ *
+ *  +------------------+
+ *  |dt_dir            |
+ *  |+----------------+|
+ *  ||dt_node         ||
+ *  ||+--------------+||
+ *  |||device_node   |||
+ *  ||+--------------+||
+ *  || list_dt_nodes  ||
+ *  |+----------------+|
+ *  | list of subdir   |
+ *  | list of files    |
+ *  +------------------+
+ */
+
+/**
+ * Description of a device node in the device tree.
+ */
+struct device_node {
+	char name[NAME_MAX];
+	char full_name[PATH_MAX];
+};
+
+/**
+ * A single node (file or directory) in the device-tree layout
+ */
+struct dt_node {
+	struct device_node node; /**< Property of node */
+	int is_file; /**< FALSE==dir, TRUE==file */
+	struct list_head list; /**< Nodes within a parent subdir */
+};
+
+/**
+ * Types we use to represent directories and files
+ */
+struct dt_file;
+struct dt_dir {
+	struct dt_node node;
+	struct list_head subdirs;
+	struct list_head files;
+	struct list_head linear;
+	struct dt_dir *parent;
+	struct dt_file *compatible;
+	struct dt_file *status;
+	struct dt_file *lphandle;
+	struct dt_file *a_cells;
+	struct dt_file *s_cells;
+	struct dt_file *reg;
+};
+
+struct dt_file {
+	struct dt_node node;
+	struct dt_dir *parent;
+	ssize_t len;
+	uint64_t buf[OF_FILE_BUF_MAX >> 3];
+	/**< uint64_t elements keep the property data 8-byte aligned */
+};
+
+const struct device_node *of_find_compatible_node(
+					const struct device_node *from,
+					const char *type __always_unused,
+					const char *compatible)
+	__attribute__((nonnull(3)));
+
+#define for_each_compatible_node(dev_node, type, compatible) \
+	for (dev_node = of_find_compatible_node(NULL, type, compatible); \
+		dev_node != NULL; \
+		dev_node = of_find_compatible_node(dev_node, type, compatible))
+
+const void *of_get_property(const struct device_node *from, const char *name,
+			    size_t *lenp) __attribute__((nonnull(2)));
+bool of_device_is_available(const struct device_node *dev_node);
+
+const struct device_node *of_find_node_by_phandle(phandle ph);
+
+const struct device_node *of_get_parent(const struct device_node *dev_node);
+
+const struct device_node *of_get_next_child(const struct device_node *dev_node,
+					    const struct device_node *prev);
+
+#define for_each_child_node(parent, child) \
+	for (child = of_get_next_child(parent, NULL); child != NULL; \
+			child = of_get_next_child(parent, child))
+
+uint32_t of_n_addr_cells(const struct device_node *dev_node);
+uint32_t of_n_size_cells(const struct device_node *dev_node);
+
+const uint32_t *of_get_address(const struct device_node *dev_node, size_t idx,
+			       uint64_t *size, uint32_t *flags);
+
+uint64_t of_translate_address(const struct device_node *dev_node,
+			      const uint32_t *addr) __attribute__((nonnull));
+
+bool of_device_is_compatible(const struct device_node *dev_node,
+			     const char *compatible);
+
+/* of_init() must be called prior to initialisation or use of any driver
+ * subsystem that is device-tree-dependent, e.g. QMan/BMan, config layers.
+ * The path should usually be "/proc/device-tree".
+ */
+int of_init_path(const char *dt_path);
+
+/* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
+ * full reload is desired without a process exit.
+ */
+void of_finish(void);
+
+/* Use of this wrapper is recommended. */
+static inline int of_init(void)
+{
+	return of_init_path(OF_INIT_DEFAULT_PATH);
+}
+
+/* Read a numeric property according to its size and return it as a 64-bit
+ * value.
+ */
+static inline uint64_t of_read_number(const __be32 *cell, int size)
+{
+	uint64_t r = 0;
+
+	while (size--)
+		r = (r << 32) | be32toh(*(cell++));
+	return r;
+}
+
+#endif	/*  __OF_H */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 06/38] bus/dpaa: introducing FMan configurations
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (4 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 05/38] bus/dpaa: add OF parser for device scanning Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 07/38] bus/dpaa: add FMan hardware operations Shreyansh Jain
                   ` (32 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

FMan, or Frame Manager, inspects traffic and splits it into queues on
ingress. It is also responsible for directing traffic onto queues on
egress.

This patch introduces the FMan configuration interfaces. This layer is
used by the Bus driver for configuring the hardware block.
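
A minimal usage sketch of the interfaces added here (use_port() is a
hypothetical consumer):

	struct netcfg_info *cfg = netcfg_acquire();
	int i;

	if (cfg == NULL)
		return -1;
	for (i = 0; i < cfg->num_ethports; i++) {
		struct fm_eth_port_cfg *port = &cfg->port_cfg[i];

		/* fman_if carries MAC type/address, FQIDs and bpool list */
		use_port(port);
	}
	netcfg_release(cfg);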

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   2 +
 drivers/bus/dpaa/base/fman/fman.c         | 537 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c | 205 ++++++++++++
 drivers/bus/dpaa/include/fman.h           | 472 ++++++++++++++++++++++++++
 drivers/bus/dpaa/include/netcfg.h         |  96 ++++++
 5 files changed, 1312 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index cc685d1..49abdc7 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -65,6 +65,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/fman.c \
 	base/fman/of.c \
+	base/fman/netcfg_layer.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
new file mode 100644
index 0000000..b579b58
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -0,0 +1,537 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+
+#include <rte_malloc.h>
+
+/* This header declares the driver interface we implement */
+#include <fman.h>
+#include <of.h>
+
+#define QMI_PORT_REGS_OFFSET		0x400
+
+/* CCSR map address to access ccsr based register */
+void *fman_ccsr_map;
+/* fman version info */
+u16 fman_ip_rev;
+static int get_once;
+u32 fman_dealloc_bufs_mask_hi;
+u32 fman_dealloc_bufs_mask_lo;
+
+int fman_ccsr_map_fd = -1;
+static COMPAT_LIST_HEAD(__ifs);
+
+/* This is the (const) global variable that callers have read-only access to.
+ * Internally, we have read-write access directly to __ifs.
+ */
+const struct list_head *fman_if_list = &__ifs;
+
+static void
+if_destructor(struct __fman_if *__if)
+{
+	struct fman_if_bpool *bp, *tmpbp;
+
+	if (__if->__if.mac_type == fman_offline)
+		goto cleanup;
+
+	list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+cleanup:
+	rte_free(__if);
+}
+
+static int
+fman_get_ip_rev(const struct device_node *fman_node)
+{
+	const uint32_t *fman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	uint32_t ip_rev_1;
+	int _errno;
+
+	fman_addr = of_get_address(fman_node, 0, &regs_size, NULL);
+	if (!fman_addr) {
+		pr_err("of_get_address cannot return fman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(fman_node, fman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+	fman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, fman_ccsr_map_fd, phys_addr);
+	if (fman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map FMan ccsr base");
+		return -EINVAL;
+	}
+
+	ip_rev_1 = in_be32(fman_ccsr_map + FMAN_IP_REV_1);
+	fman_ip_rev = (ip_rev_1 & FMAN_IP_REV_1_MAJOR_MASK) >>
+			FMAN_IP_REV_1_MAJOR_SHIFT;
+
+	_errno = munmap(fman_ccsr_map, regs_size);
+	if (_errno)
+		pr_err("munmap() of FMan ccsr failed");
+
+	return 0;
+}
+
+static int
+fman_get_mac_index(uint64_t regs_addr_host, uint8_t *mac_idx)
+{
+	int ret = 0;
+
+	/*
+	 * MAC1 : E_0000h
+	 * MAC2 : E_2000h
+	 * MAC3 : E_4000h
+	 * MAC4 : E_6000h
+	 * MAC5 : E_8000h
+	 * MAC6 : E_A000h
+	 * MAC7 : E_C000h
+	 * MAC8 : E_E000h
+	 * MAC9 : F_0000h
+	 * MAC10: F_2000h
+	 */
+	switch (regs_addr_host) {
+	case 0xE0000:
+		*mac_idx = 1;
+		break;
+	case 0xE2000:
+		*mac_idx = 2;
+		break;
+	case 0xE4000:
+		*mac_idx = 3;
+		break;
+	case 0xE6000:
+		*mac_idx = 4;
+		break;
+	case 0xE8000:
+		*mac_idx = 5;
+		break;
+	case 0xEA000:
+		*mac_idx = 6;
+		break;
+	case 0xEC000:
+		*mac_idx = 7;
+		break;
+	case 0xEE000:
+		*mac_idx = 8;
+		break;
+	case 0xF0000:
+		*mac_idx = 9;
+		break;
+	case 0xF2000:
+		*mac_idx = 10;
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
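+/* Parse a single "fsl,dpa-ethernet-init" node: map the MAC and Rx-port
+ * register regions, extract the Rx/Tx frame-queue IDs and the buffer-pool
+ * list, then append the resulting interface object to __ifs.
+ */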
+static int
+fman_if_init(const struct device_node *dpa_node)
+{
+	const char *rprop, *mprop;
+	uint64_t phys_addr;
+	struct __fman_if *__if;
+	struct fman_if_bpool *bpool;
+
+	const phandle *mac_phandle, *ports_phandle, *pools_phandle;
+	const phandle *tx_channel_id = NULL, *mac_addr, *cell_idx;
+	const phandle *rx_phandle, *tx_phandle;
+	uint64_t tx_phandle_host[4] = {0};
+	uint64_t rx_phandle_host[4] = {0};
+	uint64_t regs_addr_host = 0;
+	uint64_t cell_idx_host = 0;
+
+	const struct device_node *mac_node = NULL, *tx_node;
+	const struct device_node *pool_node, *fman_node, *rx_node;
+	const uint32_t *regs_addr = NULL;
+	const char *mname, *fname;
+	const char *dname = dpa_node->full_name;
+	size_t lenp;
+	int _errno;
+	const char *char_prop;
+	uint32_t na;
+
+	if (of_device_is_available(dpa_node) == false)
+		return 0;
+
+	rprop = "fsl,qman-frame-queues-rx";
+	mprop = "fsl,fman-mac";
+
+	/* Allocate an object for this network interface */
+	__if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE);
+	FMAN_ERR(!__if, -ENOMEM, "malloc(%zu)\n", sizeof(*__if));
+	memset(__if, 0, sizeof(*__if));
+	INIT_LIST_HEAD(&__if->__if.bpool_list);
+	strncpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
+	__if->node_path[PATH_MAX - 1] = '\0';
+
+	/* TODO: this needs to be revisited */
+	/* Obtain the MAC node used by this interface except macless */
+	mac_phandle = of_get_property(dpa_node, mprop, &lenp);
+	FMAN_ERR(!mac_phandle, -EINVAL, "%s: no %s\n", dname, mprop);
+	assert(lenp == sizeof(phandle));
+	mac_node = of_find_node_by_phandle(*mac_phandle);
+	FMAN_ERR(!mac_node, -ENXIO, "%s: bad 'fsl,fman-mac'\n", dname);
+	mname = mac_node->full_name;
+
+	/* Map the CCSR regs for the MAC node */
+	regs_addr = of_get_address(mac_node, 0, &__if->regs_size, NULL);
+	FMAN_ERR(!regs_addr, -EINVAL, "of_get_address(%s)\n", mname);
+	phys_addr = of_translate_address(mac_node, regs_addr);
+	FMAN_ERR(!phys_addr, -EINVAL, "of_translate_address(%s, %p)\n",
+		mname, regs_addr);
+	__if->ccsr_map = mmap(NULL, __if->regs_size,
+			      PROT_READ | PROT_WRITE, MAP_SHARED,
+			      fman_ccsr_map_fd, phys_addr);
+	FMAN_ERR(__if->ccsr_map == MAP_FAILED, -errno,
+		"mmap(0x%"PRIx64")\n", phys_addr);
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	regs_addr_host = of_read_number(regs_addr, na);
+
+	/* Get the index of the Fman this i/f belongs to */
+	fman_node = of_get_parent(mac_node);
+	na = of_n_addr_cells(mac_node);
+	FMAN_ERR(!fman_node, -ENXIO, "of_get_parent(%s)\n", mname);
+	fname = fman_node->full_name;
+	cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+	FMAN_ERR(!cell_idx, -ENXIO, "%s: no cell-index)\n", fname);
+	assert(lenp == sizeof(*cell_idx));
+	cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+	__if->__if.fman_idx = cell_idx_host;
+	if (!get_once) {
+		_errno = fman_get_ip_rev(fman_node);
+		FMAN_ERR(_errno, -ENXIO, "%s: ip_rev is not available\n",
+		       fname);
+	}
+
+	if (fman_ip_rev >= FMAN_V3) {
+		/*
+		 * Set A2V, OVOM, EBD bits in contextA to allow external
+		 * buffer deallocation by fman.
+		 */
+		fman_dealloc_bufs_mask_hi = FMAN_V3_CONTEXTA_EN_A2V |
+						FMAN_V3_CONTEXTA_EN_OVOM;
+		fman_dealloc_bufs_mask_lo = FMAN_V3_CONTEXTA_EN_EBD;
+	} else {
+		fman_dealloc_bufs_mask_hi = 0;
+		fman_dealloc_bufs_mask_lo = 0;
+	}
+	/* Is the MAC node 1G, 10G? */
+	__if->__if.is_memac = 0;
+
+	if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
+		__if->__if.mac_type = fman_mac_1g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
+		__if->__if.mac_type = fman_mac_10g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-memac")) {
+		/* memac (multirate Ethernet MAC) supports both 1G and 10G */
+		__if->__if.is_memac = 1;
+		char_prop = of_get_property(mac_node, "phy-connection-type",
+					    NULL);
+		if (!char_prop) {
+			printf("memac: unknown MII type assuming 1G\n");
+			/* Right now forcing memac to 1g in case of error*/
+			__if->__if.mac_type = fman_mac_1g;
+		} else {
+			if (strstr(char_prop, "sgmii"))
+				__if->__if.mac_type = fman_mac_1g;
+			else if (strstr(char_prop, "rgmii")) {
+				__if->__if.mac_type = fman_mac_1g;
+				__if->__if.is_rgmii = 1;
+			} else if (strstr(char_prop, "xgmii"))
+				__if->__if.mac_type = fman_mac_10g;
+		}
+	} else
+		FMAN_ERR(1, -EINVAL, "%s: unknown MAC type\n", mname);
+
+	/*
+	 * For MAC ports, we cannot rely on cell-index. In
+	 * T2080, two of the 10G ports on single FMAN have same
+	 * duplicate cell-indexes as the other two 10G ports on
+	 * same FMAN. Hence, we now rely upon addresses of the
+	 * ports from device tree to deduce the index.
+	 */
+
+	_errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
+	FMAN_ERR(_errno, -EINVAL, "Invalid register address: 0x%"PRIx64"\n",
+		 regs_addr_host);
+
+	/* Extract the MAC address for private and shared interfaces */
+	mac_addr = of_get_property(mac_node, "local-mac-address",
+				   &lenp);
+	FMAN_ERR(!mac_addr, -EINVAL, "%s: no local-mac-address\n",
+	       mname);
+	memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
+
+	/* Extract the Tx port (it's the second of the two port handles)
+	 * and get its channel ID
+	 */
+	ports_phandle = of_get_property(mac_node, "fsl,port-handles",
+					&lenp);
+	FMAN_ERR(!ports_phandle, -EINVAL, "%s: no fsl,port-handles\n",
+	       mname);
+	assert(lenp == (2 * sizeof(phandle)));
+	tx_node = of_find_node_by_phandle(ports_phandle[1]);
+	FMAN_ERR(!tx_node, -ENXIO, "%s: bad fsl,port-handle[1]\n", mname);
+	/* Extract the channel ID (from tx-port-handle) */
+	tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
+					&lenp);
+	FMAN_ERR(!tx_channel_id, -EINVAL, "%s: no fsl-qman-channel-id\n",
+	       tx_node->full_name);
+
+	rx_node = of_find_node_by_phandle(ports_phandle[0]);
+	FMAN_ERR(!rx_node, -ENXIO, "%s: bad fsl,port-handle[0]\n", mname);
+	regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+	FMAN_ERR(!regs_addr, -EINVAL, "of_get_address(%s)\n", mname);
+	phys_addr = of_translate_address(rx_node, regs_addr);
+	FMAN_ERR(!phys_addr, -EINVAL, "of_translate_address(%s, %p)\n",
+	       mname, regs_addr);
+	__if->bmi_map = mmap(NULL, __if->regs_size,
+				 PROT_READ | PROT_WRITE, MAP_SHARED,
+				 fman_ccsr_map_fd, phys_addr);
+	FMAN_ERR(__if->bmi_map == MAP_FAILED, -errno,
+	       "mmap(0x%"PRIx64")\n", phys_addr);
+
+	/* No channel ID for MAC-less */
+	assert(lenp == sizeof(*tx_channel_id));
+	na = of_n_addr_cells(mac_node);
+	__if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+	/* Extract the Rx FQIDs. (Note, the device representation is silly,
+	 * there are "counts" that must always be 1.)
+	 */
+	rx_phandle = of_get_property(dpa_node, rprop, &lenp);
+	FMAN_ERR(!rx_phandle, -EINVAL, "%s: no fsl,qman-frame-queues-rx\n",
+	       dname);
+
+	assert(lenp == (4 * sizeof(phandle)));
+
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
+	rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
+	rx_phandle_host[2] = of_read_number(&rx_phandle[2], na);
+	rx_phandle_host[3] = of_read_number(&rx_phandle[3], na);
+
+	assert((rx_phandle_host[1] == 1) && (rx_phandle_host[3] == 1));
+	__if->__if.fqid_rx_err = rx_phandle_host[0];
+	__if->__if.fqid_rx_def = rx_phandle_host[2];
+
+	/* Extract the Tx FQIDs */
+	tx_phandle = of_get_property(dpa_node,
+				     "fsl,qman-frame-queues-tx", &lenp);
+	FMAN_ERR(!tx_phandle, -EINVAL, "%s: no fsl,qman-frame-queues-tx\n",
+	       dname);
+
+	assert(lenp == (4 * sizeof(phandle)));
+	/*TODO: Fix for other cases also */
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	tx_phandle_host[0] = of_read_number(&tx_phandle[0], na);
+	tx_phandle_host[1] = of_read_number(&tx_phandle[1], na);
+	tx_phandle_host[2] = of_read_number(&tx_phandle[2], na);
+	tx_phandle_host[3] = of_read_number(&tx_phandle[3], na);
+	assert((tx_phandle_host[1] == 1) && (tx_phandle_host[3] == 1));
+	__if->__if.fqid_tx_err = tx_phandle_host[0];
+	__if->__if.fqid_tx_confirm = tx_phandle_host[2];
+
+	/* Obtain the buffer pool nodes used by this interface */
+	pools_phandle = of_get_property(dpa_node, "fsl,bman-buffer-pools",
+					&lenp);
+	FMAN_ERR(!pools_phandle, -EINVAL, "%s: no fsl,bman-buffer-pools\n",
+	       dname);
+	/* For each pool, parse the corresponding node and add a pool object
+	 * to the interface's "bpool_list"
+	 */
+	assert(lenp && !(lenp % sizeof(phandle)));
+	while (lenp) {
+		size_t proplen;
+		const phandle *prop;
+		uint64_t bpid_host = 0;
+		uint64_t bpool_host[6] = {0};
+		const char *pname;
+		/* Allocate an object for the pool */
+		bpool = rte_malloc(NULL, sizeof(*bpool), RTE_CACHE_LINE_SIZE);
+		FMAN_ERR(!bpool, -ENOMEM, "malloc(%zu)\n", sizeof(*bpool));
+		/* Find the pool node */
+		pool_node = of_find_node_by_phandle(*pools_phandle);
+		FMAN_ERR(!pool_node, -ENXIO, "%s: bad fsl,bman-buffer-pools\n",
+		       dname);
+		pname = pool_node->full_name;
+		/* Extract the BPID property */
+		prop = of_get_property(pool_node, "fsl,bpid", &proplen);
+		FMAN_ERR(!prop, -EINVAL, "%s: no fsl,bpid\n", pname);
+		assert(proplen == sizeof(*prop));
+		na = of_n_addr_cells(mac_node);
+		/* Get rid of endianness (issues).
+		 * Convert to host byte-order
+		 */
+		bpid_host = of_read_number(prop, na);
+		bpool->bpid = bpid_host;
+		/* Extract the cfg property (count/size/addr). "fsl,bpool-cfg"
+		 * indicates for the Bman driver to seed the pool.
+		 * "fsl,bpool-ethernet-cfg" is used by the network driver. The
+		 * two are mutually exclusive, so check for either of them.
+		 */
+		prop = of_get_property(pool_node, "fsl,bpool-cfg",
+				       &proplen);
+		if (!prop)
+			prop = of_get_property(pool_node,
+					       "fsl,bpool-ethernet-cfg",
+					       &proplen);
+		if (!prop) {
+			/* It's OK for there to be no bpool-cfg */
+			bpool->count = bpool->size = bpool->addr = 0;
+		} else {
+			assert(proplen == (6 * sizeof(*prop)));
+			na = of_n_addr_cells(mac_node);
+			/* Get rid of endianness (issues).
+			 * Convert to host byte order
+			 */
+			bpool_host[0] = of_read_number(&prop[0], na);
+			bpool_host[1] = of_read_number(&prop[1], na);
+			bpool_host[2] = of_read_number(&prop[2], na);
+			bpool_host[3] = of_read_number(&prop[3], na);
+			bpool_host[4] = of_read_number(&prop[4], na);
+			bpool_host[5] = of_read_number(&prop[5], na);
+
+			bpool->count = ((uint64_t)bpool_host[0] << 32) |
+					bpool_host[1];
+			bpool->size = ((uint64_t)bpool_host[2] << 32) |
+					bpool_host[3];
+			bpool->addr = ((uint64_t)bpool_host[4] << 32) |
+					bpool_host[5];
+		}
+		/* Parsing of the pool is complete, add it to the interface
+		 * list.
+		 */
+		list_add_tail(&bpool->node, &__if->__if.bpool_list);
+		lenp -= sizeof(phandle);
+		pools_phandle++;
+	}
+
+	/* Parsing of the network interface is complete, add it to the list */
+	DPAA_BUS_LOG(DEBUG, "Found %s, Tx Channel = %x, FMAN = %x,"
+		    "Port ID = %x\n",
+		    dname, __if->__if.tx_channel_id, __if->__if.fman_idx,
+		    __if->__if.mac_idx);
+
+	list_add_tail(&__if->__if.node, &__ifs);
+	return 0;
+err:
+	if_destructor(__if);
+	return _errno;
+}
+
+int
+fman_init(void)
+{
+	const struct device_node *dpa_node;
+	int _errno;
+
+	/* If multiple dependencies try to initialise the Fman driver, don't
+	 * panic.
+	 */
+	if (fman_ccsr_map_fd != -1)
+		return 0;
+
+	fman_ccsr_map_fd = open(FMAN_DEVICE_PATH, O_RDWR);
+	if (unlikely(fman_ccsr_map_fd < 0)) {
+		DPAA_BUS_LOG(ERR, "Unable to open (/dev/mem)");
+		return fman_ccsr_map_fd;
+	}
+
+	for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-init") {
+		_errno = fman_if_init(dpa_node);
+		FMAN_ERR(_errno, _errno, "if_init(%s)\n", dpa_node->full_name);
+	}
+
+	return 0;
+err:
+	fman_finish();
+	return _errno;
+}
+
+void
+fman_finish(void)
+{
+	struct __fman_if *__if, *tmpif;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	list_for_each_entry_safe(__if, tmpif, &__ifs, __if.node) {
+		int _errno;
+
+		/* disable Rx and Tx */
+		if ((__if->__if.mac_type == fman_mac_1g) &&
+		    (!__if->__if.is_memac))
+			out_be32(__if->ccsr_map + 0x100,
+				 in_be32(__if->ccsr_map + 0x100) & ~(u32)0x5);
+		else
+			out_be32(__if->ccsr_map + 8,
+				 in_be32(__if->ccsr_map + 8) & ~(u32)3);
+		/* release the mapping */
+		_errno = munmap(__if->ccsr_map, __if->regs_size);
+		if (unlikely(_errno < 0))
+			fprintf(stderr, "%s:%hu:%s(): munmap() = %d (%s)\n",
+				__FILE__, __LINE__, __func__,
+				-errno, strerror(errno));
+		printf("Tearing down %s\n", __if->node_path);
+		list_del(&__if->__if.node);
+		rte_free(__if);
+	}
+
+	close(fman_ccsr_map_fd);
+	fman_ccsr_map_fd = -1;
+}
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
new file mode 100644
index 0000000..e3a0ced
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -0,0 +1,205 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <inttypes.h>
+#include <of.h>
+#include <net/if.h>
+#include <sys/ioctl.h>
+#include <error.h>
+#include <net/if_arp.h>
+#include <assert.h>
+#include <unistd.h>
+
+#include <rte_malloc.h>
+
+#include <rte_dpaa_logs.h>
+#include <netcfg.h>
+
+/* This structure contains information about all the interfaces given by
+ * the user on the command line.
+ */
+struct netcfg_interface *netcfg_interface;
+
+/* This data structure contains all configuration information related to
+ * the usage of DPAA devices.
+ */
+struct netcfg_info *netcfg;
+/* fd of a socket used for making ioctl requests to enable/disable shared
+ * interfaces.
+ */
+static int skfd = -1;
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dump_netcfg(struct netcfg_info *cfg_ptr)
+{
+	int i;
+
+	printf("..........  DPAA Configuration  ..........\n\n");
+
+	/* Network interfaces */
+	printf("Network interfaces: %d\n", cfg_ptr->num_ethports);
+	for (i = 0; i < cfg_ptr->num_ethports; i++) {
+		struct fman_if_bpool *bpool;
+		struct fm_eth_port_cfg *p_cfg = &cfg_ptr->port_cfg[i];
+		struct fman_if *__if = p_cfg->fman_if;
+
+		printf("\n+ Fman %d, MAC %d (%s);\n",
+		       __if->fman_idx, __if->mac_idx,
+		       (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+
+		printf("\tmac_addr: " ETH_MAC_PRINTF_FMT "\n",
+		       ETH_MAC_PRINTF_ARGS(&__if->mac_addr));
+
+		printf("\ttx_channel_id: 0x%02x\n",
+		       __if->tx_channel_id);
+
+		printf("\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
+		printf("\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
+
+		printf("\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
+		printf("\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
+		fman_if_for_each_bpool(bpool, __if)
+			printf("\tbuffer pool: (bpid=%d, count=%"PRId64
+			       " size=%"PRId64", addr=0x%"PRIx64")\n",
+			       bpool->bpid, bpool->count, bpool->size,
+			       bpool->addr);
+	}
+}
+#endif /* RTE_LIBRTE_DPAA_DEBUG_DRIVER */
+
+static inline int
+get_num_netcfg_interfaces(char *str)
+{
+	char *pch;
+	uint8_t count = 0;
+
+	if (str == NULL)
+		return -EINVAL;
+	pch = strtok(str, ",");
+	while (pch != NULL) {
+		count++;
+		pch = strtok(NULL, ",");
+	}
+	return count;
+}
+
+struct netcfg_info *
+netcfg_acquire(void)
+{
+	struct fman_if *__if;
+	int _errno, idx = 0;
+	uint8_t num_ports = 0;
+	uint8_t num_cfg_ports = 0;
+	size_t size;
+
+	/* Extract dpa configuration from fman driver and FMC configuration
+	 * for command-line interfaces.
+	 */
+
+	if (skfd == -1) {
+		/* Open a basic socket to enable/disable shared
+		 * interfaces.
+		 */
+		skfd = socket(AF_PACKET, SOCK_RAW, 0);
+		if (unlikely(skfd < 0)) {
+			/* TODO: convert to DPAA_BUS_LOG */
+			error(0, errno, "%s(): open(SOCK_RAW)", __func__);
+			return NULL;
+		}
+	}
+
+	/* Initialise the Fman driver */
+	_errno = fman_init();
+	if (_errno) {
+		DPAA_BUS_LOG(ERR, "FMAN driver init failed (%d)", errno);
+		return NULL;
+	}
+
+	/* Number of MAC ports */
+	list_for_each_entry(__if, fman_if_list, node)
+		num_ports++;
+
+	if (!num_ports) {
+		DPAA_BUS_LOG(ERR, "FMAN ports not available");
+		return NULL;
+	}
+	/* Allocate space for all enabled mac ports */
+	size = sizeof(*netcfg) +
+		(num_ports * sizeof(struct fm_eth_port_cfg));
+	netcfg = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
+	if (unlikely(netcfg == NULL)) {
+		DPAA_BUS_LOG(ERR, "Unable to allocat mem for netcfg");
+		goto error;
+	}
+
+	netcfg->num_ethports = num_ports;
+
+	list_for_each_entry(__if, fman_if_list, node) {
+		struct fm_eth_port_cfg *cfg = &netcfg->port_cfg[idx];
+		/* Hook in the fman driver interface */
+		cfg->fman_if = __if;
+		cfg->rx_def = __if->fqid_rx_def;
+		num_cfg_ports++;
+		idx++;
+	}
+
+	if (!num_cfg_ports) {
+		DPAA_BUS_LOG(ERR, "No FMAN ports found");
+		goto error;
+	} else if (num_ports != num_cfg_ports)
+		netcfg->num_ethports = num_cfg_ports;
+
+	return netcfg;
+
+error:
+	return NULL;
+}
+
+void
+netcfg_release(struct netcfg_info *cfg_ptr)
+{
+	rte_free(cfg_ptr);
+	/* Close socket for shared interfaces */
+	if (skfd >= 0) {
+		close(skfd);
+		skfd = -1;
+	}
+}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
new file mode 100644
index 0000000..19105bb
--- /dev/null
+++ b/drivers/bus/dpaa/include/fman.h
@@ -0,0 +1,472 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FMAN_H
+#define __FMAN_H
+
+#include <stdbool.h>
+#include <net/if.h>
+
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+
+#include <compat.h>
+#include <rte_dpaa_logs.h>
+
+#ifndef FMAN_DEVICE_PATH
+#define FMAN_DEVICE_PATH "/dev/mem"
+#endif
+
+#define MEMAC_NUM_OF_PADDRS 7 /* Num of additional exact match MAC addr regs */
+
+/* Control and Configuration Register (COMMAND_CONFIG) for MEMAC */
+#define CMD_CFG_LOOPBACK_EN	0x00000400
+/**< 21 XGMII/GMII loopback enable */
+#define CMD_CFG_PROMIS_EN	0x00000010
+/**< 27 Promiscuous operation enable */
+#define CMD_CFG_PAUSE_IGNORE	0x00000100
+/**< 23 Ignore Pause frame quanta */
+
+/* Statistics Configuration Register (STATN_CONFIG) */
+#define STATS_CFG_CLR           0x00000004
+/**< 29 Reset all counters */
+#define STATS_CFG_CLR_ON_RD     0x00000002
+/**< 30 Clear on read */
+#define STATS_CFG_SATURATE      0x00000001
+/**< 31 Saturate at the maximum val */
+
+/**< Max receive frame length mask */
+#define MAXFRM_SIZE_MEMAC	0x00007fe0
+#define MAXFRM_RX_MASK		0x0000ffff
+
+/**< Interface Mode Register for MEMAC */
+#define IF_MODE_RLP 0x00000820
+
+/**< Pool Limits */
+#define FMAN_PORT_MAX_EXT_POOLS_NUM	8
+#define FMAN_PORT_OBS_EXT_POOLS_NUM	2
+
+#define FMAN_PORT_CG_MAP_NUM		8
+#define FMAN_PORT_PRS_RESULT_WORDS_NUM	8
+#define FMAN_PORT_BMI_FIFO_UNITS	0x100
+#define FMAN_PORT_IC_OFFSET_UNITS	0x10
+
+#define FMAN_ENABLE_BPOOL_DEPLETION	0xF00000F0
+
+#define HASH_CTRL_MCAST_EN	0x00000100
+#define GROUP_ADDRESS		0x0000010000000000LL
+#define HASH_CTRL_ADDR_MASK	0x0000003F
+
+/* Forward declarations of FMAN interface and Bpool structures */
+struct __fman_if;
+struct fman_if_bpool;
+/* Lists of fman interfaces and bpools */
+TAILQ_HEAD(rte_fman_if_list, __fman_if);
+
+/* Represents the different flavour of network interface */
+enum fman_mac_type {
+	fman_offline = 0, /* TODO: check whether this can be removed */
+	fman_mac_1g,
+	fman_mac_10g,
+};
+
+struct mac_addr {
+	uint32_t   mac_addr_l;	/**< Lower 32 bits of 48-bit MAC address */
+	uint32_t   mac_addr_u;	/**< Upper 16 bits of 48-bit MAC address */
+};
+
+struct memac_regs {
+	/* General Control and Status */
+	uint32_t res0000[2];
+	uint32_t command_config;	/**< 0x008 Ctrl and cfg */
+	struct mac_addr mac_addr0;	/**< 0x00C-0x010 MAC_ADDR_0...1 */
+	uint32_t maxfrm;		/**< 0x014 Max frame length */
+	uint32_t res0018[5];
+	uint32_t hashtable_ctrl;	/**< 0x02C Hash table control */
+	uint32_t res0030[4];
+	uint32_t ievent;		/**< 0x040 Interrupt event */
+	uint32_t tx_ipg_length;
+	/**< 0x044 Transmitter inter-packet-gap */
+	uint32_t res0048;
+	uint32_t imask;			/**< 0x04C Interrupt mask */
+	uint32_t res0050;
+	uint32_t pause_quanta[4];	/**< 0x054 Pause quanta */
+	uint32_t pause_thresh[4];	/**< 0x064 Pause quanta threshold */
+	uint32_t rx_pause_status;	/**< 0x074 Receive pause status */
+	uint32_t res0078[2];
+	struct mac_addr mac_addr[MEMAC_NUM_OF_PADDRS];
+	/**< 0x80-0x0B4 mac padr */
+	uint32_t lpwake_timer;
+	/**< 0x0B8 Low Power Wakeup Timer */
+	uint32_t sleep_timer;
+	/**< 0x0BC Transmit EEE Low Power Timer */
+	uint32_t res00c0[8];
+	uint32_t statn_config;
+	/**< 0x0E0 Statistics configuration */
+	uint32_t res00e4[7];
+	/* Rx Statistics Counter */
+	uint32_t reoct_l;
+	uint32_t reoct_u;
+	uint32_t roct_l;
+	uint32_t roct_u;
+	uint32_t raln_l;
+	uint32_t raln_u;
+	uint32_t rxpf_l;
+	uint32_t rxpf_u;
+	uint32_t rfrm_l;
+	uint32_t rfrm_u;
+	uint32_t rfcs_l;
+	uint32_t rfcs_u;
+	uint32_t rvlan_l;
+	uint32_t rvlan_u;
+	uint32_t rerr_l;
+	uint32_t rerr_u;
+	uint32_t ruca_l;
+	uint32_t ruca_u;
+	uint32_t rmca_l;
+	uint32_t rmca_u;
+	uint32_t rbca_l;
+	uint32_t rbca_u;
+	uint32_t rdrp_l;
+	uint32_t rdrp_u;
+	uint32_t rpkt_l;
+	uint32_t rpkt_u;
+	uint32_t rund_l;
+	uint32_t rund_u;
+	uint32_t r64_l;
+	uint32_t r64_u;
+	uint32_t r127_l;
+	uint32_t r127_u;
+	uint32_t r255_l;
+	uint32_t r255_u;
+	uint32_t r511_l;
+	uint32_t r511_u;
+	uint32_t r1023_l;
+	uint32_t r1023_u;
+	uint32_t r1518_l;
+	uint32_t r1518_u;
+	uint32_t r1519x_l;
+	uint32_t r1519x_u;
+	uint32_t rovr_l;
+	uint32_t rovr_u;
+	uint32_t rjbr_l;
+	uint32_t rjbr_u;
+	uint32_t rfrg_l;
+	uint32_t rfrg_u;
+	uint32_t rcnp_l;
+	uint32_t rcnp_u;
+	uint32_t rdrntp_l;
+	uint32_t rdrntp_u;
+	uint32_t res01d0[12];
+	/* Tx Statistics Counter */
+	uint32_t teoct_l;
+	uint32_t teoct_u;
+	uint32_t toct_l;
+	uint32_t toct_u;
+	uint32_t res0210[2];
+	uint32_t txpf_l;
+	uint32_t txpf_u;
+	uint32_t tfrm_l;
+	uint32_t tfrm_u;
+	uint32_t tfcs_l;
+	uint32_t tfcs_u;
+	uint32_t tvlan_l;
+	uint32_t tvlan_u;
+	uint32_t terr_l;
+	uint32_t terr_u;
+	uint32_t tuca_l;
+	uint32_t tuca_u;
+	uint32_t tmca_l;
+	uint32_t tmca_u;
+	uint32_t tbca_l;
+	uint32_t tbca_u;
+	uint32_t res0258[2];
+	uint32_t tpkt_l;
+	uint32_t tpkt_u;
+	uint32_t tund_l;
+	uint32_t tund_u;
+	uint32_t t64_l;
+	uint32_t t64_u;
+	uint32_t t127_l;
+	uint32_t t127_u;
+	uint32_t t255_l;
+	uint32_t t255_u;
+	uint32_t t511_l;
+	uint32_t t511_u;
+	uint32_t t1023_l;
+	uint32_t t1023_u;
+	uint32_t t1518_l;
+	uint32_t t1518_u;
+	uint32_t t1519x_l;
+	uint32_t t1519x_u;
+	uint32_t res02a8[6];
+	uint32_t tcnp_l;
+	uint32_t tcnp_u;
+	uint32_t res02c8[14];
+	/* Line Interface Control */
+	uint32_t if_mode;		/**< 0x300 Interface Mode Control */
+	uint32_t if_status;		/**< 0x304 Interface Status */
+	uint32_t res0308[14];
+	/* HiGig/2 */
+	uint32_t hg_config;		/**< 0x340 Control and cfg */
+	uint32_t res0344[3];
+	uint32_t hg_pause_quanta;	/**< 0x350 Pause quanta */
+	uint32_t res0354[3];
+	uint32_t hg_pause_thresh;	/**< 0x360 Pause quanta threshold */
+	uint32_t res0364[3];
+	uint32_t hgrx_pause_status;	/**< 0x370 Receive pause status */
+	uint32_t hg_fifos_status;	/**< 0x374 fifos status */
+	uint32_t rhm;			/**< 0x378 rx messages counter */
+	uint32_t thm;			/**< 0x37C tx messages counter */
+};
+
+struct rx_bmi_regs {
+	uint32_t fmbm_rcfg;		/**< Rx Configuration */
+	uint32_t fmbm_rst;		/**< Rx Status */
+	uint32_t fmbm_rda;		/**< Rx DMA attributes*/
+	uint32_t fmbm_rfp;		/**< Rx FIFO Parameters*/
+	uint32_t fmbm_rfed;		/**< Rx Frame End Data*/
+	uint32_t fmbm_ricp;		/**< Rx Internal Context Parameters*/
+	uint32_t fmbm_rim;		/**< Rx Internal Buffer Margins*/
+	uint32_t fmbm_rebm;		/**< Rx External Buffer Margins*/
+	uint32_t fmbm_rfne;		/**< Rx Frame Next Engine*/
+	uint32_t fmbm_rfca;		/**< Rx Frame Command Attributes.*/
+	uint32_t fmbm_rfpne;		/**< Rx Frame Parser Next Engine*/
+	uint32_t fmbm_rpso;		/**< Rx Parse Start Offset*/
+	uint32_t fmbm_rpp;		/**< Rx Policer Profile  */
+	uint32_t fmbm_rccb;		/**< Rx Coarse Classification Base */
+	uint32_t fmbm_reth;		/**< Rx Excessive Threshold */
+	uint32_t reserved003c[1];	/**< (0x03C-0x03F) */
+	uint32_t fmbm_rprai[FMAN_PORT_PRS_RESULT_WORDS_NUM];
+					/**< Rx Parse Results Array Init*/
+	uint32_t fmbm_rfqid;		/**< Rx Frame Queue ID*/
+	uint32_t fmbm_refqid;		/**< Rx Error Frame Queue ID*/
+	uint32_t fmbm_rfsdm;		/**< Rx Frame Status Discard Mask*/
+	uint32_t fmbm_rfsem;		/**< Rx Frame Status Error Mask*/
+	uint32_t fmbm_rfene;		/**< Rx Frame Enqueue Next Engine */
+	uint32_t reserved0074[0x2];	/**< (0x074-0x07C)  */
+	uint32_t fmbm_rcmne;
+	/**< Rx Frame Continuous Mode Next Engine */
+	uint32_t reserved0080[0x20];	/**< (0x080-0x0FF) */
+	uint32_t fmbm_ebmpi[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Buffer Manager pool Information */
+	uint32_t fmbm_acnt[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Allocate Counter */
+	uint32_t reserved0130[8];
+					/**< 0x130/0x140 - 0x15F reserved */
+	uint32_t fmbm_rcgm[FMAN_PORT_CG_MAP_NUM];
+					/**< Congestion Group Map*/
+	uint32_t fmbm_mpd;		/**< BM Pool Depletion  */
+	uint32_t reserved0184[0x1F];	/**< (0x184-0x1FF) */
+	uint32_t fmbm_rstc;		/**< Rx Statistics Counters*/
+	uint32_t fmbm_rfrc;		/**< Rx Frame Counter*/
+	uint32_t fmbm_rfbc;		/**< Rx Bad Frames Counter*/
+	uint32_t fmbm_rlfc;		/**< Rx Large Frames Counter*/
+	uint32_t fmbm_rffc;		/**< Rx Filter Frames Counter*/
+	uint32_t fmbm_rfdc;		/**< Rx Frame Discard Counter*/
+	uint32_t fmbm_rfldec;		/**< Rx Frames List DMA Error Counter*/
+	uint32_t fmbm_rodc;		/**< Rx Out of Buffers Discard Counter*/
+	uint32_t fmbm_rbdc;		/**< Rx Buffers Deallocate Counter*/
+	uint32_t reserved0224[0x17];	/**< (0x224-0x27F) */
+	uint32_t fmbm_rpc;		/**< Rx Performance Counters*/
+	uint32_t fmbm_rpcp;		/**< Rx Performance Count Parameters*/
+	uint32_t fmbm_rccn;		/**< Rx Cycle Counter*/
+	uint32_t fmbm_rtuc;		/**< Rx Tasks Utilization Counter*/
+	uint32_t fmbm_rrquc;
+	/**< Rx Receive Queue Utilization cntr*/
+	uint32_t fmbm_rduc;		/**< Rx DMA Utilization Counter*/
+	uint32_t fmbm_rfuc;		/**< Rx FIFO Utilization Counter*/
+	uint32_t fmbm_rpac;		/**< Rx Pause Activation Counter*/
+	uint32_t reserved02a0[0x18];	/**< (0x2A0-0x2FF) */
+	uint32_t fmbm_rdbg;		/**< Rx Debug */
+};
+
+struct fman_port_qmi_regs {
+	uint32_t fmqm_pnc;		/**< PortID n Configuration Register */
+	uint32_t fmqm_pns;		/**< PortID n Status Register */
+	uint32_t fmqm_pnts;		/**< PortID n Task Status Register */
+	uint32_t reserved00c[4];	/**< 0xn00C - 0xn01B */
+	uint32_t fmqm_pnen;		/**< PortID n Enqueue NIA Register */
+	uint32_t fmqm_pnetfc;		/**< PortID n Enq Total Frame Counter */
+	uint32_t reserved024[2];	/**< 0xn024 - 0xn02B */
+	uint32_t fmqm_pndn;		/**< PortID n Dequeue NIA Register */
+	uint32_t fmqm_pndc;		/**< PortID n Dequeue Config Register */
+	uint32_t fmqm_pndtfc;		/**< PortID n Dequeue tot Frame cntr */
+	uint32_t fmqm_pndfdc;		/**< PortID n Dequeue FQID Dflt Cntr */
+	uint32_t fmqm_pndcc;		/**< PortID n Dequeue Confirm Counter */
+};
+
+/* This struct exports parameters about an Fman network interface, determined
+ * from the device-tree.
+ */
+struct fman_if {
+	/* Which Fman this interface belongs to */
+	uint8_t fman_idx;
+	/* The type/speed of the interface */
+	enum fman_mac_type mac_type;
+	/* Boolean, set when mac type is memac */
+	uint8_t is_memac;
+	/* Boolean, set when PHY is RGMII */
+	uint8_t is_rgmii;
+	/* The index of this MAC (within the Fman it belongs to) */
+	uint8_t mac_idx;
+	/* The MAC address */
+	struct ether_addr mac_addr;
+	/* The Qman channel to schedule Tx FQs to */
+	u16 tx_channel_id;
+	/* The hard-coded FQIDs for this interface. Note: this doesn't cover
+	 * the PCD nor the "Rx default" FQIDs, which are configured via FMC
+	 * and its XML-based configuration.
+	 */
+	uint32_t fqid_rx_def;
+	uint32_t fqid_rx_err;
+	uint32_t fqid_tx_err;
+	uint32_t fqid_tx_confirm;
+
+	struct list_head bpool_list;
+	/* The node for linking this interface into "fman_if_list" */
+	struct list_head node;
+};
+
+/* This struct exposes parameters for buffer pools, extracted from the network
+ * interface settings in the device tree.
+ */
+struct fman_if_bpool {
+	uint32_t bpid;
+	uint64_t count;
+	uint64_t size;
+	uint64_t addr;
+	/* The node for linking this bpool into fman_if::bpool_list */
+	struct list_head node;
+};
+
+/* Internal Context transfer params - FMBM_RICP*/
+struct fman_if_ic_params {
+	/*IC offset in the packet buffer */
+	uint16_t iceof;
+	/*IC internal offset */
+	uint16_t iciof;
+	/*IC size to copy */
+	uint16_t icsz;
+};
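+
+/* Note: the FMBM_RICP register encodes these fields in 16-byte units; the
+ * accessors in fman_hw.c (fman_if_get_ic_params/fman_if_set_ic_params)
+ * convert to and from byte counts.
+ */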
+
+/* The exported "struct fman_if" type contains the subset of fields we want
+ * exposed. This struct is embedded in a larger "struct __fman_if" which
+ * contains the extra bits we *don't* want exposed.
+ */
+struct __fman_if {
+	struct fman_if __if;
+	char node_path[PATH_MAX];
+	uint64_t regs_size;
+	void *ccsr_map;
+	void *bmi_map;
+	void *qmi_map;
+	struct list_head node;
+};
+
+/* And this is the base list node that the interfaces are added to. (See
+ * fman_if_enable_all_rx() below for an example of its use.)
+ */
+extern const struct list_head *fman_if_list;
+
+/* To display MAC addresses (of type "struct ether_addr") via printf()-style
+ * interfaces, these macros may come in handy. Eg;
+ *        struct fman_if *p = get_ptr_to_some_interface();
+ *        printf("MAC address is " ETH_MAC_PRINTF_FMT "\n",
+ *               ETH_MAC_PRINTF_ARGS(&p->mac_addr));
+ */
+#define ETH_MAC_PRINTF_FMT "%02x:%02x:%02x:%02x:%02x:%02x"
+#define ETH_MAC_PRINTF_ARGS(a) \
+		(a)->addr_bytes[0], (a)->addr_bytes[1], \
+		(a)->addr_bytes[2], (a)->addr_bytes[3], \
+		(a)->addr_bytes[4], (a)->addr_bytes[5]
+
+/* To iterate the "bpool_list" for an interface. Eg;
+ *        struct fman_if *p = get_ptr_to_some_interface();
+ *        struct fman_if_bpool *bp;
+ *        printf("Interface uses following BPIDs;\n");
+ *        fman_if_for_each_bpool(bp, p) {
+ *            printf("    %d\n", bp->bpid);
+ *            [...]
+ *        }
+ */
+#define fman_if_for_each_bpool(bp, __if) \
+	list_for_each_entry(bp, &(__if)->bpool_list, node)
+
+#define FMAN_ERR(cond, rc, fmt, args...) \
+	do { \
+		if (unlikely(cond)) { \
+			_errno = (rc); \
+			DPAA_BUS_LOG(ERR, fmt "(%d)", ##args, errno); \
+			goto err; \
+		} \
+	} while (0)
+
+#define FMAN_IP_REV_1	0xC30C4
+#define FMAN_IP_REV_1_MAJOR_MASK 0x0000FF00
+#define FMAN_IP_REV_1_MAJOR_SHIFT 8
+#define FMAN_V3	0x06
+#define FMAN_V3_CONTEXTA_EN_A2V	0x10000000
+#define FMAN_V3_CONTEXTA_EN_OVOM	0x02000000
+#define FMAN_V3_CONTEXTA_EN_EBD	0x80000000
+#define FMAN_CONTEXTA_DIS_CHECKSUM	0x7ull
+#define FMAN_CONTEXTA_SET_OPCODE11 0x2000000b00000000
+extern u16 fman_ip_rev;
+extern u32 fman_dealloc_bufs_mask_hi;
+extern u32 fman_dealloc_bufs_mask_lo;
+
+/**
+ * Initialize the FMAN driver
+ *
+ * @args void
+ * @return
+ *	0 for success; error code otherwise
+ */
+int fman_init(void);
+
+/**
+ * Teardown the FMAN driver
+ *
+ * @args void
+ * @return void
+ */
+void fman_finish(void);
+
+#endif	/* __FMAN_H */
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
new file mode 100644
index 0000000..b77a678
--- /dev/null
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -0,0 +1,96 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __NETCFG_H
+#define __NETCFG_H
+
+#include <fman.h>
+#include <argp.h>
+
+/* Configuration information related to a specific ethernet port */
+struct fm_eth_port_cfg {
+	/**< A list of PCD FQ ranges, obtained from FMC configuration */
+	struct list_head *list;
+	/**< The "Rx default" FQID, obtained from FMC configuration */
+	uint32_t rx_def;
+	/**< Other interface details are in the fman driver interface */
+	struct fman_if *fman_if;
+};
+
+struct netcfg_info {
+	uint8_t num_ethports;
+	/**< Number of ports */
+	struct fm_eth_port_cfg port_cfg[0];
+	/**< Variable structure array of size num_ethports */
+};
+
+struct interface_info {
+	char *name;
+	struct ether_addr mac_addr;
+	struct ether_addr peer_mac;
+	int mac_present;
+	int fman_enabled_mac_interface;
+};
+
+struct netcfg_interface {
+	uint8_t numof_netcfg_interface;
+	uint8_t numof_fman_enabled_macless;
+	struct interface_info interface_info[0];
+};
+
+/* pcd_file: FMC netpcd XML ("policy") file that contains PCD information.
+ * cfg_file: FMC config XML file
+ * Returns the configuration information in newly allocated memory.
+ */
+struct netcfg_info *netcfg_acquire(void);
+
+/* cfg_ptr: configuration information pointer.
+ * Frees the resources allocated by the configuration layer.
+ */
+void netcfg_release(struct netcfg_info *cfg_ptr);
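+
+/* Eg; a minimal usage sketch (use_port() is a hypothetical consumer):
+ *
+ *        struct netcfg_info *cfg = netcfg_acquire();
+ *        int i;
+ *
+ *        if (cfg) {
+ *            for (i = 0; i < cfg->num_ethports; i++)
+ *                use_port(&cfg->port_cfg[i]);
+ *            netcfg_release(cfg);
+ *        }
+ */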
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* cfg_ptr: configuration information pointer.
+ * This function dumps configuration data to stdout.
+ */
+void dump_netcfg(struct netcfg_info *cfg_ptr);
+#endif
+
+#endif /* __NETCFG_H */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 07/38] bus/dpaa: add FMan hardware operations
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (5 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 06/38] bus/dpaa: introducing FMan configurations Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 08/38] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
                   ` (31 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/fman/fman_hw.c      | 606 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fman.h           |   2 +
 drivers/bus/dpaa/include/fsl_fman.h       | 182 +++++++++
 drivers/bus/dpaa/include/fsl_fman_crc64.h | 263 +++++++++++++
 5 files changed, 1054 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 49abdc7..94849b8 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -66,6 +66,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
+	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c
 
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
new file mode 100644
index 0000000..77908ec
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -0,0 +1,606 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+#include <fman.h>
+/* This header declares things about Fman hardware itself (the format of status
+ * words and an inline implementation of CRC64). We include it only in order to
+ * instantiate the one global variable it depends on.
+ */
+#include <fsl_fman.h>
+#include <fsl_fman_crc64.h>
+
+/* Instantiate the global variable that the inline CRC64 implementation (in
+ * <fsl_fman.h>) depends on.
+ */
+DECLARE_FMAN_CRC64_TABLE();
+
+#define ETH_ADDR_TO_UINT64(eth_addr)                  \
+	(uint64_t)(((uint64_t)(eth_addr)[0] << 40) |   \
+	((uint64_t)(eth_addr)[1] << 32) |   \
+	((uint64_t)(eth_addr)[2] << 24) |   \
+	((uint64_t)(eth_addr)[3] << 16) |   \
+	((uint64_t)(eth_addr)[4] << 8) |    \
+	((uint64_t)(eth_addr)[5]))
+
+void
+fman_if_set_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i|HASH_CTRL_MCAST_EN);
+}
+
+void
+fman_if_reset_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i & ~HASH_CTRL_MCAST_EN);
+}
+
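+/* Fold the 48-bit MAC address into the 6-bit hash-table index used by the
+ * MEMAC multicast filter: output bit (5 - i) is the XOR (parity) of the
+ * i-th group of 8 address bits, consumed LSB-first.
+ */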
+static
+uint32_t get_mac_hash_code(uint64_t eth_addr)
+{
+	uint64_t	mask1, mask2;
+	uint32_t	xorVal = 0;
+	uint8_t		i, j;
+
+	for (i = 0; i < 6; i++) {
+		mask1 = eth_addr & (uint64_t)0x01;
+		eth_addr >>= 1;
+
+		for (j = 0; j < 7; j++) {
+			mask2 = eth_addr & (uint64_t)0x01;
+			mask1 ^= mask2;
+			eth_addr >>= 1;
+		}
+
+		xorVal |= (mask1 << (5 - i));
+	}
+
+	return xorVal;
+}
+
+int
+fman_memac_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	uint64_t eth_addr;
+	void *hashtable_ctrl;
+	uint32_t hash;
+
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	eth_addr = ETH_ADDR_TO_UINT64(eth);
+
+	if (!(eth_addr & GROUP_ADDRESS))
+		return -1;
+
+	hash = get_mac_hash_code(eth_addr) & HASH_CTRL_ADDR_MASK;
+	hash = hash | HASH_CTRL_MCAST_EN;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	out_be32(hashtable_ctrl, hash);
+
+	return 0;
+}
+
+int
+fman_memac_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *mac_reg =
+		&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
+	u32 val = in_be32(mac_reg);
+
+	eth[0] = (val & 0x000000ff) >> 0;
+	eth[1] = (val & 0x0000ff00) >> 8;
+	eth[2] = (val & 0x00ff0000) >> 16;
+	eth[3] = (val & 0xff000000) >> 24;
+
+	mac_reg =  &((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_u;
+	val = in_be32(mac_reg);
+
+	eth[4] = (val & 0x000000ff) >> 0;
+	eth[5] = (val & 0x0000ff00) >> 8;
+
+	return 0;
+}
+
+static void
+fman_memac_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	void *reg;
+
+	if (addr_num) {
+		reg = &((struct memac_regs *)m->ccsr_map)->
+				mac_addr[addr_num-1].mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+		out_be32(reg, 0x0);
+	} else {
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+		out_be32(reg, 0x0);
+	}
+}
+
+static int
+fman_memac_add_mac_addr(struct fman_if *p, uint8_t *eth,
+				       uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+
+	void *reg;
+	u32 val;
+
+	memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_l;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+
+	val = (m->__if.mac_addr.addr_bytes[0] |
+	       (m->__if.mac_addr.addr_bytes[1] << 8) |
+	       (m->__if.mac_addr.addr_bytes[2] << 16) |
+	       (m->__if.mac_addr.addr_bytes[3] << 24));
+	out_be32(reg, val);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+
+	val = ((m->__if.mac_addr.addr_bytes[4] << 0) |
+	       (m->__if.mac_addr.addr_bytes[5] << 8));
+	out_be32(reg, val);
+
+	return 0;
+}
+
+
+static void
+fman_memac_stats_get(struct fman_if *p,
+		     struct rte_eth_stats *stats)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+
+	/* read received packet count */
+	stats->ipackets = ((u64)in_be32(&regs->rfrm_u)) << 32 |
+			in_be32(&regs->rfrm_l);
+	stats->ibytes = ((u64)in_be32(&regs->roct_u)) << 32 |
+			in_be32(&regs->roct_l);
+	stats->ierrors = ((u64)in_be32(&regs->rerr_u)) << 32 |
+			in_be32(&regs->rerr_l);
+
+	/* read transmitted packet count */
+	stats->opackets = ((u64)in_be32(&regs->tfrm_u)) << 32 |
+			in_be32(&regs->tfrm_l);
+	stats->obytes = ((u64)in_be32(&regs->toct_u)) << 32 |
+			in_be32(&regs->toct_l);
+	stats->oerrors = ((u64)in_be32(&regs->terr_u)) << 32 |
+			in_be32(&regs->terr_l);
+}
+
+static void
+fman_memac_reset_stat(struct fman_if *p)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+	uint32_t tmp;
+
+	tmp = in_be32(&regs->statn_config);
+
+	tmp |= STATS_CFG_CLR;
+
+	out_be32(&regs->statn_config, tmp);
+
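+	/* STATS_CFG_CLR is self-clearing; wait for hardware to finish */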
+	while (in_be32(&regs->statn_config) & STATS_CFG_CLR)
+		;
+}
+
+int
+fm_mac_add_exact_match_mac_addr(struct fman_if *p, uint8_t *eth,
+				    uint8_t addr_num)
+{
+	assert(fman_ccsr_map_fd != -1);
+
+	return fman_memac_add_mac_addr(p, eth, addr_num);
+}
+
+int
+fm_mac_rem_exact_match_mac_addr(struct fman_if *p, int8_t addr_num)
+{
+	assert(fman_ccsr_map_fd != -1);
+
+	fman_memac_clear_mac_addr(p, addr_num);
+	return 0;
+}
+
+int
+fm_mac_config(struct fman_if *p,  uint8_t *eth)
+{
+	assert(fman_ccsr_map_fd != -1);
+
+	return fman_memac_get_primary_mac_addr(p, eth);
+}
+
+void
+fm_mac_set_rx_ignore_pause_frames(struct fman_if *p, bool enable)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	u32 value = 0;
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Rx Ignore Pause Frames */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	if (enable)
+		value = in_be32(cmdcfg) | CMD_CFG_PAUSE_IGNORE;
+	else
+		value = in_be32(cmdcfg) & ~CMD_CFG_PAUSE_IGNORE;
+
+	out_be32(cmdcfg, value);
+}
+
+void
+fm_mac_config_loopback(struct fman_if *p, bool enable)
+{
+	if (enable)
+		/* Enable loopback mode */
+		fman_if_loopback_enable(p);
+	else
+		/* Disable loopback mode */
+		fman_if_loopback_disable(p);
+}
+
+void
+fm_mac_conf_max_frame_len(struct fman_if *p,
+			       unsigned int max_frame_len)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	unsigned int *maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Max frame length */
+	maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+	out_be32(maxfrm, (MAXFRM_RX_MASK & max_frame_len));
+}
+
+void
+fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats)
+{
+	fman_memac_stats_get(p, stats);
+}
+
+void
+fman_if_stats_reset(struct fman_if *p)
+{
+	fman_memac_reset_stat(p);
+}
+
+void
+fm_mac_set_promiscuous(struct fman_if *p)
+{
+	fman_if_promiscuous_enable(p);
+}
+
+void
+fman_if_promiscuous_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_PROMIS_EN);
+}
+
+void
+fman_if_promiscuous_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Disable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) & (~CMD_CFG_PROMIS_EN));
+}
+
+void
+fman_if_enable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* enable Rx and Tx */
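+	/* COMMAND_CONFIG lives at CCSR offset 0x008 (see struct memac_regs);
+	 * bit 0 enables Tx and bit 1 enables Rx.
+	 */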
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) | 3);
+}
+
+void
+fman_if_disable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* only disable Rx, not Tx */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) & ~(u32)2);
+}
+
+void
+fman_if_loopback_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) | IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_loopback_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+	/* Disable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) & ~IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) & ~CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_set_bp(struct fman_if *fm_if, unsigned int num __always_unused,
+		    int bpid, size_t bufsize)
+{
+	u32 fmbm_ebmpi;
+	u32 ebmpi_val_ace = 0xc0000000;
+	u32 ebmpi_mask = 0xffc00000;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ebmpi =
+	       in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0]);
+	fmbm_ebmpi = ebmpi_val_ace | (fmbm_ebmpi & ebmpi_mask) | (bpid << 16) |
+		     (bufsize);
+
+	out_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0],
+		 fmbm_ebmpi);
+}
+
+int
+fman_if_get_fc_quanta(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	return in_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0]);
+}
+
+int
+fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	out_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0],
+		 pause_quanta);
+	return 0;
+}
+
+int
+fman_if_get_fdoff(struct fman_if *fm_if)
+{
+	u32 fmbm_ricp;
+	int fdoff;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ricp =
+		   in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp);
+	/*iceof + icsz*/
+	fdoff = ((fmbm_ricp & iceof_mask) >> 16) * 16 +
+		(fmbm_ricp & icsz_mask) * 16;
+
+	return fdoff;
+}
+
+void
+fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_refqid =
+			&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_refqid;
+	out_be32(fmbm_refqid, err_fqid);
+}
+
+int
+fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	val = in_be32(fmbm_ricp);
+
+	icp->iceof = (val & iceof_mask) >> 12;
+	icp->iciof = (val & iciof_mask) >> 4;
+	icp->icsz = (val & icsz_mask) << 4;
+
+	return 0;
+}
+
+int
+fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	val |= (icp->iceof << 12) & iceof_mask;
+	val |= (icp->iciof << 4) & iciof_mask;
+	val |= (icp->icsz >> 4) & icsz_mask;
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	out_be32(fmbm_ricp, val);
+
+	return 0;
+}
+
+void
+fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rebm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_rebm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rebm;
+
+	out_be32(fmbm_rebm, in_be32(fmbm_rebm) | (fd_offset << 16));
+}
+
+void
+fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	out_be32(reg_maxfrm, (in_be32(reg_maxfrm) & 0xFFFF0000) | max_frm);
+}
+
+uint16_t
+fman_if_get_maxfrm(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	return in_be32(reg_maxfrm) & 0x0000FFFF;
+}
+
+void
+fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmqm_pndn;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmqm_pndn = &((struct fman_port_qmi_regs *)__if->qmi_map)->fmqm_pndn;
+
+	out_be32(fmqm_pndn, nia);
+}
+
+void
+fman_if_discard_rx_errors(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rfsdm, *fmbm_rfsem;
+
+	fmbm_rfsem = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsem;
+	out_be32(fmbm_rfsem, 0);
+
+	/* Configure the discard mask to discard error packets which have
+	 * DMA errors, frame size errors, header errors, etc. The mask
+	 * 0x010CE3F0 discards all such errors reported in FD[STATUS].
+	 */
+	fmbm_rfsdm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsdm;
+	out_be32(fmbm_rfsdm, 0x010CE3F0);
+}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 19105bb..aeb707b 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -406,6 +406,8 @@ struct __fman_if {
  */
 extern const struct list_head *fman_if_list;
 
+extern int fman_ccsr_map_fd;
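+/**< File descriptor for the mapped CCSR register region; the register
+ * accessors in fman_hw.c assert it is valid before use.
+ */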
+
 /* To display MAC addresses (of type "struct ether_addr") via printf()-style
  * interfaces, these macros may come in handy. Eg;
  *        struct fman_if *p = get_ptr_to_some_interface();
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
new file mode 100644
index 0000000..0aff22c
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -0,0 +1,182 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_H
+#define __FSL_FMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Status field in FD is updated on Rx side by FMAN with following information.
+ * Refer to field description in FM BG.
+ */
+struct fm_status_t {
+	unsigned int reserved0:3;
+	unsigned int dcl4c:1; /* Don't Check L4 Checksum */
+	unsigned int reserved1:1;
+	unsigned int ufd:1; /* Unsupported Format */
+	unsigned int lge:1; /* Length Error */
+	unsigned int dme:1; /* DMA Error */
+
+	unsigned int reserved2:4;
+	unsigned int fpe:1; /* Frame physical Error */
+	unsigned int fse:1; /* Frame Size Error */
+	unsigned int dis:1; /* Discard by Classification */
+	unsigned int reserved3:1;
+
+	unsigned int eof:1; /* Key Extraction goes out of frame */
+	unsigned int nss:1; /* No Scheme selected */
+	unsigned int kso:1; /* Key Size Overflow */
+	unsigned int reserved4:1;
+	unsigned int fcl:2; /* Frame Color */
+	unsigned int ipp:1; /* Illegal Policer Profile Selected */
+	unsigned int flm:1; /* Frame Length Mismatch */
+	unsigned int pte:1; /* Parser Timeout */
+	unsigned int isp:1; /* Invalid Soft Parser Instruction */
+	unsigned int phe:1; /* Header Error during parsing */
+	unsigned int frdr:1; /* Frame Dropped by disabled port */
+	unsigned int reserved5:4;
+} __attribute__ ((__packed__));
+
+/* Set promiscuous mode on an interface */
+void fm_mac_set_promiscuous(struct fman_if *p);
+
+/* Get mac config*/
+int fm_mac_config(struct fman_if *p, uint8_t *eth);
+
+/* Set MAC address for a particular interface */
+int fm_mac_add_exact_match_mac_addr(struct fman_if *p, uint8_t *eth,
+					      uint8_t addr_num);
+
+/* Remove a MAC address for a particular interface */
+int fm_mac_rem_exact_match_mac_addr(struct fman_if *p, int8_t addr_num);
+
+/* Get the FMAN statistics */
+void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
+
+/* Reset the FMAN statistics */
+void fman_if_stats_reset(struct fman_if *p);
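+
+/* Eg; an illustrative sketch of reading and then clearing the counters:
+ *        struct rte_eth_stats stats;
+ *
+ *        fman_if_stats_get(p, &stats);
+ *        fman_if_stats_reset(p);
+ */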
+
+/* Set ignore pause option for a specific interface */
+void fm_mac_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
+
+/* Enable Loopback mode */
+void fm_mac_config_loopback(struct fman_if *p, bool enable);
+
+/* Set max frame length */
+void fm_mac_conf_max_frame_len(struct fman_if *p,
+			       unsigned int max_frame_len);
+
+/* Enable/disable Rx promiscuous mode on specified interface */
+void fman_if_promiscuous_enable(struct fman_if *);
+void fman_if_promiscuous_disable(struct fman_if *);
+
+/* Enable/disable Rx on specific interfaces */
+void fman_if_enable_rx(struct fman_if *);
+void fman_if_disable_rx(struct fman_if *);
+
+/* Enable/disable loopback on specific interfaces */
+void fman_if_loopback_enable(struct fman_if *);
+void fman_if_loopback_disable(struct fman_if *);
+
+/* Set buffer pool on specific interface */
+void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
+		    size_t bufsize);
+
+/* Get Flow Control pause quanta on specific interface */
+int fman_if_get_fc_quanta(struct fman_if *fm_if);
+
+/* Set Flow Control pause quanta on specific interface */
+int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
+
+/* Set default error fqid on specific interface */
+void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
+
+/* Get IC transfer params */
+int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
+
+/* Set IC transfer params */
+int fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp);
+
+/* Get interface fd->offset value */
+int fman_if_get_fdoff(struct fman_if *fm_if);
+
+/* Set interface fd->offset value */
+void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
+
+/* Get interface Max Frame length (MTU) */
+uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
+
+/* Set interface  Max Frame length (MTU) */
+void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
+
+/* Set interface next invoked action for dequeue operation */
+void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
+
+/* discard error packets on rx */
+void fman_if_discard_rx_errors(struct fman_if *fm_if);
+
+void fman_if_set_mcast_filter_table(struct fman_if *p);
+
+void fman_if_reset_mcast_filter_table(struct fman_if *p);
+
+int fman_memac_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
+
+int fman_memac_get_primary_mac_addr(struct fman_if *p, uint8_t *eth);
+
+
+/* Enable/disable Rx on all interfaces */
+static inline void fman_if_enable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_enable_rx(__if);
+}
+
+static inline void fman_if_disable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_disable_rx(__if);
+}
+#endif /* __FSL_FMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_fman_crc64.h b/drivers/bus/dpaa/include/fsl_fman_crc64.h
new file mode 100644
index 0000000..af5803f
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman_crc64.h
@@ -0,0 +1,263 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_CRC64_H
+#define __FSL_FMAN_CRC64_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+ * The following definitions provide a software implementation of the CRC64
+ * algorithm implemented within Fman.
+ *
+ * The following example shows how to compute a CRC64 hash value based on
+ * SRC_IP, DST_IP and ESP_SPI values
+ *
+ *     #define compute_hash(saddr,daddr,spi) \
+ *        do { \
+ *           uint64_t result; \
+ *           result = fman_crc64_init(); \
+ *           result = fman_crc64_compute_32bit(saddr, result); \
+ *           result = fman_crc64_compute_32bit(daddr, result); \
+ *           result = fman_crc64_compute_32bit(spi, result); \
+ *           return (uint32_t) result & RC_HASH_MASK; \
+ *        } while (0);
+ *
+ * If hashing over a different number of fields (or of different types) is
+ * required, this can be implemented using the following primitives.
+ */
+
+/* The following table provides the constants used by the Fman CRC64
+ * implementation. The table is instantiated within the DPAA fman driver.
+ * However if the application is not going to be linked against the DPAA fman
+ * driver but will use this Fman CRC64 implementation, then it will need to
+ * instantiate this table by using the DECLARE_FMAN_CRC64_TABLE() macro.
+ */
+struct fman_crc64_t {
+	uint64_t initial;
+	uint64_t table[1 << 8];
+};
+extern struct fman_crc64_t FMAN_CRC64_ECMA_182;
+#define DECLARE_FMAN_CRC64_TABLE() \
+struct fman_crc64_t FMAN_CRC64_ECMA_182 = { \
+	0xFFFFFFFFFFFFFFFFULL, \
+	{ \
+		0x0000000000000000ULL, 0xb32e4cbe03a75f6fULL, \
+		0xf4843657a840a05bULL, 0x47aa7ae9abe7ff34ULL, \
+		0x7bd0c384ff8f5e33ULL, 0xc8fe8f3afc28015cULL, \
+		0x8f54f5d357cffe68ULL, 0x3c7ab96d5468a107ULL, \
+		0xf7a18709ff1ebc66ULL, 0x448fcbb7fcb9e309ULL, \
+		0x0325b15e575e1c3dULL, 0xb00bfde054f94352ULL, \
+		0x8c71448d0091e255ULL, 0x3f5f08330336bd3aULL, \
+		0x78f572daa8d1420eULL, 0xcbdb3e64ab761d61ULL, \
+		0x7d9ba13851336649ULL, 0xceb5ed8652943926ULL, \
+		0x891f976ff973c612ULL, 0x3a31dbd1fad4997dULL, \
+		0x064b62bcaebc387aULL, 0xb5652e02ad1b6715ULL, \
+		0xf2cf54eb06fc9821ULL, 0x41e11855055bc74eULL, \
+		0x8a3a2631ae2dda2fULL, 0x39146a8fad8a8540ULL, \
+		0x7ebe1066066d7a74ULL, 0xcd905cd805ca251bULL, \
+		0xf1eae5b551a2841cULL, 0x42c4a90b5205db73ULL, \
+		0x056ed3e2f9e22447ULL, 0xb6409f5cfa457b28ULL, \
+		0xfb374270a266cc92ULL, 0x48190ecea1c193fdULL, \
+		0x0fb374270a266cc9ULL, 0xbc9d3899098133a6ULL, \
+		0x80e781f45de992a1ULL, 0x33c9cd4a5e4ecdceULL, \
+		0x7463b7a3f5a932faULL, 0xc74dfb1df60e6d95ULL, \
+		0x0c96c5795d7870f4ULL, 0xbfb889c75edf2f9bULL, \
+		0xf812f32ef538d0afULL, 0x4b3cbf90f69f8fc0ULL, \
+		0x774606fda2f72ec7ULL, 0xc4684a43a15071a8ULL, \
+		0x83c230aa0ab78e9cULL, 0x30ec7c140910d1f3ULL, \
+		0x86ace348f355aadbULL, 0x3582aff6f0f2f5b4ULL, \
+		0x7228d51f5b150a80ULL, 0xc10699a158b255efULL, \
+		0xfd7c20cc0cdaf4e8ULL, 0x4e526c720f7dab87ULL, \
+		0x09f8169ba49a54b3ULL, 0xbad65a25a73d0bdcULL, \
+		0x710d64410c4b16bdULL, 0xc22328ff0fec49d2ULL, \
+		0x85895216a40bb6e6ULL, 0x36a71ea8a7ace989ULL, \
+		0x0adda7c5f3c4488eULL, 0xb9f3eb7bf06317e1ULL, \
+		0xfe5991925b84e8d5ULL, 0x4d77dd2c5823b7baULL, \
+		0x64b62bcaebc387a1ULL, 0xd7986774e864d8ceULL, \
+		0x90321d9d438327faULL, 0x231c512340247895ULL, \
+		0x1f66e84e144cd992ULL, 0xac48a4f017eb86fdULL, \
+		0xebe2de19bc0c79c9ULL, 0x58cc92a7bfab26a6ULL, \
+		0x9317acc314dd3bc7ULL, 0x2039e07d177a64a8ULL, \
+		0x67939a94bc9d9b9cULL, 0xd4bdd62abf3ac4f3ULL, \
+		0xe8c76f47eb5265f4ULL, 0x5be923f9e8f53a9bULL, \
+		0x1c4359104312c5afULL, 0xaf6d15ae40b59ac0ULL, \
+		0x192d8af2baf0e1e8ULL, 0xaa03c64cb957be87ULL, \
+		0xeda9bca512b041b3ULL, 0x5e87f01b11171edcULL, \
+		0x62fd4976457fbfdbULL, 0xd1d305c846d8e0b4ULL, \
+		0x96797f21ed3f1f80ULL, 0x2557339fee9840efULL, \
+		0xee8c0dfb45ee5d8eULL, 0x5da24145464902e1ULL, \
+		0x1a083bacedaefdd5ULL, 0xa9267712ee09a2baULL, \
+		0x955cce7fba6103bdULL, 0x267282c1b9c65cd2ULL, \
+		0x61d8f8281221a3e6ULL, 0xd2f6b4961186fc89ULL, \
+		0x9f8169ba49a54b33ULL, 0x2caf25044a02145cULL, \
+		0x6b055fede1e5eb68ULL, 0xd82b1353e242b407ULL, \
+		0xe451aa3eb62a1500ULL, 0x577fe680b58d4a6fULL, \
+		0x10d59c691e6ab55bULL, 0xa3fbd0d71dcdea34ULL, \
+		0x6820eeb3b6bbf755ULL, 0xdb0ea20db51ca83aULL, \
+		0x9ca4d8e41efb570eULL, 0x2f8a945a1d5c0861ULL, \
+		0x13f02d374934a966ULL, 0xa0de61894a93f609ULL, \
+		0xe7741b60e174093dULL, 0x545a57dee2d35652ULL, \
+		0xe21ac88218962d7aULL, 0x5134843c1b317215ULL, \
+		0x169efed5b0d68d21ULL, 0xa5b0b26bb371d24eULL, \
+		0x99ca0b06e7197349ULL, 0x2ae447b8e4be2c26ULL, \
+		0x6d4e3d514f59d312ULL, 0xde6071ef4cfe8c7dULL, \
+		0x15bb4f8be788911cULL, 0xa6950335e42fce73ULL, \
+		0xe13f79dc4fc83147ULL, 0x521135624c6f6e28ULL, \
+		0x6e6b8c0f1807cf2fULL, 0xdd45c0b11ba09040ULL, \
+		0x9aefba58b0476f74ULL, 0x29c1f6e6b3e0301bULL, \
+		0xc96c5795d7870f42ULL, 0x7a421b2bd420502dULL, \
+		0x3de861c27fc7af19ULL, 0x8ec62d7c7c60f076ULL, \
+		0xb2bc941128085171ULL, 0x0192d8af2baf0e1eULL, \
+		0x4638a2468048f12aULL, 0xf516eef883efae45ULL, \
+		0x3ecdd09c2899b324ULL, 0x8de39c222b3eec4bULL, \
+		0xca49e6cb80d9137fULL, 0x7967aa75837e4c10ULL, \
+		0x451d1318d716ed17ULL, 0xf6335fa6d4b1b278ULL, \
+		0xb199254f7f564d4cULL, 0x02b769f17cf11223ULL, \
+		0xb4f7f6ad86b4690bULL, 0x07d9ba1385133664ULL, \
+		0x4073c0fa2ef4c950ULL, 0xf35d8c442d53963fULL, \
+		0xcf273529793b3738ULL, 0x7c0979977a9c6857ULL, \
+		0x3ba3037ed17b9763ULL, 0x888d4fc0d2dcc80cULL, \
+		0x435671a479aad56dULL, 0xf0783d1a7a0d8a02ULL, \
+		0xb7d247f3d1ea7536ULL, 0x04fc0b4dd24d2a59ULL, \
+		0x3886b22086258b5eULL, 0x8ba8fe9e8582d431ULL, \
+		0xcc0284772e652b05ULL, 0x7f2cc8c92dc2746aULL, \
+		0x325b15e575e1c3d0ULL, 0x8175595b76469cbfULL, \
+		0xc6df23b2dda1638bULL, 0x75f16f0cde063ce4ULL, \
+		0x498bd6618a6e9de3ULL, 0xfaa59adf89c9c28cULL, \
+		0xbd0fe036222e3db8ULL, 0x0e21ac88218962d7ULL, \
+		0xc5fa92ec8aff7fb6ULL, 0x76d4de52895820d9ULL, \
+		0x317ea4bb22bfdfedULL, 0x8250e80521188082ULL, \
+		0xbe2a516875702185ULL, 0x0d041dd676d77eeaULL, \
+		0x4aae673fdd3081deULL, 0xf9802b81de97deb1ULL, \
+		0x4fc0b4dd24d2a599ULL, 0xfceef8632775faf6ULL, \
+		0xbb44828a8c9205c2ULL, 0x086ace348f355aadULL, \
+		0x34107759db5dfbaaULL, 0x873e3be7d8faa4c5ULL, \
+		0xc094410e731d5bf1ULL, 0x73ba0db070ba049eULL, \
+		0xb86133d4dbcc19ffULL, 0x0b4f7f6ad86b4690ULL, \
+		0x4ce50583738cb9a4ULL, 0xffcb493d702be6cbULL, \
+		0xc3b1f050244347ccULL, 0x709fbcee27e418a3ULL, \
+		0x3735c6078c03e797ULL, 0x841b8ab98fa4b8f8ULL, \
+		0xadda7c5f3c4488e3ULL, 0x1ef430e13fe3d78cULL, \
+		0x595e4a08940428b8ULL, 0xea7006b697a377d7ULL, \
+		0xd60abfdbc3cbd6d0ULL, 0x6524f365c06c89bfULL, \
+		0x228e898c6b8b768bULL, 0x91a0c532682c29e4ULL, \
+		0x5a7bfb56c35a3485ULL, 0xe955b7e8c0fd6beaULL, \
+		0xaeffcd016b1a94deULL, 0x1dd181bf68bdcbb1ULL, \
+		0x21ab38d23cd56ab6ULL, 0x9285746c3f7235d9ULL, \
+		0xd52f0e859495caedULL, 0x6601423b97329582ULL, \
+		0xd041dd676d77eeaaULL, 0x636f91d96ed0b1c5ULL, \
+		0x24c5eb30c5374ef1ULL, 0x97eba78ec690119eULL, \
+		0xab911ee392f8b099ULL, 0x18bf525d915feff6ULL, \
+		0x5f1528b43ab810c2ULL, 0xec3b640a391f4fadULL, \
+		0x27e05a6e926952ccULL, 0x94ce16d091ce0da3ULL, \
+		0xd3646c393a29f297ULL, 0x604a2087398eadf8ULL, \
+		0x5c3099ea6de60cffULL, 0xef1ed5546e415390ULL, \
+		0xa8b4afbdc5a6aca4ULL, 0x1b9ae303c601f3cbULL, \
+		0x56ed3e2f9e224471ULL, 0xe5c372919d851b1eULL, \
+		0xa26908783662e42aULL, 0x114744c635c5bb45ULL, \
+		0x2d3dfdab61ad1a42ULL, 0x9e13b115620a452dULL, \
+		0xd9b9cbfcc9edba19ULL, 0x6a978742ca4ae576ULL, \
+		0xa14cb926613cf817ULL, 0x1262f598629ba778ULL, \
+		0x55c88f71c97c584cULL, 0xe6e6c3cfcadb0723ULL, \
+		0xda9c7aa29eb3a624ULL, 0x69b2361c9d14f94bULL, \
+		0x2e184cf536f3067fULL, 0x9d36004b35545910ULL, \
+		0x2b769f17cf112238ULL, 0x9858d3a9ccb67d57ULL, \
+		0xdff2a94067518263ULL, 0x6cdce5fe64f6dd0cULL, \
+		0x50a65c93309e7c0bULL, 0xe388102d33392364ULL, \
+		0xa4226ac498dedc50ULL, 0x170c267a9b79833fULL, \
+		0xdcd7181e300f9e5eULL, 0x6ff954a033a8c131ULL, \
+		0x28532e49984f3e05ULL, 0x9b7d62f79be8616aULL, \
+		0xa707db9acf80c06dULL, 0x14299724cc279f02ULL, \
+		0x5383edcd67c06036ULL, 0xe0ada17364673f59ULL} \
+}
+
+/*
+ * Return the initial CRC seed. Use the value returned from this API as the
+ * "crc" parameter to the first call to add data.
+ */
+static inline uint64_t fman_crc64_init(void)
+{
+	return FMAN_CRC64_ECMA_182.initial;
+}
+
+/* Updates the CRC with arbitrary data */
+static inline uint64_t fman_crc64_update(uint64_t crc,
+					 void *data, unsigned int len)
+{
+	uint8_t *p = data;
+	while (len--)
+		crc = FMAN_CRC64_ECMA_182.table[(crc ^ *(p++)) & 0xff] ^
+				(crc >> 8);
+	return crc;
+}
+
+/* Shorthands for updating the CRC with 8/16/32 bits of data.
+ * IMPORTANT NOTE: the typed "data" arguments should not be mistaken for
+ * host-endian numerical values; the assumption is that these values contain
+ * big-endian (ie. network byte order) data.
+ */
+static inline uint64_t fman_crc64_compute_32bit(uint32_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_16bit(uint16_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_8bit(uint8_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+
+/*
+ * Finalise the CRC (by inverting all bits, ie. 1's complement)
+ */
+static inline uint64_t fman_crc64_finish(uint64_t seed)
+{
+	return ~seed;
+}
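+
+/* Eg; a minimal sketch hashing an arbitrary buffer with the primitives above:
+ *        uint64_t crc = fman_crc64_init();
+ *
+ *        crc = fman_crc64_update(crc, buf, len);
+ *        crc = fman_crc64_finish(crc);
+ */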
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_FMAN_CRC64_H */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 08/38] bus/dpaa: enable DPAA IOCTL portal driver
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (6 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 07/38] bus/dpaa: add FMan hardware operations Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 09/38] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
                   ` (30 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Userspace applications interact with DPAA blocks using this IOCTL driver.
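
For example, resource IDs can be allocated and released through the
exported helpers (an illustrative sketch; dpaa_id_fqid is assumed to be
one of the dpaa_id_type enumerators from the fsl_usd.h added by this
patch):

	uint32_t fqid;
	int ret = process_alloc(dpaa_id_fqid, &fqid, 1, 0, 0);

	if (ret == 1) {
		/* ... use the allocated FQID ... */
		process_release(dpaa_id_fqid, fqid, 1);
	}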

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile             |   4 +-
 drivers/bus/dpaa/base/qbman/process.c | 331 ++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h    |  88 +++++++++
 drivers/bus/dpaa/include/process.h    | 107 +++++++++++
 4 files changed, 529 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/process.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 94849b8..22218e2 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -51,6 +51,7 @@ CFLAGS += -D _GNU_SOURCE
 
 CFLAGS += -I$(RTE_BUS_DPAA)/
 CFLAGS += -I$(RTE_BUS_DPAA)/include
+CFLAGS += -I$(RTE_BUS_DPAA)/base/qbman
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -68,6 +69,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
-	base/fman/netcfg_layer.c
+	base/fman/netcfg_layer.c \
+	base/qbman/process.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
new file mode 100644
index 0000000..b8ec539
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -0,0 +1,331 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <assert.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+
+#include "process.h"
+
+#include <fsl_usd.h>
+
+/* As higher-level drivers will be built on top of this (dma_mem, qbman, ...),
+ * it's preferable that the process driver itself not provide any exported API.
+ * As such, combined with the fact that none of these operations are
+ * performance critical, it is justified to use lazy initialisation, so that's
+ * what the lock is for.
+ */
+static int fd = -1;
+static pthread_mutex_t fd_init_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static int check_fd(void)
+{
+	int ret;
+
+	if (fd >= 0)
+		return 0;
+	ret = pthread_mutex_lock(&fd_init_lock);
+	assert(!ret);
+	/* check again with the lock held */
+	if (fd < 0)
+		fd = open(PROCESS_PATH, O_RDWR);
+	ret = pthread_mutex_unlock(&fd_init_lock);
+	assert(!ret);
+	return (fd >= 0) ? 0 : -ENODEV;
+}
+
+#define DPAA_IOCTL_MAGIC 'u'
+struct dpaa_ioctl_id_alloc {
+	uint32_t base; /* Return value, the start of the allocated range */
+	enum dpaa_id_type id_type; /* what kind of resource(s) to allocate */
+	uint32_t num; /* how many IDs to allocate (and return value) */
+	uint32_t align; /* must be a power of 2, 0 is treated like 1 */
+	int partial; /* whether to allow less than 'num' */
+};
+
+struct dpaa_ioctl_id_release {
+	/* Input; */
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+struct dpaa_ioctl_id_reserve {
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+#define DPAA_IOCTL_ID_ALLOC \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x01, struct dpaa_ioctl_id_alloc)
+#define DPAA_IOCTL_ID_RELEASE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x02, struct dpaa_ioctl_id_release)
+#define DPAA_IOCTL_ID_RESERVE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x0A, struct dpaa_ioctl_id_reserve)
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial)
+{
+	struct dpaa_ioctl_id_alloc id = {
+		.id_type = id_type,
+		.num = num,
+		.align = align,
+		.partial = partial
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	ret = ioctl(fd, DPAA_IOCTL_ID_ALLOC, &id);
+	if (ret)
+		return ret;
+	for (ret = 0; ret < (int)id.num; ret++)
+		base[ret] = id.base + ret;
+	return id.num;
+}
+
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_release id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret) {
+		fprintf(stderr, "Process FD failure\n");
+		return;
+	}
+	ret = ioctl(fd, DPAA_IOCTL_ID_RELEASE, &id);
+	if (ret)
+		fprintf(stderr, "Process FD ioctl failure type %d base 0x%x num %u\n",
+			id_type, base, num);
+}
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_reserve id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	return ioctl(fd, DPAA_IOCTL_ID_RESERVE, &id);
+}
+
+/***************************************/
+/* Mapping and using QMan/BMan portals */
+/***************************************/
+
+#define DPAA_IOCTL_PORTAL_MAP \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x07, struct dpaa_ioctl_portal_map)
+#define DPAA_IOCTL_PORTAL_UNMAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x08, struct dpaa_portal_map)
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_MAP, params);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_MAP)");
+		return ret;
+	}
+	return 0;
+}
+
+int process_portal_unmap(struct dpaa_portal_map *map)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_UNMAP, map);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_UNMAP)");
+		return ret;
+	}
+	return 0;
+}
+
+#define DPAA_IOCTL_PORTAL_IRQ_MAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x09, struct dpaa_ioctl_irq_map)
+
+int process_portal_irq_map(int ifd, struct dpaa_ioctl_irq_map *map)
+{
+	map->fd = fd;
+	return ioctl(ifd, DPAA_IOCTL_PORTAL_IRQ_MAP, map);
+}
+
+int process_portal_irq_unmap(int ifd)
+{
+	return close(ifd);
+}
+
+struct dpaa_ioctl_raw_portal {
+	/* inputs */
+	enum dpaa_portal_type type; /* Type of portal to allocate */
+
+	uint8_t enable_stash; /* set to non-zero to turn on stashing */
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map, or QBMAN_ANY_PORTAL_IDX
+	 * for don't care. The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+#define DPAA_IOCTL_ALLOC_RAW_PORTAL \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x0C, struct dpaa_ioctl_raw_portal)
+
+#define DPAA_IOCTL_FREE_RAW_PORTAL \
+	_IOR(DPAA_IOCTL_MAGIC, 0x0D, struct dpaa_ioctl_raw_portal)
+
+static int process_portal_allocate(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_ALLOC_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_ALLOC_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+static int process_portal_free(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_FREE_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_FREE_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.enable_stash = portal->enable_stash;
+	input.cpu = portal->cpu;
+	input.cache = portal->cache;
+	input.window = portal->window;
+	input.sdest = portal->sdest;
+
+	ret = process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena = input.cena;
+	return 0;
+}
+
+int qman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.enable_stash = 0;
+
+	ret = process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena = input.cena;
+	return 0;
+}
+
+int bman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
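
For orientation, a minimal usage sketch of the ID-allocation API above (an
editorial example, not part of the patch; the helper name and the chosen
counts are hypothetical, and it assumes /dev/fsl-usdpaa is present):

#include <stdio.h>
#include "process.h"

/* Allocate 8 contiguous FQIDs aligned to 8, refusing a partial
 * allocation, then release them again.
 */
static int example_fqid_cycle(void)
{
	uint32_t fqids[8];
	int num = process_alloc(dpaa_id_fqid, fqids, 8, 8, 0);

	if (num < 0)
		return num; /* e.g. -ENODEV if the device could not be opened */
	printf("allocated %d FQIDs starting at 0x%x\n", num, fqids[0]);
	process_release(dpaa_id_fqid, fqids[0], num);
	return 0;
}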
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
new file mode 100644
index 0000000..4ff48c6
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_USD_H
+#define __FSL_USD_H
+
+#include <compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define QBMAN_ANY_PORTAL_IDX 0xffffffff
+
+/* Obtain and free raw (uninitialized) portals */
+
+struct dpaa_raw_portal {
+	/* inputs */
+
+	/* set to non-zero to turn on stashing */
+	uint8_t enable_stash;
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map, or QBMAN_ANY_PORTAL_IDX
+	 * for don't care. The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int qman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int bman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_USD_H */
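
A short sketch of the raw-portal API declared above, assuming the caller does
not care which portal index it gets (an editorial example; the function name
is hypothetical):

#include <fsl_usd.h>

static int example_raw_portal(void)
{
	struct dpaa_raw_portal p = {
		.enable_stash = 0,
		.index = QBMAN_ANY_PORTAL_IDX, /* let the driver pick one */
	};
	int ret = qman_allocate_raw_portal(&p);

	if (ret)
		return ret;
	/* p.index now holds the portal chosen by the driver; p.cinh and
	 * p.cena hold the cache-inhibited/cache-enabled addresses.
	 */
	return qman_free_raw_portal(&p);
}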
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
new file mode 100644
index 0000000..989ddcd
--- /dev/null
+++ b/drivers/bus/dpaa/include/process.h
@@ -0,0 +1,107 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PROCESS_H
+#define	__PROCESS_H
+
+#include <compat.h>
+
+/* The process device underlies process-wide user/kernel interactions, such as
+ * mapping dma_mem memory and providing accompanying ioctl()s. (This isn't used
+ * for portals, which use one UIO device each.)
+ */
+#define PROCESS_PATH		"/dev/fsl-usdpaa"
+
+/* Allocation of resource IDs uses a generic interface. This enum is used to
+ * distinguish between the type of underlying object being manipulated.
+ */
+enum dpaa_id_type {
+	dpaa_id_fqid,
+	dpaa_id_bpid,
+	dpaa_id_qpool,
+	dpaa_id_cgrid,
+	dpaa_id_max /* <-- not a valid type, represents the number of types */
+};
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial);
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+/* Mapping and using QMan/BMan portals */
+enum dpaa_portal_type {
+	dpaa_portal_qman,
+	dpaa_portal_bman,
+};
+
+struct dpaa_ioctl_portal_map {
+	/* Input parameter: whether a QMan or BMan portal is required. */
+	enum dpaa_portal_type type;
+	/* Specifies a specific portal index to map, or 0xffffffff
+	 * for don't care.
+	 */
+	uint32_t index;
+
+	/* Return value if the map succeeds, this gives the mapped
+	 * cache-inhibited (cinh) and cache-enabled (cena) addresses.
+	 */
+	struct dpaa_portal_map {
+		void *cinh;
+		void *cena;
+	} addr;
+	/* Qman-specific return values */
+	u16 channel;
+	uint32_t pools;
+};
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params);
+int process_portal_unmap(struct dpaa_portal_map *map);
+
+struct dpaa_ioctl_irq_map {
+	enum dpaa_portal_type type; /* Type of portal to map */
+	int fd; /* File descriptor that contains the portal */
+	void *portal_cinh; /* Cache inhibited area to identify the portal */
+};
+
+int process_portal_irq_map(int fd,  struct dpaa_ioctl_irq_map *irq);
+int process_portal_irq_unmap(int fd);
+
+#endif	/*  __PROCESS_H */
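
A sketch of the portal map/unmap cycle using the structures above (an
editorial example; error handling is trimmed and the function name is
hypothetical):

#include "process.h"

static int example_portal_map(void)
{
	struct dpaa_ioctl_portal_map params = {
		.type = dpaa_portal_qman,
		.index = 0xffffffff, /* don't care which portal */
	};
	int ret = process_portal_map(&params);

	if (ret)
		return ret;
	/* params.addr.cena/params.addr.cinh are now mapped into this
	 * process; params.channel and params.pools are QMan-specific
	 * outputs.
	 */
	return process_portal_unmap(&params.addr);
}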
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 09/38] bus/dpaa: add layer for interrupt emulation using pthread
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (7 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 08/38] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 10/38] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
                   ` (29 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

An interrupt manager is emulated over pthreads. The QBMAN layer
registers handlers with it in order to be notified of any interrupt
request raised by the DPAA blocks in userspace.

Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile              |   3 +-
 drivers/bus/dpaa/base/qbman/dpaa_sys.c | 136 +++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h |  65 ++++++++++++++++
 3 files changed, 203 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 22218e2..193ffc1 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -70,6 +70,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
-	base/qbman/process.c
+	base/qbman/process.c \
+	base/qbman/dpaa_sys.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.c b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
new file mode 100644
index 0000000..0017da5
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
@@ -0,0 +1,136 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <process.h>
+#include "dpaa_sys.h"
+
+struct process_interrupt {
+	int irq;
+	irqreturn_t (*isr)(int irq, void *arg);
+	unsigned long flags;
+	const char *name;
+	void *arg;
+	struct list_head node;
+};
+
+static COMPAT_LIST_HEAD(process_irq_list);
+static pthread_mutex_t process_irq_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static void process_interrupt_install(struct process_interrupt *irq)
+{
+	int ret;
+	/* Add the irq to the end of the list */
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_add_tail(&irq->node, &process_irq_list);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static void process_interrupt_remove(struct process_interrupt *irq)
+{
+	int ret;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_del(&irq->node);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static struct process_interrupt *process_interrupt_find(int irq_num)
+{
+	int ret;
+	struct process_interrupt *i = NULL;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_for_each_entry(i, &process_irq_list, node) {
+		if (i->irq == irq_num)
+			goto done;
+	}
+done:
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+	return i;
+}
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name,
+		      void *arg __maybe_unused)
+{
+	struct process_interrupt *irq_node =
+		kmalloc(sizeof(*irq_node), GFP_KERNEL);
+
+	if (!irq_node)
+		return -ENOMEM;
+	irq_node->irq = irq;
+	irq_node->isr = isr;
+	irq_node->flags = flags;
+	irq_node->name = name;
+	irq_node->arg = arg;
+	process_interrupt_install(irq_node);
+	return 0;
+}
+
+int qbman_free_irq(int irq, __maybe_unused void *arg)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (!irq_node)
+		return -EINVAL;
+	process_interrupt_remove(irq_node);
+	kfree(irq_node);
+	return 0;
+}
+
+/* This is the interface from the platform-specific driver code to invoke
+ * interrupt handlers that have been registered.
+ */
+void qbman_invoke_irq(int irq)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (irq_node)
+		irq_node->isr(irq, irq_node->arg);
+}
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.h b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
new file mode 100644
index 0000000..c53035a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
@@ -0,0 +1,65 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_SYS_H
+#define __DPAA_SYS_H
+
+#include <of.h>
+
+/* For 2-element tables related to cache-inhibited and cache-enabled mappings */
+#define DPAA_PORTAL_CE 0
+#define DPAA_PORTAL_CI 1
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+#define DPAA_ASSERT(x) ASSERT(x)
+#else
+#define DPAA_ASSERT(x)	do {  } while (0)
+#endif
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name, void *arg);
+int qbman_free_irq(int irq, void *arg);
+
+void qbman_invoke_irq(int irq);
+
+#endif /* __DPAA_SYS_H */
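
A sketch of the emulated interrupt flow (an editorial example; irqreturn_t
and IRQ_HANDLED are assumed to come from the compat layer, and the handler
and names are hypothetical):

#include "dpaa_sys.h"

static irqreturn_t example_isr(int irq, void *arg)
{
	(void)irq;
	(void)arg;
	/* service the portal associated with 'irq' here */
	return IRQ_HANDLED;
}

static void example_irq_cycle(int portal_fd)
{
	qbman_request_irq(portal_fd, example_isr, 0, "example", NULL);
	qbman_invoke_irq(portal_fd); /* dispatches to example_isr() */
	qbman_free_irq(portal_fd, NULL);
}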
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 10/38] bus/dpaa: add routines for managing a RB tree
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (8 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 09/38] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 11/38] bus/dpaa: add QMAN interface driver Shreyansh Jain
                   ` (28 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

QMAN frames are managed over an RB-tree data structure.
This patch introduces the routines necessary for implementing an RB tree.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/include/dpaa_rbtree.h | 143 +++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h

diff --git a/drivers/bus/dpaa/include/dpaa_rbtree.h b/drivers/bus/dpaa/include/dpaa_rbtree.h
new file mode 100644
index 0000000..fff2110
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_rbtree.h
@@ -0,0 +1,143 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_RBTREE_H
+#define __DPAA_RBTREE_H
+
+#include <rte_common.h>
+/************/
+/* RB-trees */
+/************/
+
+/* Linux has a good RB-tree implementation, that we can't use (GPL). It also has
+ * a flat/hooked-in interface that virtually requires license-contamination in
+ * order to write a caller-compatible implementation. Instead, I've created an
+ * RB-tree encapsulation on top of linux's primitives (it does some of the work
+ * the client logic would normally do), and this gives us something we can
+ * reimplement on LWE. Unfortunately, there are no good, free RB-tree
+ * implementations out there that are license-compatible and "flat" (ie. no
+ * dynamic allocation). I did find a malloc-based one that I could convert, but
+ * that will be a task for later on. For now, LWE's RB-tree is implemented using
+ * an ordered linked-list.
+ *
+ * Note, the only linux-esque type is "struct rb_node", because it's used
+ * statically in the exported header, so it can't be opaque. Our version doesn't
+ * include a "rb_parent_color" field because we're doing linked-list instead of
+ * a true rb-tree.
+ */
+
+struct rb_node {
+	struct rb_node *prev, *next;
+};
+
+struct dpa_rbtree {
+	struct rb_node *head, *tail;
+};
+
+#define DPAA_RBTREE { NULL, NULL }
+static inline void dpa_rbtree_init(struct dpa_rbtree *tree)
+{
+	tree->head = tree->tail = NULL;
+}
+
+#define QMAN_NODE2OBJ(ptr, type, node_field) \
+	(type *)((char *)ptr - offsetof(type, node_field))
+
+#define IMPLEMENT_DPAA_RBTREE(name, type, node_field, val_field) \
+static inline int name##_push(struct dpa_rbtree *tree, type *obj) \
+{ \
+	struct rb_node *node = tree->head; \
+	if (!node) { \
+		tree->head = tree->tail = &obj->node_field; \
+		obj->node_field.prev = obj->node_field.next = NULL; \
+		return 0; \
+	} \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (obj->val_field == item->val_field) \
+			return -EBUSY; \
+		if (obj->val_field < item->val_field) { \
+			if (tree->head == node) \
+				tree->head = &obj->node_field; \
+			else \
+				node->prev->next = &obj->node_field; \
+			obj->node_field.prev = node->prev; \
+			obj->node_field.next = node; \
+			node->prev = &obj->node_field; \
+			return 0; \
+		} \
+		node = node->next; \
+	} \
+	obj->node_field.prev = tree->tail; \
+	obj->node_field.next = NULL; \
+	tree->tail->next = &obj->node_field; \
+	tree->tail = &obj->node_field; \
+	return 0; \
+} \
+static inline void name##_del(struct dpa_rbtree *tree, type *obj) \
+{ \
+	if (tree->head == &obj->node_field) { \
+		if (tree->tail == &obj->node_field) \
+			/* Only item in the list */ \
+			tree->head = tree->tail = NULL; \
+		else { \
+			/* Is the head, next != NULL */ \
+			tree->head = tree->head->next; \
+			tree->head->prev = NULL; \
+		} \
+	} else { \
+		if (tree->tail == &obj->node_field) { \
+			/* Is the tail, prev != NULL */ \
+			tree->tail = tree->tail->prev; \
+			tree->tail->next = NULL; \
+		} else { \
+			/* Is neither the head nor the tail */ \
+			obj->node_field.prev->next = obj->node_field.next; \
+			obj->node_field.next->prev = obj->node_field.prev; \
+		} \
+	} \
+} \
+static inline type *name##_find(struct dpa_rbtree *tree, u32 val) \
+{ \
+	struct rb_node *node = tree->head; \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (val == item->val_field) \
+			return item; \
+		if (val < item->val_field) \
+			return NULL; \
+		node = node->next; \
+	} \
+	return NULL; \
+}
+
+#endif /* __DPAA_RBTREE_H */
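
The macro stamps out push/del/find helpers for one object type. A minimal
sketch keyed by a u32 fqid (an editorial example; the type and names are
hypothetical, and u32 is assumed to come from the compat layer):

#include <dpaa_rbtree.h>

struct example_fq {
	u32 fqid;            /* the ordering key (val_field) */
	struct rb_node node; /* the list hook (node_field) */
};

/* Generates example_tree_push(), example_tree_del() and
 * example_tree_find() for struct example_fq, keyed by fqid.
 */
IMPLEMENT_DPAA_RBTREE(example_tree, struct example_fq, node, fqid);

static void example(void)
{
	struct dpa_rbtree tree = DPAA_RBTREE;
	struct example_fq fq = { .fqid = 42 };

	example_tree_push(&tree, &fq); /* returns -EBUSY on a duplicate key */
	if (example_tree_find(&tree, 42))
		example_tree_del(&tree, &fq);
}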
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 11/38] bus/dpaa: add QMAN interface driver
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (9 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 10/38] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 12/38] bus/dpaa: add QMan driver core routines Shreyansh Jain
                   ` (27 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Queue Manager (QMan) is a hardware queue management block that
allows software and accelerators on the datapath to enqueue and dequeue
frames in order to communicate.

This is part of the QBMAN DPAA block.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    4 +
 drivers/bus/dpaa/base/qbman/qman_driver.c |  271 ++++++
 drivers/bus/dpaa/base/qbman/qman_priv.h   |  314 +++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 1283 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   13 +
 5 files changed, 1885 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 193ffc1..f1120bd 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -71,6 +71,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman_driver.c \
 	base/qbman/dpaa_sys.c
 
+# Link Pthread
+LDLIBS += -lpthread
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
new file mode 100644
index 0000000..80dde20
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -0,0 +1,271 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "qman_priv.h"
+#include <sys/ioctl.h>
+#include <rte_branch_prediction.h>
+
+/* Global variable containing revision id (even on non-control plane systems
+ * where CCSR isn't available).
+ */
+u16 qman_ip_rev;
+u16 qm_channel_pool1 = QMAN_CHANNEL_POOL1;
+u16 qm_channel_caam = QMAN_CHANNEL_CAAM;
+u16 qm_channel_pme = QMAN_CHANNEL_PME;
+
+/* CCSR map address to access CCSR-based registers */
+void *qman_ccsr_map;
+/* The qman clock frequency */
+u32 qman_clk;
+
+static __thread int fd = -1;
+static __thread struct qm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_qman
+};
+
+static int fsl_qman_portal_init(uint32_t index, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu\n");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!\n");
+		return -EINVAL;
+	}
+
+	/* Allocate and map a qman portal */
+	map.index = index;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	pcfg.channel = map.channel;
+	pcfg.pools = map.pools;
+	pcfg.index = map.index;
+
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+
+	fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("QMan irq init failed\n");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
+	pcfg.is_shared = is_shared;
+	pcfg.node = NULL;
+	pcfg.irq = fd;
+
+	irq_map.type = dpaa_portal_qman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_qman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int qman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_qman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int qman_thread_finish(void)
+{
+	return fsl_qman_portal_finish();
+}
+
+void qman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int qman_global_init(void)
+{
+	const struct device_node *dt_node;
+	int ret = 0;
+	size_t lenp;
+	const u32 *chanid;
+	static int ccsr_map_fd;
+	const uint32_t *qman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	const u32 *clk;
+
+	static int done;
+
+	if (done)
+		return -EBUSY;
+
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman-portal");
+	if (!dt_node) {
+		pr_err("No qman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,qman-portal-1.0.0"))
+		pr_err("QMan rev1.0 on P4080 rev1 is not supported!\n");
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.1") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.1.0"))
+		qman_ip_rev = QMAN_REV11;
+	else if	(of_device_is_compatible(dt_node, "fsl,qman-portal-1.2") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.2.0"))
+		qman_ip_rev = QMAN_REV12;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-2.0.0"))
+		qman_ip_rev = QMAN_REV20;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.1"))
+		qman_ip_rev = QMAN_REV30;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.3"))
+		qman_ip_rev = QMAN_REV31;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.1"))
+		qman_ip_rev = QMAN_REV32;
+	else
+		qman_ip_rev = QMAN_REV11;
+
+	if (!qman_ip_rev) {
+		pr_err("Unknown qman portal version\n");
+		return -ENODEV;
+	}
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30) {
+		qm_channel_pool1 = QMAN_CHANNEL_POOL1_REV3;
+		qm_channel_caam = QMAN_CHANNEL_CAAM_REV3;
+		qm_channel_pme = QMAN_CHANNEL_PME_REV3;
+	}
+
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,pool-channel-range");
+	if (!dt_node) {
+		pr_err("No qman pool channel range available\n");
+		return -ENODEV;
+	}
+	chanid = of_get_property(dt_node, "fsl,pool-channel-range", &lenp);
+	if (!chanid) {
+		pr_err("Can not get pool-channel-range property\n");
+		return -EINVAL;
+	}
+
+	/* get ccsr base */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman");
+	if (!dt_node) {
+		pr_err("No qman device node available\n");
+		return -ENODEV;
+	}
+	qman_addr = of_get_address(dt_node, 0, &regs_size, NULL);
+	if (!qman_addr) {
+		pr_err("of_get_address cannot return qman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(dt_node, qman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Can not open /dev/mem for qman ccsr map\n");
+		return ccsr_map_fd;
+	}
+
+	qman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (qman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map qman ccsr base\n");
+		return -EINVAL;
+	}
+
+	clk = of_get_property(dt_node, "clock-frequency", NULL);
+	if (!clk)
+		pr_warn("Can't find Qman clock frequency\n");
+	else
+		qman_clk = be32_to_cpu(*clk);
+
+	return ret;
+}
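
Because fsl_qman_portal_init() derives the stashing CPU from the calling
thread's affinity, each I/O thread must be pinned to exactly one CPU before
qman_thread_init() is called. A sketch of that pattern (an editorial example;
it assumes qman_thread_init()/qman_thread_finish() are exported through
fsl_usd.h, as this patch's diffstat indicates):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <fsl_usd.h>

static void *example_io_thread(void *arg)
{
	cpu_set_t cpuset;

	(void)arg;
	CPU_ZERO(&cpuset);
	CPU_SET(1, &cpuset); /* exactly one CPU, else init fails -EINVAL */
	pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);

	if (qman_thread_init())
		return NULL;
	/* ... enqueue/dequeue via this thread's affine portal ... */
	qman_thread_finish();
	return NULL;
}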
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
new file mode 100644
index 0000000..e9826c2
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -0,0 +1,314 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __QMAN_PRIV_H
+#define __QMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_qman.h>
+
+#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
+#error "_ARM64 requires _FSL_QMAN_FQ_LOOKUP"
+#endif
+
+/* Congestion Groups */
+/*
+ * This wrapper represents a bit-array for the state of the 256 QMan congestion
+ * groups. It is also used as a *mask* for congestion groups, eg. so we ignore
+ * those that don't concern us. We harness the structure and accessor details
+ * already used in the management command to query congestion groups.
+ */
+struct qman_cgrs {
+	struct __qm_mcr_querycongestion q;
+};
+
+static inline void qman_cgrs_init(struct qman_cgrs *c)
+{
+	memset(c, 0, sizeof(*c));
+}
+
+static inline void qman_cgrs_fill(struct qman_cgrs *c)
+{
+	memset(c, 0xff, sizeof(*c));
+}
+
+static inline int qman_cgrs_get(struct qman_cgrs *c, int num)
+{
+	return QM_MCR_QUERYCONGESTION(&c->q, num);
+}
+
+static inline void qman_cgrs_set(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] |= (0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline void qman_cgrs_unset(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] &= ~(0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline int qman_cgrs_next(struct qman_cgrs *c, int num)
+{
+	while ((++num < (int)__CGR_NUM) && !qman_cgrs_get(c, num))
+		;
+	return num;
+}
+
+static inline void qman_cgrs_cp(struct qman_cgrs *dest,
+				const struct qman_cgrs *src)
+{
+	*dest = *src;
+}
+
+static inline void qman_cgrs_and(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) & *(_b++);
+}
+
+static inline void qman_cgrs_xor(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) ^ *(_b++);
+}
+
+/* used by CCSR and portal interrupt code */
+enum qm_isr_reg {
+	qm_isr_status = 0,
+	qm_isr_enable = 1,
+	qm_isr_disable = 2,
+	qm_isr_inhibit = 3
+};
+
+struct qm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	struct device_node *node;
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* If the caller enables DQRR stashing (and thus wishes to operate the
+	 * portal from only one cpu), this is the logical CPU that the portal
+	 * will stash to. Whether stashing is enabled or not, this setting is
+	 * also used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.)
+	 */
+	int is_shared;
+	/* The portal's dedicated channel id, use this value for initialising
+	 * frame queues to target this portal when scheduled.
+	 */
+	u16 channel;
+	/* A mask of which pool channels this portal has dequeue access to
+	 * (using QM_SDQCR_CHANNELS_POOL(n) for the bitmask).
+	 */
+	u32 pools;
+
+};
+
+/* Revision info (for errata and feature handling) */
+#define QMAN_REV11 0x0101
+#define QMAN_REV12 0x0102
+#define QMAN_REV20 0x0200
+#define QMAN_REV30 0x0300
+#define QMAN_REV31 0x0301
+#define QMAN_REV32 0x0302
+extern u16 qman_ip_rev; /* 0 if uninitialised, otherwise QMAN_REVx */
+extern u32 qman_clk;
+
+int qm_set_wpm(int wpm);
+int qm_get_wpm(int *wpm);
+
+struct qman_portal *qman_create_affine_portal(
+			const struct qm_portal_config *config,
+			const struct qman_cgrs *cgrs);
+const struct qm_portal_config *qman_destroy_affine_portal(void);
+
+struct qm_portal_config *qm_get_unused_portal(void);
+struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
+
+void qm_put_unused_portal(struct qm_portal_config *pcfg);
+void qm_set_liodns(struct qm_portal_config *pcfg);
+
+/* This CGR feature is supported by h/w and required by unit-tests and the
+ * debugfs hooks, so is implemented in the driver. However it allows an explicit
+ * corruption of h/w fields by s/w that are usually incorruptible (because the
+ * counters are usually maintained entirely within h/w). As such, we declare
+ * this API internally.
+ */
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result);
+
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+/* If the fq object pointer is greater than the size of the context_b field,
+ * then a lookup table is required.
+ */
+int qman_setup_fq_lookup_table(size_t num_entries);
+#endif
+
+/*   QMan s/w corenet portal, low-level i/face	 */
+
+/*
+ * For the SDQCR register: choose one SOURCE. Choose one COUNT. Choose one
+ * dequeue TYPE. Choose TOKEN (8-bit).
+ * If SOURCE == CHANNELS,
+ *   Choose CHANNELS_DEDICATED and/or CHANNELS_POOL(n).
+ *   You can choose DEDICATED_PRECEDENCE if the portal channel should have
+ *   priority.
+ * If SOURCE == SPECIFICWQ,
+ *     Either select the work-queue ID with SPECIFICWQ_WQ(), or select the
+ *     channel (SPECIFICWQ_DEDICATED or SPECIFICWQ_POOL()) and specify the
+ *     work-queue priority (0-7) with SPECIFICWQ_WQ() - either way, you get the
+ *     same value.
+ */
+#define QM_SDQCR_SOURCE_CHANNELS	0x0
+#define QM_SDQCR_SOURCE_SPECIFICWQ	0x40000000
+#define QM_SDQCR_COUNT_EXACT1		0x0
+#define QM_SDQCR_COUNT_UPTO3		0x20000000
+#define QM_SDQCR_DEDICATED_PRECEDENCE	0x10000000
+#define QM_SDQCR_TYPE_MASK		0x03000000
+#define QM_SDQCR_TYPE_NULL		0x0
+#define QM_SDQCR_TYPE_PRIO_QOS		0x01000000
+#define QM_SDQCR_TYPE_ACTIVE_QOS	0x02000000
+#define QM_SDQCR_TYPE_ACTIVE		0x03000000
+#define QM_SDQCR_TOKEN_MASK		0x00ff0000
+#define QM_SDQCR_TOKEN_SET(v)		(((v) & 0xff) << 16)
+#define QM_SDQCR_TOKEN_GET(v)		(((v) >> 16) & 0xff)
+#define QM_SDQCR_CHANNELS_DEDICATED	0x00008000
+#define QM_SDQCR_SPECIFICWQ_MASK	0x000000f7
+#define QM_SDQCR_SPECIFICWQ_DEDICATED	0x00000000
+#define QM_SDQCR_SPECIFICWQ_POOL(n)	((n) << 4)
+#define QM_SDQCR_SPECIFICWQ_WQ(n)	(n)
+
+#define QM_VDQCR_FQID_MASK		0x00ffffff
+#define QM_VDQCR_FQID(n)		((n) & QM_VDQCR_FQID_MASK)
+
+#define QM_EQCR_VERB_VBIT		0x80
+#define QM_EQCR_VERB_CMD_MASK		0x61	/* but only one value; */
+#define QM_EQCR_VERB_CMD_ENQUEUE	0x01
+#define QM_EQCR_VERB_COLOUR_MASK	0x18	/* 4 possible values; */
+#define QM_EQCR_VERB_COLOUR_GREEN	0x00
+#define QM_EQCR_VERB_COLOUR_YELLOW	0x08
+#define QM_EQCR_VERB_COLOUR_RED		0x10
+#define QM_EQCR_VERB_COLOUR_OVERRIDE	0x18
+#define QM_EQCR_VERB_INTERRUPT		0x04	/* on command consumption */
+#define QM_EQCR_VERB_ORP		0x02	/* enable order restoration */
+#define QM_EQCR_DCA_ENABLE		0x80
+#define QM_EQCR_DCA_PARK		0x40
+#define QM_EQCR_DCA_IDXMASK		0x0f	/* "DQRR::idx" goes here */
+#define QM_EQCR_SEQNUM_NESN		0x8000	/* Advance NESN */
+#define QM_EQCR_SEQNUM_NLIS		0x4000	/* More fragments to come */
+#define QM_EQCR_SEQNUM_SEQMASK		0x3fff	/* sequence number goes here */
+#define QM_EQCR_FQID_NULL		0	/* eg. for an ORP seqnum hole */
+
+#define QM_MCC_VERB_VBIT		0x80
+#define QM_MCC_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_MCC_VERB_INITFQ_PARKED	0x40
+#define QM_MCC_VERB_INITFQ_SCHED	0x41
+#define QM_MCC_VERB_QUERYFQ		0x44
+#define QM_MCC_VERB_QUERYFQ_NP		0x45	/* "non-programmable" fields */
+#define QM_MCC_VERB_QUERYWQ		0x46
+#define QM_MCC_VERB_QUERYWQ_DEDICATED	0x47
+#define QM_MCC_VERB_ALTER_SCHED		0x48	/* Schedule FQ */
+#define QM_MCC_VERB_ALTER_FE		0x49	/* Force Eligible FQ */
+#define QM_MCC_VERB_ALTER_RETIRE	0x4a	/* Retire FQ */
+#define QM_MCC_VERB_ALTER_OOS		0x4b	/* Take FQ out of service */
+#define QM_MCC_VERB_ALTER_FQXON		0x4d	/* FQ XON */
+#define QM_MCC_VERB_ALTER_FQXOFF	0x4e	/* FQ XOFF */
+#define QM_MCC_VERB_INITCGR		0x50
+#define QM_MCC_VERB_MODIFYCGR		0x51
+#define QM_MCC_VERB_CGRTESTWRITE	0x52
+#define QM_MCC_VERB_QUERYCGR		0x58
+#define QM_MCC_VERB_QUERYCONGESTION	0x59
+
+/*
+ * Used by all portal interrupt registers except 'inhibit'
+ * Channels with frame availability
+ */
+#define QM_PIRQ_DQAVAIL	0x0000ffff
+
+/* The DQAVAIL interrupt fields break down into these bits; */
+#define QM_DQAVAIL_PORTAL	0x8000		/* Portal channel */
+#define QM_DQAVAIL_POOL(n)	(0x8000 >> (n))	/* Pool channel, n==[1..15] */
+#define QM_DQAVAIL_MASK		0xffff
+/* This mask contains all the "irqsource" bits visible to API users */
+#define QM_PIRQ_VISIBLE	(QM_PIRQ_SLOW | QM_PIRQ_DQRI)
+
+/* These are qm_<reg>_<verb>(). So for example, qm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define qm_isr_status_read(qm)		__qm_isr_read(qm, qm_isr_status)
+#define qm_isr_status_clear(qm, m)	__qm_isr_write(qm, qm_isr_status, m)
+#define qm_isr_enable_read(qm)		__qm_isr_read(qm, qm_isr_enable)
+#define qm_isr_enable_write(qm, v)	__qm_isr_write(qm, qm_isr_enable, v)
+#define qm_isr_disable_read(qm)		__qm_isr_read(qm, qm_isr_disable)
+#define qm_isr_disable_write(qm, v)	__qm_isr_write(qm, qm_isr_disable, v)
+/* TODO: unfortunate name-clash here, reword? */
+#define qm_isr_inhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 1)
+#define qm_isr_uninhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 0)
+
+#define QMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+
+#endif /* __QMAN_PRIV_H */
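
A sketch composing a static-dequeue command from the SDQCR fields above,
e.g. dequeue up to three frames per command from the dedicated channel plus
pool channel 2 (an editorial example; the token value is arbitrary and
QM_SDQCR_CHANNELS_POOL() comes from fsl_qman.h, which this header includes):

#include "qman_priv.h"

static inline u32 example_sdqcr(void)
{
	return QM_SDQCR_SOURCE_CHANNELS |
	       QM_SDQCR_COUNT_UPTO3 |
	       QM_SDQCR_TYPE_ACTIVE |
	       QM_SDQCR_TOKEN_SET(0xab) |
	       QM_SDQCR_CHANNELS_DEDICATED |
	       QM_SDQCR_CHANNELS_POOL(2);
}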
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
new file mode 100644
index 0000000..740ee25
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -0,0 +1,1283 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_QMAN_H
+#define __FSL_QMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <dpaa_rbtree.h>
+
+/* FQ lookups (turn this on for 64-bit user-space) */
+#if (__WORDSIZE == 64)
+#define CONFIG_FSL_QMAN_FQ_LOOKUP
+/* if FQ lookups are supported, this controls the number of initialised,
+ * s/w-consumed FQs that can be supported at any one time.
+ */
+#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
+#endif
+
+/* Last updated for v00.800 of the BG */
+
+/* Hardware constants */
+#define QM_CHANNEL_SWPORTAL0 0
+#define QMAN_CHANNEL_POOL1 0x21
+#define QMAN_CHANNEL_CAAM 0x80
+#define QMAN_CHANNEL_PME 0xa0
+#define QMAN_CHANNEL_POOL1_REV3 0x401
+#define QMAN_CHANNEL_CAAM_REV3 0x840
+#define QMAN_CHANNEL_PME_REV3 0x860
+extern u16 qm_channel_pool1;
+extern u16 qm_channel_caam;
+extern u16 qm_channel_pme;
+enum qm_dc_portal {
+	qm_dc_portal_fman0 = 0,
+	qm_dc_portal_fman1 = 1,
+	qm_dc_portal_caam = 2,
+	qm_dc_portal_pme = 3
+};
+
+/* Portal processing (interrupt) sources */
+#define QM_PIRQ_CCSCI	0x00200000	/* CEETM Congestion State Change */
+#define QM_PIRQ_CSCI	0x00100000	/* Congestion State Change */
+#define QM_PIRQ_EQCI	0x00080000	/* Enqueue Command Committed */
+#define QM_PIRQ_EQRI	0x00040000	/* EQCR Ring (below threshold) */
+#define QM_PIRQ_DQRI	0x00020000	/* DQRR Ring (non-empty) */
+#define QM_PIRQ_MRI	0x00010000	/* MR Ring (non-empty) */
+/*
+ * This mask contains all the interrupt sources that need handling except DQRI,
+ * ie. that if present should trigger slow-path processing.
+ */
+#define QM_PIRQ_SLOW	(QM_PIRQ_CSCI | QM_PIRQ_EQCI | QM_PIRQ_EQRI | \
+			QM_PIRQ_MRI | QM_PIRQ_CCSCI)
+
+/* For qman_static_dequeue_*** APIs */
+#define QM_SDQCR_CHANNELS_POOL_MASK	0x00007fff
+/* for n in [1,15] */
+#define QM_SDQCR_CHANNELS_POOL(n)	(0x00008000 >> (n))
+/* for conversion from n of qm_channel */
+static inline u32 QM_SDQCR_CHANNELS_POOL_CONV(u16 channel)
+{
+	return QM_SDQCR_CHANNELS_POOL(channel + 1 - qm_channel_pool1);
+}
+
+/* For qman_volatile_dequeue(); Choose one PRECEDENCE. EXACT is optional. Use
+ * NUMFRAMES(n) (6-bit) or NUMFRAMES_TILLEMPTY to fill in the frame-count. Use
+ * FQID(n) to fill in the frame queue ID.
+ */
+#define QM_VDQCR_PRECEDENCE_VDQCR	0x0
+#define QM_VDQCR_PRECEDENCE_SDQCR	0x80000000
+#define QM_VDQCR_EXACT			0x40000000
+#define QM_VDQCR_NUMFRAMES_MASK		0x3f000000
+#define QM_VDQCR_NUMFRAMES_SET(n)	(((n) & 0x3f) << 24)
+#define QM_VDQCR_NUMFRAMES_GET(n)	(((n) >> 24) & 0x3f)
+#define QM_VDQCR_NUMFRAMES_TILLEMPTY	QM_VDQCR_NUMFRAMES_SET(0)
+
+/* --- QMan data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct qm_eqcr_entry;	/* EQCR (EnQueue Command Ring) entries */
+struct qm_dqrr_entry;	/* DQRR (DeQueue Response Ring) entries */
+struct qm_mr_entry;	/* MR (Message Ring) entries */
+struct qm_mc_command;	/* MC (Management Command) command */
+struct qm_mc_result;	/* MC result */
+
+#define QM_FD_FORMAT_SG		0x4
+#define QM_FD_FORMAT_LONG	0x2
+#define QM_FD_FORMAT_COMPOUND	0x1
+enum qm_fd_format {
+	/*
+	 * 'contig' implies a contiguous buffer, whereas 'sg' implies a
+	 * scatter-gather table. 'big' implies a 29-bit length with no offset
+	 * field, otherwise length is 20-bit and offset is 9-bit. 'compound'
+	 * implies a s/g-like table, where each entry itself represents a frame
+	 * (contiguous or scatter-gather) and the 29-bit "length" is
+	 * interpreted purely for congestion calculations, ie. a "congestion
+	 * weight".
+	 */
+	qm_fd_contig = 0,
+	qm_fd_contig_big = QM_FD_FORMAT_LONG,
+	qm_fd_sg = QM_FD_FORMAT_SG,
+	qm_fd_sg_big = QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG,
+	qm_fd_compound = QM_FD_FORMAT_COMPOUND
+};
+
+/* Capitalised versions are un-typed but can be used in static expressions */
+#define QM_FD_CONTIG	0
+#define QM_FD_CONTIG_BIG QM_FD_FORMAT_LONG
+#define QM_FD_SG	QM_FD_FORMAT_SG
+#define QM_FD_SG_BIG	(QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG)
+#define QM_FD_COMPOUND	QM_FD_FORMAT_COMPOUND
+
+/* "Frame Descriptor (FD)" */
+struct qm_fd {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 dd:2;	/* dynamic debug */
+			u8 liodn_offset:6;
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 eliodn_offset:4;
+			u8 __reserved:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u8 liodn_offset:6;
+			u8 dd:2;	/* dynamic debug */
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 __reserved:4;
+			u8 eliodn_offset:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#endif
+		};
+		struct {
+			u64 __notaddress:24;
+			/* More efficient address accessor */
+			u64 addr:40;
+		};
+		u64 opaque_addr;
+	};
+	/* The 'format' field indicates the interpretation of the remaining 29
+	 * bits of the 32-bit word. For packing reasons, it is duplicated in the
+	 * other union elements. Note, union'd structs are difficult to use with
+	 * static initialisation under gcc, in which case use the "opaque" form
+	 * with one of the macros.
+	 */
+	union {
+		/* For easier/faster copying of this part of the fd (eg. from a
+		 * DQRR entry to an EQCR entry) copy 'opaque'
+		 */
+		u32 opaque;
+		/* If 'format' is _contig or _sg, 20b length and 9b offset */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format format:3;
+			u16 offset:9;
+			u32 length20:20;
+#else
+			u32 length20:20;
+			u16 offset:9;
+			enum qm_fd_format format:3;
+#endif
+		};
+		/* If 'format' is _contig_big or _sg_big, 29b length */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format1:3;
+			u32 length29:29;
+#else
+			u32 length29:29;
+			enum qm_fd_format _format1:3;
+#endif
+		};
+		/* If 'format' is _compound, 29b "congestion weight" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format2:3;
+			u32 cong_weight:29;
+#else
+			u32 cong_weight:29;
+			enum qm_fd_format _format2:3;
+#endif
+		};
+	};
+	union {
+		u32 cmd;
+		u32 status;
+	};
+} __attribute__((aligned(8)));
+#define QM_FD_DD_NULL		0x00
+#define QM_FD_PID_MASK		0x3f
+static inline u64 qm_fd_addr_get64(const struct qm_fd *fd)
+{
+	return fd->addr;
+}
+
+static inline dma_addr_t qm_fd_addr(const struct qm_fd *fd)
+{
+	return (dma_addr_t)fd->addr;
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_fd_addr_set64(fd, v) \
+	do { \
+		struct qm_fd *__fd931 = (fd); \
+		__fd931->addr = v; \
+	} while (0)
+
+/* For static initialisation of FDs (which is complicated by the use of unions
+ * in "struct qm_fd"), use the following macros. Note that;
+ * - 'dd', 'pid' and 'bpid' are ignored (because there's no static
+ *   initialisation use-case),
+ * - use capitalised QM_FD_*** formats for static initialisation.
+ */
+#define QM_FD_FMT_20(cmd, addr_hi, addr_lo, fmt, off, len) \
+	{ 0, 0, 0, 0, 0, addr_hi, addr_lo, \
+	{ (((fmt) & 0x7) << 29) | (((off) & 0x1ff) << 20) | ((len) & 0xfffff) }, \
+	{ cmd } }
+#define QM_FD_FMT_29(cmd, addr_hi, addr_lo, fmt, len) \
+	{ 0, 0, 0, 0, 0, addr_hi, addr_lo, \
+	{ (((fmt) & 0x7) << 29) | ((len) & 0x1fffffff) }, \
+	{ cmd } }
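+/* Eg. an illustrative static initialisation of a 64-byte contiguous frame at
+ * 40-bit address 0x01deadbee0 (addr_hi=0x01, addr_lo=0xdeadbee0), offset 0:
+ *   struct qm_fd fd = QM_FD_FMT_20(0, 0x01, 0xdeadbee0, QM_FD_CONTIG, 0, 64);
+ */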
+
+
+/* Scatter/Gather table entry */
+struct qm_sg_entry {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1[3];
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[3];
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:24;
+			u64 addr:40;
+#else
+			u64 addr:40;
+			u64 __notaddress:24;
+#endif
+		};
+		u64 opaque;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 extension:1;	/* Extension bit */
+			u32 final:1;		/* Final bit */
+			u32 length:30;
+#else
+			u32 length:30;
+			u32 final:1;		/* Final bit */
+			u32 extension:1;	/* Extension bit */
+#endif
+		};
+		u32 val;
+	};
+	u8 __reserved2;
+	u8 bpid;
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved3:3;
+			u16 offset:13;
+#else
+			u16 offset:13;
+			u16 __reserved3:3;
+#endif
+		};
+		u16 val_off;
+	};
+} __packed;
+static inline u64 qm_sg_entry_get64(const struct qm_sg_entry *sg)
+{
+	return be64_to_cpu(sg->opaque);
+}
+
+static inline dma_addr_t qm_sg_addr(const struct qm_sg_entry *sg)
+{
+	return (dma_addr_t)be64_to_cpu(sg->opaque);
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_sg_entry_set64(sg, v) \
+	do { \
+		struct qm_sg_entry *__sg931 = (sg); \
+		__sg931->opaque = cpu_to_be64(v); \
+	} while (0)
+
+/* See 1.5.8.1: "Enqueue Command" */
+struct qm_eqcr_entry {
+	u8 __dont_write_directly__verb;
+	u8 dca;
+	u16 seqnum;
+	u32 orp;	/* 24-bit */
+	u32 fqid;	/* 24-bit */
+	u32 tag;
+	struct qm_fd fd;
+	u8 __reserved3[32];
+} __packed;
+
+
+/* "Frame Dequeue Response" */
+struct qm_dqrr_entry {
+	u8 verb;
+	u8 stat;
+	u16 seqnum;	/* 15-bit */
+	u8 tok;
+	u8 __reserved2[3];
+	u32 fqid;	/* 24-bit */
+	u32 contextB;
+	struct qm_fd fd;
+	u8 __reserved4[32];
+};
+
+#define QM_DQRR_VERB_VBIT		0x80
+#define QM_DQRR_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_DQRR_VERB_FRAME_DEQUEUE	0x60	/* "this format" */
+#define QM_DQRR_STAT_FQ_EMPTY		0x80	/* FQ empty */
+#define QM_DQRR_STAT_FQ_HELDACTIVE	0x40	/* FQ held active */
+#define QM_DQRR_STAT_FQ_FORCEELIGIBLE	0x20	/* FQ was force-eligible'd */
+#define QM_DQRR_STAT_FD_VALID		0x10	/* has a non-NULL FD */
+#define QM_DQRR_STAT_UNSCHEDULED	0x02	/* Unscheduled dequeue */
+#define QM_DQRR_STAT_DQCR_EXPIRED	0x01	/* VDQCR or PDQCR expired*/
+
+
+/* "ERN Message Response" */
+/* "FQ State Change Notification" */
+struct qm_mr_entry {
+	u8 verb;
+	union {
+		struct {
+			u8 dca;
+			u16 seqnum;
+			u8 rc;		/* Rejection Code */
+			u32 orp:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed ern;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+			u8 __reserved1:4;
+			enum qm_dc_portal portal:2;
+#else
+			enum qm_dc_portal portal:2;
+			u8 __reserved1:4;
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+#endif
+			u16 __reserved2;
+			u8 rc;		/* Rejection Code */
+			u32 __reserved3:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed dcern;
+		struct {
+			u8 fqs;		/* Frame Queue Status */
+			u8 __reserved1[6];
+			u32 fqid;	/* 24-bit */
+			u32 contextB;
+			u8 __reserved2[16];
+		} __packed fq;		/* FQRN/FQRNI/FQRL/FQPN */
+	};
+	u8 __reserved2[32];
+} __packed;
+#define QM_MR_VERB_VBIT			0x80
+/*
+ * ERNs originating from direct-connect portals ("dcern") use 0x20 as a verb
+ * which would be invalid as a s/w enqueue verb. A s/w ERN can be distinguished
+ * from the other MR types by noting if the 0x20 bit is unset.
+ */
+#define QM_MR_VERB_TYPE_MASK		0x27
+#define QM_MR_VERB_DC_ERN		0x20
+#define QM_MR_VERB_FQRN			0x21
+#define QM_MR_VERB_FQRNI		0x22
+#define QM_MR_VERB_FQRL			0x23
+#define QM_MR_VERB_FQPN			0x24
+#define QM_MR_RC_MASK			0xf0	/* contains one of; */
+#define QM_MR_RC_CGR_TAILDROP		0x00
+#define QM_MR_RC_WRED			0x10
+#define QM_MR_RC_ERROR			0x20
+#define QM_MR_RC_ORPWINDOW_EARLY	0x30
+#define QM_MR_RC_ORPWINDOW_LATE		0x40
+#define QM_MR_RC_FQ_TAILDROP		0x50
+#define QM_MR_RC_ORPWINDOW_RETIRED	0x60
+#define QM_MR_RC_ORP_ZERO		0x70
+#define QM_MR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+#define QM_MR_DCERN_COLOUR_GREEN	0x00
+#define QM_MR_DCERN_COLOUR_YELLOW	0x01
+#define QM_MR_DCERN_COLOUR_RED		0x02
+#define QM_MR_DCERN_COLOUR_OVERRIDE	0x03
+/*
+ * An identical structure of FQD fields is present in the "Init FQ" command and
+ * the "Query FQ" result, it's suctioned out into the "struct qm_fqd" type.
+ * Within that, the 'stashing' and 'taildrop' pieces are also factored out, the
+ * latter has two inlines to assist with converting to/from the mant+exp
+ * representation.
+ */
+struct qm_fqd_stashing {
+	/* See QM_STASHING_EXCL_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 exclusive;
+	u8 __reserved1:2;
+	/* Numbers of cachelines */
+	u8 annotation_cl:2;
+	u8 data_cl:2;
+	u8 context_cl:2;
+#else
+	u8 context_cl:2;
+	u8 data_cl:2;
+	u8 annotation_cl:2;
+	u8 __reserved1:2;
+	u8 exclusive;
+#endif
+} __packed;
+struct qm_fqd_taildrop {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved1:3;
+	u16 mant:8;
+	u16 exp:5;
+#else
+	u16 exp:5;
+	u16 mant:8;
+	u16 __reserved1:3;
+#endif
+} __packed;
+struct qm_fqd_oac {
+	/* "Overhead Accounting Control", see QM_OAC_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 oac:2; /* "Overhead Accounting Control" */
+	u8 __reserved1:6;
+#else
+	u8 __reserved1:6;
+	u8 oac:2; /* "Overhead Accounting Control" */
+#endif
+	/* Two's-complement value (-128 to +127) */
+	signed char oal; /* "Overhead Accounting Length" */
+} __packed;
+struct qm_fqd {
+	union {
+		u8 orpc;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1:2;
+			u8 orprws:3;
+			u8 oa:1;
+			u8 olws:2;
+#else
+			u8 olws:2;
+			u8 oa:1;
+			u8 orprws:3;
+			u8 __reserved1:2;
+#endif
+		} __packed;
+	};
+	u8 cgid;
+	u16 fq_ctrl;	/* See QM_FQCTRL_<...> */
+	union {
+		u16 dest_wq;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 channel:13; /* qm_channel */
+			u16 wq:3;
+#else
+			u16 wq:3;
+			u16 channel:13; /* qm_channel */
+#endif
+		} __packed dest;
+	};
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved2:1;
+	u16 ics_cred:15;
+#else
+	u16 ics_cred:15;
+	u16 __reserved2:1;
+#endif
+	/*
+	 * For "Initialize Frame Queue" commands, the write-enable mask
+	 * determines whether 'td' or 'oac_init' is observed. For query
+	 * commands, this field is always 'td', and 'oac_query' (below) reflects
+	 * the Overhead ACcounting values.
+	 */
+	union {
+		uint16_t opaque_td;
+		struct qm_fqd_taildrop td;
+		struct qm_fqd_oac oac_init;
+	};
+	u32 context_b;
+	union {
+		/* Treat it as 64-bit opaque */
+		u64 opaque;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 hi;
+			u32 lo;
+#else
+			u32 lo;
+			u32 hi;
+#endif
+		};
+		/* Treat it as s/w portal stashing config */
+		/* see "FQD Context_A field used for [...]" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			struct qm_fqd_stashing stashing;
+			/*
+			 * 48-bit address of FQ context to
+			 * stash, must be cacheline-aligned
+			 */
+			u16 context_hi;
+			u32 context_lo;
+#else
+			u32 context_lo;
+			u16 context_hi;
+			struct qm_fqd_stashing stashing;
+#endif
+		} __packed;
+	} context_a;
+	struct qm_fqd_oac oac_query;
+} __packed;
+/* 64-bit converters for context_hi/lo */
+static inline u64 qm_fqd_stashing_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.context_hi << 32) |
+		(u64)fqd->context_a.context_lo;
+}
+
+static inline dma_addr_t qm_fqd_stashing_addr(const struct qm_fqd *fqd)
+{
+	return (dma_addr_t)qm_fqd_stashing_get64(fqd);
+}
+
+static inline u64 qm_fqd_context_a_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.hi << 32) |
+		(u64)fqd->context_a.lo;
+}
+
+static inline void qm_fqd_stashing_set64(struct qm_fqd *fqd, u64 addr)
+{
+		fqd->context_a.context_hi = upper_32_bits(addr);
+		fqd->context_a.context_lo = lower_32_bits(addr);
+}
+
+static inline void qm_fqd_context_a_set64(struct qm_fqd *fqd, u64 addr)
+{
+	fqd->context_a.hi = upper_32_bits(addr);
+	fqd->context_a.lo = lower_32_bits(addr);
+}
+
+/* convert a threshold value into mant+exp representation */
+static inline int qm_fqd_taildrop_set(struct qm_fqd_taildrop *td, u32 val,
+				      int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	if (val > 0xe0000000)
+		return -ERANGE;
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	td->exp = e;
+	td->mant = val;
+	return 0;
+}
+
+/* and the other direction */
+static inline u32 qm_fqd_taildrop_get(const struct qm_fqd_taildrop *td)
+{
+	return (u32)td->mant << td->exp;
+}
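+
+/* Eg. qm_fqd_taildrop_set(&td, 0x5000, 0) stores mant=0xa0, exp=7, and
+ * qm_fqd_taildrop_get(&td) then recovers 0xa0 << 7 == 0x5000 exactly. Values
+ * whose low-order bits don't fit the 8-bit mantissa are truncated, or rounded
+ * up when 'roundup' is set.
+ */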
+
+
+/* See "Frame Queue Descriptor (FQD)" */
+/* Frame Queue Descriptor (FQD) field 'fq_ctrl' uses these constants */
+#define QM_FQCTRL_MASK		0x07ff	/* 'fq_ctrl' flags; */
+#define QM_FQCTRL_CGE		0x0400	/* Congestion Group Enable */
+#define QM_FQCTRL_TDE		0x0200	/* Tail-Drop Enable */
+#define QM_FQCTRL_ORP		0x0100	/* ORP Enable */
+#define QM_FQCTRL_CTXASTASHING	0x0080	/* Context-A stashing */
+#define QM_FQCTRL_CPCSTASH	0x0040	/* CPC Stash Enable */
+#define QM_FQCTRL_FORCESFDR	0x0008	/* High-priority SFDRs */
+#define QM_FQCTRL_AVOIDBLOCK	0x0004	/* Don't block active */
+#define QM_FQCTRL_HOLDACTIVE	0x0002	/* Hold active in portal */
+#define QM_FQCTRL_PREFERINCACHE	0x0001	/* Aggressively cache FQD */
+#define QM_FQCTRL_LOCKINCACHE	QM_FQCTRL_PREFERINCACHE /* older naming */
+
+/* See "FQD Context_A field used for [...] */
+/* Frame Queue Descriptor (FQD) field 'CONTEXT_A' uses these constants */
+#define QM_STASHING_EXCL_ANNOTATION	0x04
+#define QM_STASHING_EXCL_DATA		0x02
+#define QM_STASHING_EXCL_CTX		0x01
+
+/* See "Intra Class Scheduling" */
+/* FQD field 'OAC' (Overhead ACcounting) uses these constants */
+#define QM_OAC_ICS		0x2 /* Accounting for Intra-Class Scheduling */
+#define QM_OAC_CG		0x1 /* Accounting for Congestion Groups */
+
+/*
+ * This struct represents the 32-bit "WR_PARM_[GYR]" parameters in CGR fields
+ * and associated commands/responses. The WRED parameters are calculated from
+ * these fields as follows;
+ *   MaxTH = MA * (2 ^ Mn)
+ *   Slope = SA / (2 ^ Sn)
+ *    MaxP = 4 * (Pn + 1)
+ */
+struct qm_cgr_wr_parm {
+	union {
+		u32 word;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 MA:8;
+			u32 Mn:5;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Sn:6;
+			u32 Pn:6;
+#else
+			u32 Pn:6;
+			u32 Sn:6;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Mn:5;
+			u32 MA:8;
+#endif
+		} __packed;
+	};
+} __packed;
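+/* Eg. with illustrative values MA=64, Mn=4, SA=64, Sn=10, Pn=15, the above
+ * gives MaxTH = 64 * (2 ^ 4) = 1024, Slope = 64 / (2 ^ 10) = 0.0625 and
+ * MaxP = 4 * (15 + 1) = 64.
+ */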
+/*
+ * This struct represents the 13-bit "CS_THRES" CGR field. In the corresponding
+ * management commands, this is padded to a 16-bit structure field, so that's
+ * how we represent it here. The congestion state threshold is calculated from
+ * these fields as follows;
+ *   CS threshold = TA * (2 ^ Tn)
+ */
+struct qm_cgr_cs_thres {
+	union {
+		u16 hword;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved:3;
+			u16 TA:8;
+			u16 Tn:5;
+#else
+			u16 Tn:5;
+			u16 TA:8;
+			u16 __reserved:3;
+#endif
+		} __packed;
+	};
+} __packed;
+/*
+ * This identical structure of CGR fields is present in the "Init/Modify CGR"
+ * commands and the "Query CGR" result. It's suctioned out here into its own
+ * struct.
+ */
+struct __qm_mc_cgr {
+	struct qm_cgr_wr_parm wr_parm_g;
+	struct qm_cgr_wr_parm wr_parm_y;
+	struct qm_cgr_wr_parm wr_parm_r;
+	u8 wr_en_g;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_y;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_r;	/* boolean, use QM_CGR_EN */
+	u8 cscn_en;	/* boolean, use QM_CGR_EN */
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 cscn_targ_upd_ctrl; /* use QM_CGR_TARG_UDP_CTRL_* */
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+#else
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+			u16 cscn_targ_upd_ctrl; /* use QM_CGR_TARG_UDP_CTRL_* */
+#endif
+		};
+		u32 cscn_targ;	/* use QM_CGR_TARG_* */
+	};
+	u8 cstd_en;	/* boolean, use QM_CGR_EN */
+	u8 cs;		/* boolean, only used in query response */
+	union {
+		struct qm_cgr_cs_thres cs_thres;
+		/* use qm_cgr_cs_thres_set64() */
+		u16 __cs_thres;
+	};
+	u8 mode;	/* QMAN_CGR_MODE_FRAME not supported in rev1.0 */
+} __packed;
+#define QM_CGR_EN		0x01 /* For wr_en_*, cscn_en, cstd_en */
+#define QM_CGR_TARG_UDP_CTRL_WRITE_BIT	0x8000 /* value written to portal bit*/
+#define QM_CGR_TARG_UDP_CTRL_DCP	0x4000 /* 0: SWP, 1: DCP */
+#define QM_CGR_TARG_PORTAL(n)	(0x80000000 >> (n)) /* s/w portal, 0-9 */
+#define QM_CGR_TARG_FMAN0	0x00200000 /* direct-connect portal: fman0 */
+#define QM_CGR_TARG_FMAN1	0x00100000 /*			   : fman1 */
+/* Convert CGR thresholds to/from "cs_thres" format */
+static inline u64 qm_cgr_cs_thres_get64(const struct qm_cgr_cs_thres *th)
+{
+	return (u64)th->TA << th->Tn;
+}
+
+static inline int qm_cgr_cs_thres_set64(struct qm_cgr_cs_thres *th, u64 val,
+					int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	th->Tn = e;
+	th->TA = val;
+	return 0;
+}
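+
+/* Eg. qm_cgr_cs_thres_set64(&th, 0x8000, 0) stores TA=0x80, Tn=8, so
+ * qm_cgr_cs_thres_get64(&th) recovers 0x80 << 8 == 0x8000.
+ */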
+
+/* See 1.5.8.5.1: "Initialize FQ" */
+/* See 1.5.8.5.2: "Query FQ" */
+/* See 1.5.8.5.3: "Query FQ Non-Programmable Fields" */
+/* See 1.5.8.5.4: "Alter FQ State Commands " */
+/* See 1.5.8.6.1: "Initialize/Modify CGR" */
+/* See 1.5.8.6.2: "CGR Test Write" */
+/* See 1.5.8.6.3: "Query CGR" */
+/* See 1.5.8.6.4: "Query Congestion Group State" */
+struct qm_mcc_initfq {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	u32 fqid;	/* 24-bit */
+	u16 count;	/* Initialises 'count+1' FQDs */
+	struct qm_fqd fqd; /* the FQD fields go here */
+	u8 __reserved3[30];
+} __packed;
+struct qm_mcc_queryfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_queryfq_np {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_alterfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2;
+	u8 count;	/* number of consecutive FQID */
+	u8 __reserved3[10];
+	u32 context_b;	/* frame queue context b */
+	u8 __reserved4[40];
+} __packed;
+struct qm_mcc_initcgr {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	struct __qm_mc_cgr cgr;	/* CGR fields */
+	u8 __reserved2[2];
+	u8 cgid;
+	u8 __reserved4[32];
+} __packed;
+struct qm_mcc_cgrtestwrite {
+	u8 __reserved1[2];
+	u8 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u8 __reserved2[23];
+	u8 cgid;
+	u8 __reserved3[32];
+} __packed;
+struct qm_mcc_querycgr {
+	u8 __reserved1[30];
+	u8 cgid;
+	u8 __reserved2[32];
+} __packed;
+struct qm_mcc_querycongestion {
+	u8 __reserved[63];
+} __packed;
+struct qm_mcc_querywq {
+	u8 __reserved;
+	/* select channel if verb != QUERYWQ_DEDICATED */
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved1:3;
+#else
+			u16 __reserved1:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved2[60];
+} __packed;
+
+struct qm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct qm_mcc_initfq initfq;
+		struct qm_mcc_queryfq queryfq;
+		struct qm_mcc_queryfq_np queryfq_np;
+		struct qm_mcc_alterfq alterfq;
+		struct qm_mcc_initcgr initcgr;
+		struct qm_mcc_cgrtestwrite cgrtestwrite;
+		struct qm_mcc_querycgr querycgr;
+		struct qm_mcc_querycongestion querycongestion;
+		struct qm_mcc_querywq querywq;
+	};
+} __packed;
+
+/* INITFQ-specific flags */
+#define QM_INITFQ_WE_MASK		0x01ff	/* 'Write Enable' flags; */
+#define QM_INITFQ_WE_OAC		0x0100
+#define QM_INITFQ_WE_ORPC		0x0080
+#define QM_INITFQ_WE_CGID		0x0040
+#define QM_INITFQ_WE_FQCTRL		0x0020
+#define QM_INITFQ_WE_DESTWQ		0x0010
+#define QM_INITFQ_WE_ICSCRED		0x0008
+#define QM_INITFQ_WE_TDTHRESH		0x0004
+#define QM_INITFQ_WE_CONTEXTB		0x0002
+#define QM_INITFQ_WE_CONTEXTA		0x0001
+/* INITCGR/MODIFYCGR-specific flags */
+#define QM_CGR_WE_MASK			0x07ff	/* 'Write Enable Mask'; */
+#define QM_CGR_WE_WR_PARM_G		0x0400
+#define QM_CGR_WE_WR_PARM_Y		0x0200
+#define QM_CGR_WE_WR_PARM_R		0x0100
+#define QM_CGR_WE_WR_EN_G		0x0080
+#define QM_CGR_WE_WR_EN_Y		0x0040
+#define QM_CGR_WE_WR_EN_R		0x0020
+#define QM_CGR_WE_CSCN_EN		0x0010
+#define QM_CGR_WE_CSCN_TARG		0x0008
+#define QM_CGR_WE_CSTD_EN		0x0004
+#define QM_CGR_WE_CS_THRES		0x0002
+#define QM_CGR_WE_MODE			0x0001
+
+struct qm_mcr_initfq {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_queryfq {
+	u8 __reserved1[8];
+	struct qm_fqd fqd;	/* the FQD fields are here */
+	u8 __reserved2[30];
+} __packed;
+struct qm_mcr_queryfq_np {
+	u8 __reserved1;
+	u8 state;	/* QM_MCR_NP_STATE_*** */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 __reserved2;
+	u32 fqd_link:24;
+	u16 __reserved3:2;
+	u16 odp_seq:14;
+	u16 __reserved4:2;
+	u16 orp_nesn:14;
+	u16 __reserved5:1;
+	u16 orp_ea_hseq:15;
+	u16 __reserved6:1;
+	u16 orp_ea_tseq:15;
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+	u8 __reserved11[5];
+	u8 __reserved12:7;
+	u8 is:1;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#else
+	u8 __reserved2;
+	u32 fqd_link:24;
+
+	u16 odp_seq:14;
+	u16 __reserved3:2;
+
+	u16 orp_nesn:14;
+	u16 __reserved4:2;
+
+	u16 orp_ea_hseq:15;
+	u16 __reserved5:1;
+
+	u16 orp_ea_tseq:15;
+	u16 __reserved6:1;
+
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+
+	u8 __reserved11[5];
+	u8 is:1;
+	u8 __reserved12:7;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#endif
+} __packed;
+
+struct qm_mcr_alterfq {
+	u8 fqs;		/* Frame Queue Status */
+	u8 __reserved1[61];
+} __packed;
+struct qm_mcr_initcgr {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_cgrtestwrite {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	u32 __reserved3:24;
+	u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u32 __reserved4:24;
+	u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+	u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+	u16 lgt;	/* Last Group Tick */
+	u16 wr_prob_g;
+	u16 wr_prob_y;
+	u16 wr_prob_r;
+	u8 __reserved5[8];
+} __packed;
+struct qm_mcr_querycgr {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved3:24;
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 __reserved3:24;
+#endif
+		};
+		u64 i_bcnt;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved4:24;
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 __reserved4:24;
+#endif
+		};
+		u64 a_bcnt;
+	};
+	union {
+		u32 cscn_targ_swp[4];
+		u8 __reserved5[16];
+	};
+} __packed;
+
+struct __qm_mcr_querycongestion {
+	u32 state[8];
+};
+
+struct qm_mcr_querycongestion {
+	u8 __reserved[30];
+	/* Access this struct using QM_MCR_QUERYCONGESTION() */
+	struct __qm_mcr_querycongestion state;
+} __packed;
+struct qm_mcr_querywq {
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved:3;
+#else
+			u16 __reserved:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved[28];
+	u32 wq_len[8];
+} __packed;
+
+struct qm_mc_result {
+	u8 verb;
+	u8 result;
+	union {
+		struct qm_mcr_initfq initfq;
+		struct qm_mcr_queryfq queryfq;
+		struct qm_mcr_queryfq_np queryfq_np;
+		struct qm_mcr_alterfq alterfq;
+		struct qm_mcr_initcgr initcgr;
+		struct qm_mcr_cgrtestwrite cgrtestwrite;
+		struct qm_mcr_querycgr querycgr;
+		struct qm_mcr_querycongestion querycongestion;
+		struct qm_mcr_querywq querywq;
+	};
+} __packed;
+
+#define QM_MCR_VERB_RRID		0x80
+#define QM_MCR_VERB_MASK		QM_MCC_VERB_MASK
+#define QM_MCR_VERB_INITFQ_PARKED	QM_MCC_VERB_INITFQ_PARKED
+#define QM_MCR_VERB_INITFQ_SCHED	QM_MCC_VERB_INITFQ_SCHED
+#define QM_MCR_VERB_QUERYFQ		QM_MCC_VERB_QUERYFQ
+#define QM_MCR_VERB_QUERYFQ_NP		QM_MCC_VERB_QUERYFQ_NP
+#define QM_MCR_VERB_QUERYWQ		QM_MCC_VERB_QUERYWQ
+#define QM_MCR_VERB_QUERYWQ_DEDICATED	QM_MCC_VERB_QUERYWQ_DEDICATED
+#define QM_MCR_VERB_ALTER_SCHED		QM_MCC_VERB_ALTER_SCHED
+#define QM_MCR_VERB_ALTER_FE		QM_MCC_VERB_ALTER_FE
+#define QM_MCR_VERB_ALTER_RETIRE	QM_MCC_VERB_ALTER_RETIRE
+#define QM_MCR_VERB_ALTER_OOS		QM_MCC_VERB_ALTER_OOS
+#define QM_MCR_RESULT_NULL		0x00
+#define QM_MCR_RESULT_OK		0xf0
+#define QM_MCR_RESULT_ERR_FQID		0xf1
+#define QM_MCR_RESULT_ERR_FQSTATE	0xf2
+#define QM_MCR_RESULT_ERR_NOTEMPTY	0xf3	/* OOS fails if FQ is !empty */
+#define QM_MCR_RESULT_ERR_BADCHANNEL	0xf4
+#define QM_MCR_RESULT_PENDING		0xf8
+#define QM_MCR_RESULT_ERR_BADCOMMAND	0xff
+#define QM_MCR_NP_STATE_FE		0x10
+#define QM_MCR_NP_STATE_R		0x08
+#define QM_MCR_NP_STATE_MASK		0x07	/* Reads FQD::STATE; */
+#define QM_MCR_NP_STATE_OOS		0x00
+#define QM_MCR_NP_STATE_RETIRED		0x01
+#define QM_MCR_NP_STATE_TEN_SCHED	0x02
+#define QM_MCR_NP_STATE_TRU_SCHED	0x03
+#define QM_MCR_NP_STATE_PARKED		0x04
+#define QM_MCR_NP_STATE_ACTIVE		0x05
+#define QM_MCR_NP_PTR_MASK		0x07ff	/* for RA[12] & OD[123] */
+#define QM_MCR_NP_RA1_NRA(v)		(((v) >> 14) & 0x3)	/* FQD::NRA */
+#define QM_MCR_NP_RA2_IT(v)		(((v) >> 14) & 0x1)	/* FQD::IT */
+#define QM_MCR_NP_OD1_NOD(v)		(((v) >> 14) & 0x3)	/* FQD::NOD */
+#define QM_MCR_NP_OD3_NPC(v)		(((v) >> 14) & 0x3)	/* FQD::NPC */
+#define QM_MCR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MCR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+/* This extracts the state for congestion group 'n' from a query response.
+ * Eg.
+ *   u8 cgr = [...];
+ *   struct qm_mc_result *res = [...];
+ *   printf("congestion group %d congestion state: %d\n", cgr,
+ *       QM_MCR_QUERYCONGESTION(&res->querycongestion.state, cgr));
+ */
+#define __CGR_WORD(num)		((num) >> 5)
+#define __CGR_SHIFT(num)	((num) & 0x1f)
+#define __CGR_NUM		(sizeof(struct __qm_mcr_querycongestion) << 3)
+static inline int QM_MCR_QUERYCONGESTION(struct __qm_mcr_querycongestion *p,
+					 u8 cgr)
+{
+	return be32_to_cpu(p->state[__CGR_WORD(cgr)]) &
+	       (0x80000000 >> __CGR_SHIFT(cgr));
+}
+
+	/* Portal and Frame Queues */
+/* Represents a managed portal */
+struct qman_portal;
+
+/*
+ * This object type represents QMan frame queue descriptors (FQD), it is
+ * cacheline-aligned, and initialised by qman_create_fq(). The structure is
+ * defined further down.
+ */
+struct qman_fq;
+
+/*
+ * This object type represents a QMan congestion group, it is defined further
+ * down.
+ */
+struct qman_cgr;
+
+/*
+ * This enum, and the callback type that returns it, are used when handling
+ * dequeued frames via DQRR. Note that for "null" callbacks registered with the
+ * portal object (for handling dequeues that do not demux because context_b is
+ * NULL), the return value *MUST* be qman_cb_dqrr_consume.
+ */
+enum qman_cb_dqrr_result {
+	/* DQRR entry can be consumed */
+	qman_cb_dqrr_consume,
+	/* Like _consume, but requests parking - FQ must be held-active */
+	qman_cb_dqrr_park,
+	/* Does not consume, for DCA mode only. This allows out-of-order
+	 * consumes by explicit calls to qman_dca() and/or the use of implicit
+	 * DCA via EQCR entries.
+	 */
+	qman_cb_dqrr_defer,
+	/*
+	 * Stop processing without consuming this ring entry. Exits the current
+	 * qman_p_poll_dqrr() or interrupt-handling, as appropriate. If within
+	 * an interrupt handler, the callback would typically call
+	 * qman_irqsource_remove(QM_PIRQ_DQRI) before returning this value,
+	 * otherwise the interrupt will reassert immediately.
+	 */
+	qman_cb_dqrr_stop,
+	/* Like qman_cb_dqrr_stop, but consumes the current entry. */
+	qman_cb_dqrr_consume_stop
+};
+
+typedef enum qman_cb_dqrr_result (*qman_cb_dqrr)(struct qman_portal *qm,
+					struct qman_fq *fq,
+					const struct qm_dqrr_entry *dqrr);
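+/* Eg. a minimal consume-everything callback sketch, where process_fd() stands
+ * in for whatever the caller does with each dequeued frame:
+ *   static enum qman_cb_dqrr_result cb(struct qman_portal *qm,
+ *				      struct qman_fq *fq,
+ *				      const struct qm_dqrr_entry *dqrr)
+ *   {
+ *	process_fd(&dqrr->fd);
+ *	return qman_cb_dqrr_consume;
+ *   }
+ */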
+
+/*
+ * This callback type is used when handling ERNs, FQRNs and FQRLs via MR. They
+ * are always consumed after the callback returns.
+ */
+typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
+				const struct qm_mr_entry *msg);
+
+/* This callback type is used when handling DCP ERNs */
+typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
+				const struct qm_mr_entry *msg);
+/*
+ * s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
+ * held-active + held-suspended are just "sched". A state like "retired" will not
+ * be assumed until it is complete (ie. QMAN_FQ_STATE_CHANGING is set until
+ * then, to indicate it's completing and to gate attempts to retry the retire
+ * command). Note, park commands do not set QMAN_FQ_STATE_CHANGING because it's
+ * technically impossible in the case of enqueue DCAs (which refer to DQRR ring
+ * index rather than the FQ that ring entry corresponds to), so repeated park
+ * commands are allowed (if you're silly enough to try) but won't change FQ
+ * state, and the resulting park notifications move FQs from "sched" to
+ * "parked".
+ */
+enum qman_fq_state {
+	qman_fq_state_oos,
+	qman_fq_state_parked,
+	qman_fq_state_sched,
+	qman_fq_state_retired
+};
+
+
+/*
+ * Frame queue objects (struct qman_fq) are stored within memory passed to
+ * qman_create_fq(), as this allows stashing of caller-provided demux callback
+ * pointers at no extra cost to stashing of (driver-internal) FQ state. If the
+ * caller wishes to add per-FQ state and have it benefit from dequeue-stashing,
+ * they should;
+ *
+ * (a) extend the qman_fq structure with their state; eg.
+ *
+ *     // myfq is allocated and driver_fq callbacks filled in;
+ *     struct my_fq {
+ *	   struct qman_fq base;
+ *	   int an_extra_field;
+ *	   [ ... add other fields to be associated with each FQ ...]
+ *     } *myfq = some_my_fq_allocator();
+ *     struct qman_fq *fq = qman_create_fq(fqid, flags, &myfq->base);
+ *
+ *     // in a dequeue callback, access extra fields from 'fq' via a cast;
+ *     struct my_fq *myfq = (struct my_fq *)fq;
+ *     do_something_with(myfq->an_extra_field);
+ *     [...]
+ *
+ * (b) when and if configuring the FQ for context stashing, specify how ever
+ *     many cachelines are required to stash 'struct my_fq', to accelerate not
+ *     only the QMan driver but the callback as well.
+ */
+
+struct qman_fq_cb {
+	qman_cb_dqrr dqrr;	/* for dequeued frames */
+	qman_cb_mr ern;		/* for s/w ERNs */
+	qman_cb_mr fqs;		/* frame-queue state changes*/
+};
+
+struct qman_fq {
+	/* Caller of qman_create_fq() provides these demux callbacks */
+	struct qman_fq_cb cb;
+	/*
+	 * These are internal to the driver, don't touch. In particular, they
+	 * may change, be removed, or extended (so you shouldn't rely on
+	 * sizeof(qman_fq) being a constant).
+	 */
+	spinlock_t fqlock;
+	u32 fqid;
+	/* DPDK Interface */
+	void *dpaa_intf;
+
+	volatile unsigned long flags;
+	enum qman_fq_state state;
+	int cgr_groupid;
+	struct rb_node node;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	u32 key;
+#endif
+};
+
+/*
+ * This callback type is used when handling congestion group entry/exit.
+ * 'congested' is non-zero on congestion-entry, and zero on congestion-exit.
+ */
+typedef void (*qman_cb_cgr)(struct qman_portal *qm,
+			    struct qman_cgr *cgr, int congested);
+
+struct qman_cgr {
+	/* Set these prior to qman_create_cgr() */
+	u32 cgrid; /* 0..255, but u32 to allow specials like -1, 256, etc.*/
+	qman_cb_cgr cb;
+	/* These are private to the driver */
+	u16 chan; /* portal channel this object is created on */
+	struct list_head node;
+};
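+
+/* Eg. an illustrative CGR object prepared for qman_create_cgr(), where
+ * my_cscn_cb stands in for the caller's congestion handler:
+ *   struct qman_cgr cgr = { .cgrid = 5, .cb = my_cscn_cb };
+ */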
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_QMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 4ff48c6..b0d953f 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -47,6 +47,10 @@
 extern "C" {
 #endif
 
+/* Thread-entry/exit hooks; */
+int qman_thread_init(void);
+int qman_thread_finish(void);
+
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
 /* Obtain and free raw (unitialized) portals */
@@ -81,6 +85,15 @@ int qman_free_raw_portal(struct dpaa_raw_portal *portal);
 int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
+/* Post-process interrupts. NB, the kernel IRQ handler disables the interrupt
+ * line before notifying us, and this post-processing re-enables it once
+ * processing is complete. As such, it is essential to call this before going
+ * into another blocking read/select/poll.
+ */
+void qman_thread_irq(void);
+
+/* Global setup */
+int qman_global_init(void);
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 12/38] bus/dpaa: add QMan driver core routines
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (10 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 11/38] bus/dpaa: add QMAN interface driver Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 13/38] bus/dpaa: add BMAN driver core Shreyansh Jain
                   ` (26 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    2 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |   88 ++
 drivers/bus/dpaa/base/qbman/qman.c        | 2402 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h        |  888 +++++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c |   12 +
 drivers/bus/dpaa/base/qbman/qman_priv.h   |   11 -
 drivers/bus/dpaa/include/fsl_qman.h       |  767 ++++++++-
 drivers/bus/dpaa/include/fsl_usd.h        |    1 +
 8 files changed, 4148 insertions(+), 23 deletions(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index f1120bd..ad68828 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -71,7 +71,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
+	base/qbman/dpaa_alloc.c \
 	base/qbman/dpaa_sys.c
 
 # Link Pthread
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
new file mode 100644
index 0000000..690576a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2009-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "dpaa_sys.h"
+#include <process.h>
+#include <fsl_qman.h>
+
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_fqid, result, count, align, partial);
+}
+
+void qman_release_fqid_range(u32 fqid, u32 count)
+{
+	process_release(dpaa_id_fqid, fqid, count);
+}
+
+int qman_reserve_fqid_range(u32 fqid, unsigned int count)
+{
+	return process_reserve(dpaa_id_fqid, fqid, count);
+}
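+
+/* Eg. an illustrative all-or-nothing (partial==0) allocation of 8 FQIDs,
+ * aligned to 8, with the base of the range returned through 'fqid':
+ *   u32 fqid;
+ *   int ret = qman_alloc_fqid_range(&fqid, 8, 8, 0);
+ */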
+
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_qpool, result, count, align, partial);
+}
+
+void qman_release_pool_range(u32 pool, u32 count)
+{
+	process_release(dpaa_id_qpool, pool, count);
+}
+
+int qman_reserve_pool_range(u32 pool, u32 count)
+{
+	return process_reserve(dpaa_id_qpool, pool, count);
+}
+
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_cgrid, result, count, align, partial);
+}
+
+void qman_release_cgrid_range(u32 cgrid, u32 count)
+{
+	process_release(dpaa_id_cgrid, cgrid, count);
+}
+
+int qman_reserve_cgrid_range(u32 cgrid, u32 count)
+{
+	return process_reserve(dpaa_id_cgrid, cgrid, count);
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
new file mode 100644
index 0000000..d46e96a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -0,0 +1,2402 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define DQRR_MAXFILL	15
+#define EQCR_ITHRESH	4	/* if EQCR congests, interrupt threshold */
+#define IRQNAME		"QMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "QMan portal %d" */
+/* maximum number of DQRR entries to process in qman_poll() */
+#define FSL_QMAN_POLL_LIMIT 8
+
+/* Lock/unlock frame queues, subject to the "LOCKED" flag. This is about
+ * inter-processor locking only. Note, FQLOCK() is always called either under a
+ * local_irq_save() or from interrupt context - hence there's no need for irq
+ * protection (and indeed, attempting to nest irq-protection doesn't work, as
+ * the "irq en/disable" machinery isn't recursive...).
+ */
+#define FQLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_lock(&__fq478->fqlock); \
+	} while (0)
+#define FQUNLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_unlock(&__fq478->fqlock); \
+	} while (0)
+
+static inline void fq_set(struct qman_fq *fq, u32 mask)
+{
+	dpaa_set_bits(mask, &fq->flags);
+}
+
+static inline void fq_clear(struct qman_fq *fq, u32 mask)
+{
+	dpaa_clear_bits(mask, &fq->flags);
+}
+
+static inline int fq_isset(struct qman_fq *fq, u32 mask)
+{
+	return fq->flags & mask;
+}
+
+static inline int fq_isclear(struct qman_fq *fq, u32 mask)
+{
+	return !(fq->flags & mask);
+}
+
+struct qman_portal {
+	struct qm_portal p;
+	/* PORTAL_BITS_*** - dynamic, strictly internal */
+	unsigned long bits;
+	/* interrupt sources processed by portal_isr(), configurable */
+	unsigned long irq_sources;
+	u32 use_eqcr_ci_stashing;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* only 1 volatile dequeue at a time */
+	struct qman_fq *vdqcr_owned;
+	u32 sdqcr;
+	int dqrr_disable_ref;
+	/* A portal-specific handler for DCP ERNs. If this is NULL, the global
+	 * handler is called instead.
+	 */
+	qman_cb_dc_ern cb_dc_ern;
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct qm_portal_config *config;
+	struct dpa_rbtree retire_table;
+	char irqname[MAX_IRQNAME];
+	/* 2-element array. cgrs[0] is mask, cgrs[1] is snapshot. */
+	struct qman_cgrs *cgrs;
+	/* linked-list of CSCN handlers. */
+	struct list_head cgr_cbs;
+	/* list lock */
+	spinlock_t cgr_lock;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	/* Keep a shadow copy of the DQRR on LE systems as the SW needs to
+	 * do byte swaps of DQRR read-only memory. The first entry must be
+	 * aligned to 2 ** 10 so that DQRR index calculations can be based on
+	 * the shadow copy address (6 bits for address shift + 4 bits for the
+	 * DQRR size).
+	 */
+	struct qm_dqrr_entry shadow_dqrr[QM_DQRR_SIZE]
+		    __attribute__((aligned(1024)));
+#endif
+};
+
+/* Global handler for DCP ERNs. Used when the portal receiving the message does
+ * not have a portal-specific handler.
+ */
+static qman_cb_dc_ern cb_dc_ern;
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static u16 affine_channels[NR_CPUS];
+static DEFINE_PER_CPU(struct qman_portal, qman_affine_portal);
+
+static inline struct qman_portal *get_affine_portal(void)
+{
+	return &get_cpu_var(qman_affine_portal);
+}
+
+/* This gives a FQID->FQ lookup to cover the fact that we can't directly demux
+ * retirement notifications (the fact they are sometimes h/w-consumed means that
+ * contextB isn't always a s/w demux - and as we can't know which case it is
+ * when looking at the notification, we have to use the slow lookup for all of
+ * them). NB, it's possible to have multiple FQ objects refer to the same FQID
+ * (though at most one of them should be the consumer), so this table isn't for
+ * all FQs - FQs are added when retirement commands are issued, and removed when
+ * they complete, which also massively reduces the size of this table.
+ */
+IMPLEMENT_DPAA_RBTREE(fqtree, struct qman_fq, node, fqid);
+/*
+ * This is what everything can wait on, even if it migrates to a different cpu
+ * to the one whose affine portal it is waiting on.
+ */
+static DECLARE_WAIT_QUEUE_HEAD(affine_queue);
+
+static inline int table_push_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	int ret = fqtree_push(&p->retire_table, fq);
+
+	if (ret)
+		pr_err("ERROR: double FQ-retirement %d\n", fq->fqid);
+	return ret;
+}
+
+static inline void table_del_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	fqtree_del(&p->retire_table, fq);
+}
+
+static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
+{
+	return fqtree_find(&p->retire_table, fqid);
+}
+
+static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to HW format */
+	fqd->fq_ctrl = cpu_to_be16(fqd->fq_ctrl);
+	fqd->dest_wq = cpu_to_be16(fqd->dest_wq);
+	fqd->ics_cred = cpu_to_be16(fqd->ics_cred);
+	fqd->context_b = cpu_to_be32(fqd->context_b);
+	fqd->context_a.opaque = cpu_to_be64(fqd->context_a.opaque);
+	fqd->opaque_td = cpu_to_be16(fqd->opaque_td);
+}
+
+static inline void hw_fqd_to_cpu(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to CPU format */
+	fqd->fq_ctrl = be16_to_cpu(fqd->fq_ctrl);
+	fqd->dest_wq = be16_to_cpu(fqd->dest_wq);
+	fqd->ics_cred = be16_to_cpu(fqd->ics_cred);
+	fqd->context_b = be32_to_cpu(fqd->context_b);
+	fqd->context_a.opaque = be64_to_cpu(fqd->context_a.opaque);
+}
+
+static inline void cpu_to_hw_fd(struct qm_fd *fd)
+{
+	fd->addr = cpu_to_be40(fd->addr);
+	fd->status = cpu_to_be32(fd->status);
+	fd->opaque = cpu_to_be32(fd->opaque);
+}
+
+static inline void hw_fd_to_cpu(struct qm_fd *fd)
+{
+	fd->addr = be40_to_cpu(fd->addr);
+	fd->status = be32_to_cpu(fd->status);
+	fd->opaque = be32_to_cpu(fd->opaque);
+}
+
+/* In the case that slow- and fast-path handling are both done by qman_poll()
+ * (ie. because there is no interrupt handling), we ought to balance how often
+ * we do the fast-path poll versus the slow-path poll. We'll use two decrementer
+ * sources, so we call the fast poll 'n' times before calling the slow poll
+ * once. The idle decrementer constant is used when the last slow-poll detected
+ * no work to do, and the busy decrementer constant when the last slow-poll had
+ * work to do.
+ */
+#define SLOW_POLL_IDLE   1000
+#define SLOW_POLL_BUSY   10
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is);
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit);
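+
+/* An illustrative sketch of that balancing (sketch only, not the exact code):
+ *   if (!(p->slowpoll--))
+ *	p->slowpoll = __poll_portal_slow(p, is) ? SLOW_POLL_BUSY
+ *						: SLOW_POLL_IDLE;
+ *   __poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+ */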
+
+/* Portal interrupt handler */
+static irqreturn_t portal_isr(__always_unused int irq, void *ptr)
+{
+	struct qman_portal *p = ptr;
+	/*
+	 * The CSCI/CCSCI source is cleared inside __poll_portal_slow(), because
+	 * it could race against a Query Congestion State command also given
+	 * as part of the handling of this interrupt source. We mustn't
+	 * clear it a second time in this top-level function.
+	 */
+	u32 clear = QM_DQAVAIL_MASK | (p->irq_sources &
+		~(QM_PIRQ_CSCI | QM_PIRQ_CCSCI));
+	u32 is = qm_isr_status_read(&p->p) & p->irq_sources;
+	/* DQRR-handling if it's interrupt-driven */
+	if (is & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+	/* Handling of anything else that's interrupt-driven */
+	clear |= __poll_portal_slow(p, is);
+	qm_isr_status_clear(&p->p, clear);
+	return IRQ_HANDLED;
+}
+
+/* This inner version is used privately by qman_create_affine_portal(), as well
+ * as by the exported qman_stop_dequeues().
+ */
+static inline void qman_stop_dequeues_ex(struct qman_portal *p)
+{
+	if (!(p->dqrr_disable_ref++))
+		qm_dqrr_set_maxfill(&p->p, 0);
+}
+
+static int drain_mr_fqrni(struct qm_portal *p)
+{
+	const struct qm_mr_entry *msg;
+loop:
+	msg = qm_mr_current(p);
+	if (!msg) {
+		/*
+		 * if MR was full and h/w had other FQRNI entries to produce, we
+		 * need to allow it time to produce those entries once the
+		 * existing entries are consumed. A worst-case situation
+		 * (fully-loaded system) means h/w sequencers may have to do 3-4
+		 * other things before servicing the portal's MR pump, each of
+		 * which (if slow) may take ~50 qman cycles (which is ~200
+		 * processor cycles). So rounding up and then multiplying this
+		 * worst-case estimate by a factor of 10, just to be
+		 * ultra-paranoid, goes as high as 10,000 cycles. NB, we consume
+		 * one entry at a time, so h/w has an opportunity to produce new
+		 * entries well before the ring has been fully consumed, so
+		 * we're being *really* paranoid here.
+		 */
+		u64 now, then = mfatb();
+
+		do {
+			now = mfatb();
+		} while ((then + 10000) > now);
+		msg = qm_mr_current(p);
+		if (!msg)
+			return 0;
+	}
+	if ((msg->verb & QM_MR_VERB_TYPE_MASK) != QM_MR_VERB_FQRNI) {
+		/* We aren't draining anything but FQRNIs */
+		pr_err("Found verb 0x%x in MR\n", msg->verb);
+		return -1;
+	}
+	qm_mr_next(p);
+	qm_mr_cci_consume(p, 1);
+	goto loop;
+}
+
+static inline int qm_eqcr_init(struct qm_portal *portal,
+			       enum qm_eqcr_pmode pmode,
+				unsigned int eq_stash_thresh,
+				int eq_stash_prio)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u32 cfg;
+	u8 pi;
+
+	eqcr->ring = portal->addr.ce + QM_CL_EQCR;
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	eqcr->cursor = eqcr->ring + pi;
+	eqcr->vbit = (qm_in(EQCR_PI_CINH) & QM_EQCR_SIZE) ?
+			QM_EQCR_VERB_VBIT : 0;
+	eqcr->available = QM_EQCR_SIZE - 1 -
+			qm_cyc_diff(QM_EQCR_SIZE, eqcr->ci, pi);
+	eqcr->ithresh = qm_in(EQCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+	eqcr->pmode = pmode;
+#endif
+	cfg = (qm_in(CFG) & 0x00ffffff) |
+		(eq_stash_thresh << 28) | /* QCSP_CFG: EST */
+		(eq_stash_prio << 26)	| /* QCSP_CFG: EP */
+		((pmode & 0x3) << 24);	/* QCSP_CFG::EPM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_eqcr_finish(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 pi, ci;
+	u32 cfg;
+
+	/*
+	 * Disable EQCI stashing because the QMan only
+	 * presents the value it previously stashed to
+	 * maintain coherency.  Setting the stash threshold
+	 * to 1 then 0 ensures that QMan has resynchronized
+	 * its internal copy so that the portal is clean
+	 * when it is reinitialized in the future
+	 */
+	cfg = (qm_in(CFG) & 0x0fffffff) |
+		(1 << 28); /* QCSP_CFG: EST */
+	qm_out(CFG, cfg);
+	cfg &= 0x0fffffff; /* stash threshold = 0 */
+	qm_out(CFG, cfg);
+
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+
+	/* Refresh EQCR CI cache value */
+	qm_cl_invalidate(EQCR_CI);
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (pi != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("losing uncommited EQCR entries\n");
+	if (ci != eqcr->ci)
+		pr_crit("missing existing EQCR completions\n");
+	if (eqcr->ci != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("EQCR destroyed unquiesced\n");
+}
+
+static inline int qm_dqrr_init(struct qm_portal *portal,
+			__maybe_unused const struct qm_portal_config *config,
+			enum qm_dqrr_dmode dmode,
+			__maybe_unused enum qm_dqrr_pmode pmode,
+			enum qm_dqrr_cmode cmode, u8 max_fill)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u32 cfg;
+
+	/* Make sure the DQRR will be idle when we enable */
+	qm_out(DQRR_SDQCR, 0);
+	qm_out(DQRR_VDQCR, 0);
+	qm_out(DQRR_PDQCR, 0);
+	dqrr->ring = portal->addr.ce + QM_CL_DQRR;
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->cursor = dqrr->ring + dqrr->ci;
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+	dqrr->vbit = (qm_in(DQRR_PI_CINH) & QM_DQRR_SIZE) ?
+			QM_DQRR_VERB_VBIT : 0;
+	dqrr->ithresh = qm_in(DQRR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	dqrr->dmode = dmode;
+	dqrr->pmode = pmode;
+	dqrr->cmode = cmode;
+#endif
+	/* Invalidate every ring entry before beginning */
+	for (cfg = 0; cfg < QM_DQRR_SIZE; cfg++)
+		dccivac(qm_cl(dqrr->ring, cfg));
+	cfg = (qm_in(CFG) & 0xff000f00) |
+		((max_fill & (QM_DQRR_SIZE - 1)) << 20) | /* DQRR_MF */
+		((dmode & 1) << 18) |			/* DP */
+		((cmode & 3) << 16) |			/* DCM */
+		0xa0 |					/* RE+SE */
+		(0 ? 0x40 : 0) |			/* Ignore RP */
+		(0 ? 0x10 : 0);				/* Ignore SP */
+	qm_out(CFG, cfg);
+	qm_dqrr_set_maxfill(portal, max_fill);
+	return 0;
+}
+
+static inline void qm_dqrr_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if ((dqrr->cmode != qm_dqrr_cdc) &&
+	    (dqrr->ci != DQRR_PTR2IDX(dqrr->cursor)))
+		pr_crit("Ignoring completed DQRR entries\n");
+#endif
+}
+
+static inline int qm_mr_init(struct qm_portal *portal,
+			     __maybe_unused enum qm_mr_pmode pmode,
+			     enum qm_mr_cmode cmode)
+{
+	register struct qm_mr *mr = &portal->mr;
+	u32 cfg;
+
+	mr->ring = portal->addr.ce + QM_CL_MR;
+	mr->pi = qm_in(MR_PI_CINH) & (QM_MR_SIZE - 1);
+	mr->ci = qm_in(MR_CI_CINH) & (QM_MR_SIZE - 1);
+	mr->cursor = mr->ring + mr->ci;
+	mr->fill = qm_cyc_diff(QM_MR_SIZE, mr->ci, mr->pi);
+	mr->vbit = (qm_in(MR_PI_CINH) & QM_MR_SIZE) ? QM_MR_VERB_VBIT : 0;
+	mr->ithresh = qm_in(MR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mr->pmode = pmode;
+	mr->cmode = cmode;
+#endif
+	cfg = (qm_in(CFG) & 0xfffff0ff) |
+		((cmode & 1) << 8);		/* QCSP_CFG:MM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+	const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+	DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+		mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+		if (!mr->pi)
+			mr->vbit ^= QM_MR_VERB_VBIT;
+		mr->fill++;
+		res = MR_INC(res);
+	}
+	dcbit_ro(res);
+}
+
+static inline
+struct qman_portal *qman_create_portal(
+			struct qman_portal *portal,
+			      const struct qm_portal_config *c,
+			      const struct qman_cgrs *cgrs)
+{
+	struct qm_portal *p;
+	char buf[16];
+	int ret;
+	u32 isdr;
+
+	p = &portal->p;
+
+	portal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+	/*
+	 * prep the low-level portal struct with the mapped addresses from the
+	 * config, everything that follows depends on it and "config" is more
+	 * for (de)reference
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	/*
+	 * If CI-stashing is used, the current defaults use a threshold of 3,
+	 * and stash with higher-than-DQRR priority.
+	 */
+	if (qm_eqcr_init(p, qm_eqcr_pvb,
+			 portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {
+		pr_err("Qman EQCR initialisation failed\n");
+		goto fail_eqcr;
+	}
+	if (qm_dqrr_init(p, c, qm_dqrr_dpush, qm_dqrr_pvb,
+			 qm_dqrr_cdc, DQRR_MAXFILL)) {
+		pr_err("Qman DQRR initialisation failed\n");
+		goto fail_dqrr;
+	}
+	if (qm_mr_init(p, qm_mr_pvb, qm_mr_cci)) {
+		pr_err("Qman MR initialisation failed\n");
+		goto fail_mr;
+	}
+	if (qm_mc_init(p)) {
+		pr_err("Qman MC initialisation failed\n");
+		goto fail_mc;
+	}
+
+	/* static interrupt-gating controls */
+	qm_dqrr_set_ithresh(p, 0);
+	qm_mr_set_ithresh(p, 0);
+	qm_isr_set_iperiod(p, 0);
+	portal->cgrs = kmalloc(2 * sizeof(*cgrs), GFP_KERNEL);
+	if (!portal->cgrs)
+		goto fail_cgrs;
+	/* initial snapshot is no-depletion */
+	qman_cgrs_init(&portal->cgrs[1]);
+	if (cgrs)
+		portal->cgrs[0] = *cgrs;
+	else
+		/* if the given mask is NULL, assume all CGRs can be seen */
+		qman_cgrs_fill(&portal->cgrs[0]);
+	INIT_LIST_HEAD(&portal->cgr_cbs);
+	spin_lock_init(&portal->cgr_lock);
+	portal->bits = 0;
+	portal->slowpoll = 0;
+	portal->sdqcr = QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |
+			QM_SDQCR_DEDICATED_PRECEDENCE | QM_SDQCR_TYPE_PRIO_QOS |
+			QM_SDQCR_TOKEN_SET(0xab) | QM_SDQCR_CHANNELS_DEDICATED;
+	portal->dqrr_disable_ref = 0;
+	portal->cb_dc_ern = NULL;
+	sprintf(buf, "qportal-%d", c->channel);
+	dpa_rbtree_init(&portal->retire_table);
+	isdr = 0xffffffff;
+	qm_isr_disable_write(p, isdr);
+	portal->irq_sources = 0;
+	qm_isr_enable_write(p, portal->irq_sources);
+	qm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, portal_isr, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need EQCR to be empty before continuing */
+	isdr &= ~QM_PIRQ_EQCI;
+	qm_isr_disable_write(p, isdr);
+	ret = qm_eqcr_get_fill(p);
+	if (ret) {
+		pr_err("Qman EQCR unclean\n");
+		goto fail_eqcr_empty;
+	}
+	isdr &= ~(QM_PIRQ_DQRI | QM_PIRQ_MRI);
+	qm_isr_disable_write(p, isdr);
+	if (qm_dqrr_current(p)) {
+		pr_err("Qman DQRR unclean\n");
+		qm_dqrr_cdc_consume_n(p, 0xffff);
+	}
+	if (qm_mr_current(p) && drain_mr_fqrni(p)) {
+		/* special handling, drain just in case it's a few FQRNIs */
+		if (drain_mr_fqrni(p))
+			goto fail_dqrr_mr_empty;
+	}
+	/* Success */
+	portal->config = c;
+	qm_isr_disable_write(p, 0);
+	qm_isr_uninhibit(p);
+	/* Write a sane SDQCR */
+	qm_dqrr_sdqcr_set(p, portal->sdqcr);
+	return portal;
+fail_dqrr_mr_empty:
+fail_eqcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->cgrs);
+	spin_lock_destroy(&portal->cgr_lock);
+fail_cgrs:
+	qm_mc_finish(p);
+fail_mc:
+	qm_mr_finish(p);
+fail_mr:
+	qm_dqrr_finish(p);
+fail_dqrr:
+	qm_eqcr_finish(p);
+fail_eqcr:
+	return NULL;
+}
+
+struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
+					      const struct qman_cgrs *cgrs)
+{
+	struct qman_portal *res;
+	struct qman_portal *portal = get_affine_portal();
+	/* A criteria for calling this function (from qman_driver.c) is that
+	 * we're already affine to the cpu and won't schedule onto another cpu.
+	 */
+
+	res = qman_create_portal(portal, c, cgrs);
+	if (res) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		affine_channels[c->cpu] = c->channel;
+		spin_unlock(&affine_mask_lock);
+	}
+	return res;
+}
+
+static inline
+void qman_destroy_portal(struct qman_portal *qm)
+{
+	const struct qm_portal_config *pcfg;
+
+	/* Stop dequeues on the portal */
+	qm_dqrr_sdqcr_set(&qm->p, 0);
+
+	/*
+	 * NB we do this to "quiesce" EQCR. If we add enqueue-completions or
+	 * something related to QM_PIRQ_EQCI, this may need fixing.
+	 * Also, due to the prefetching model used for CI updates in the enqueue
+	 * path, this update will only invalidate the CI cacheline *after*
+	 * working on it, so we need to call this twice to ensure a full update
+	 * irrespective of where the enqueue processing was at when the teardown
+	 * began.
+	 */
+	qm_eqcr_cce_update(&qm->p);
+	qm_eqcr_cce_update(&qm->p);
+	pcfg = qm->config;
+
+	free_irq(pcfg->irq, qm);
+
+	kfree(qm->cgrs);
+	qm_mc_finish(&qm->p);
+	qm_mr_finish(&qm->p);
+	qm_dqrr_finish(&qm->p);
+	qm_eqcr_finish(&qm->p);
+
+	qm->config = NULL;
+
+	spin_lock_destroy(&qm->cgr_lock);
+}
+
+const struct qm_portal_config *qman_destroy_affine_portal(void)
+{
+	/* We don't want to redirect if we're a slave, use "raw" */
+	struct qman_portal *qm = get_affine_portal();
+	const struct qm_portal_config *pcfg;
+	int cpu;
+
+	pcfg = qm->config;
+	cpu = pcfg->cpu;
+
+	qman_destroy_portal(qm);
+
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int qman_get_portal_index(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+/* Inline helper to reduce nesting in __poll_portal_slow() */
+static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
+				   const struct qm_mr_entry *msg, u8 verb)
+{
+	FQLOCK(fq);
+	switch (verb) {
+	case QM_MR_VERB_FQRL:
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_ORL));
+		fq_clear(fq, QMAN_FQ_STATE_ORL);
+		table_del_fq(p, fq);
+		break;
+	case QM_MR_VERB_FQRN:
+		DPAA_ASSERT((fq->state == qman_fq_state_parked) ||
+			    (fq->state == qman_fq_state_sched));
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_CHANGING));
+		fq_clear(fq, QMAN_FQ_STATE_CHANGING);
+		if (msg->fq.fqs & QM_MR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (msg->fq.fqs & QM_MR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		fq->state = qman_fq_state_retired;
+		break;
+	case QM_MR_VERB_FQPN:
+		DPAA_ASSERT(fq->state == qman_fq_state_sched);
+		DPAA_ASSERT(fq_isclear(fq, QMAN_FQ_STATE_CHANGING));
+		fq->state = qman_fq_state_parked;
+	}
+	FQUNLOCK(fq);
+}
+
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
+{
+	const struct qm_mr_entry *msg;
+	struct qm_mr_entry swapped_msg;
+
+	if (is & QM_PIRQ_CSCI) {
+		struct qman_cgrs rr, c;
+		struct qm_mc_result *mcr;
+		struct qman_cgr *cgr;
+
+		spin_lock(&p->cgr_lock);
+		/*
+		 * The CSCI bit must be cleared _before_ issuing the
+		 * Query Congestion State command, to ensure that a long
+		 * CGR State Change callback cannot miss an intervening
+		 * state change.
+		 */
+		qm_isr_status_clear(&p->p, QM_PIRQ_CSCI);
+		qm_mc_start(&p->p);
+		qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+		while (!(mcr = qm_mc_result(&p->p)))
+			cpu_relax();
+		/* mask out the ones I'm not interested in */
+		qman_cgrs_and(&rr, (const struct qman_cgrs *)
+			&mcr->querycongestion.state, &p->cgrs[0]);
+		/* check previous snapshot for delta, enter/exit congestion */
+		qman_cgrs_xor(&c, &rr, &p->cgrs[1]);
+		/* update snapshot */
+		qman_cgrs_cp(&p->cgrs[1], &rr);
+		/* Invoke callback */
+		list_for_each_entry(cgr, &p->cgr_cbs, node)
+			if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid))
+				cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));
+		spin_unlock(&p->cgr_lock);
+	}
+
+	if (is & QM_PIRQ_EQRI) {
+		qm_eqcr_cce_update(&p->p);
+		qm_eqcr_set_ithresh(&p->p, 0);
+		wake_up(&affine_queue);
+	}
+
+	if (is & QM_PIRQ_MRI) {
+		struct qman_fq *fq;
+		u8 verb, num = 0;
+mr_loop:
+		qm_mr_pvb_update(&p->p);
+		msg = qm_mr_current(&p->p);
+		if (!msg)
+			goto mr_done;
+		swapped_msg = *msg;
+		hw_fd_to_cpu(&swapped_msg.ern.fd);
+		verb = msg->verb & QM_MR_VERB_TYPE_MASK;
+		/* The message is a software ERN iff the 0x20 bit is clear */
+		if (verb & 0x20) {
+			switch (verb) {
+			case QM_MR_VERB_FQRNI:
+				/* nada, we drop FQRNIs on the floor */
+				break;
+			case QM_MR_VERB_FQRN:
+			case QM_MR_VERB_FQRL:
+				/* Lookup in the retirement table */
+				fq = table_find_fq(p,
+						   be32_to_cpu(msg->fq.fqid));
+				BUG_ON(!fq);
+				fq_state_change(p, fq, &swapped_msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_FQPN:
+				/* Parked */
+				fq = (void *)(uintptr_t)
+					be32_to_cpu(msg->fq.contextB);
+				fq_state_change(p, fq, msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_DC_ERN:
+				/* DCP ERN */
+				if (p->cb_dc_ern)
+					p->cb_dc_ern(p, msg);
+				else if (cb_dc_ern)
+					cb_dc_ern(p, msg);
+				else {
+					static int warn_once;
+
+					if (!warn_once) {
+						pr_crit("Leaking DCP ERNs!\n");
+						warn_once = 1;
+					}
+				}
+				break;
+			default:
+				pr_crit("Invalid MR verb 0x%02x\n", verb);
+			}
+		} else {
+			/* It's a software ERN */
+			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+			fq->cb.ern(p, fq, &swapped_msg);
+		}
+		num++;
+		qm_mr_next(&p->p);
+		goto mr_loop;
+mr_done:
+		qm_mr_cci_consume(&p->p, num);
+	}
+	/*
+	 * QM_PIRQ_CSCI/CCSCI has already been cleared, as part of its specific
+	 * processing. If that interrupt source has meanwhile been re-asserted,
+	 * we mustn't clear it here (or in the top-level interrupt handler).
+	 */
+	return is & (QM_PIRQ_EQCI | QM_PIRQ_EQRI | QM_PIRQ_MRI);
+}
+
+/*
+ * remove some slowish-path stuff from the "fast path" and make sure it isn't
+ * inlined.
+ */
+static noinline void clear_vdqcr(struct qman_portal *p, struct qman_fq *fq)
+{
+	p->vdqcr_owned = NULL;
+	FQLOCK(fq);
+	fq_clear(fq, QMAN_FQ_STATE_VDQCR);
+	FQUNLOCK(fq);
+	wake_up(&affine_queue);
+}
+
+/*
+ * The only states that would conflict with other things if they ran at the
+ * same time on the same cpu are:
+ *
+ *   (i) setting/clearing vdqcr_owned, and
+ *  (ii) clearing the NE (Not Empty) flag.
+ *
+ * Both are safe. Because;
+ *
+ *   (i) this clearing can only occur after qman_set_vdq() has set the
+ *	 vdqcr_owned field (which it does before setting VDQCR), and
+ *	 qman_volatile_dequeue() blocks interrupts and preemption while this is
+ *	 done so that we can't interfere.
+ *  (ii) the NE flag is only cleared after qman_retire_fq() has set it, and as
+ *	 with (i) that API prevents us from interfering until it's safe.
+ *
+ * The good thing is that qman_set_vdq() and qman_retire_fq() run far
+ * less frequently (ie. per-FQ) than __poll_portal_fast() does, so the nett
+ * advantage comes from this function not having to "lock" anything at all.
+ *
+ * Note also that the callbacks are invoked at points which are safe against the
+ * above potential conflicts, but that this function itself is not re-entrant
+ * (this is because the function tracks one end of each FIFO in the portal and
+ * we do *not* want to lock that). So the consequence is that it is safe for
+ * user callbacks to call into any QMan API.
+ */
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit)
+{
+	const struct qm_dqrr_entry *dq;
+	struct qman_fq *fq;
+	enum qman_cb_dqrr_result res;
+	unsigned int limit = 0;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+	do {
+		qm_dqrr_pvb_update(&p->p);
+		dq = qm_dqrr_current(&p->p);
+		if (!dq)
+			break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+		/* If running on an LE system, the fields of the dequeue entry
+		 * must be byte-swapped. Because the QMan HW will ignore
+		 * writes, the DQRR entry is copied and the index stored
+		 * within the copy.
+		 */
+		shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+		*shadow = *dq;
+		dq = shadow;
+		shadow->fqid = be32_to_cpu(shadow->fqid);
+		shadow->contextB = be32_to_cpu(shadow->contextB);
+		shadow->seqnum = be16_to_cpu(shadow->seqnum);
+		hw_fd_to_cpu(&shadow->fd);
+#endif
+
+		if (dq->stat & QM_DQRR_STAT_UNSCHEDULED) {
+			/*
+			 * VDQCR: don't trust context_b as the FQ may have
+			 * been configured for h/w consumption and we're
+			 * draining it post-retirement.
+			 */
+			fq = p->vdqcr_owned;
+			/*
+			 * We only set QMAN_FQ_STATE_NE when retiring, so we
+			 * only need to check for clearing it when doing
+			 * volatile dequeues.  It's one less thing to check
+			 * in the critical path (SDQCR).
+			 */
+			if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+				fq_clear(fq, QMAN_FQ_STATE_NE);
+			/*
+			 * This is duplicated from the SDQCR code, but we
+			 * have stuff to do before *and* after this callback,
+			 * and we don't want multiple if()s in the critical
+			 * path (SDQCR).
+			 */
+			res = fq->cb.dqrr(p, fq, dq);
+			if (res == qman_cb_dqrr_stop)
+				break;
+			/* Check for VDQCR completion */
+			if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+				clear_vdqcr(p, fq);
+		} else {
+			/* SDQCR: context_b points to the FQ */
+			fq = (void *)(uintptr_t)dq->contextB;
+			/* Now let the callback do its stuff */
+			res = fq->cb.dqrr(p, fq, dq);
+			/*
+			 * The callback can request that we exit without
+			 * consuming this entry or advancing.
+			 */
+			if (res == qman_cb_dqrr_stop)
+				break;
+		}
+		/* Interpret 'dq' from a driver perspective. */
+		/*
+		 * Parking isn't possible unless HELDACTIVE was set. NB,
+		 * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+		 * check for HELDACTIVE to cover both.
+		 */
+		DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+			    (res != qman_cb_dqrr_park));
+		/* just means "skip it, I'll consume it myself later on" */
+		if (res != qman_cb_dqrr_defer)
+			qm_dqrr_cdc_consume_1ptr(&p->p, dq,
+						 res == qman_cb_dqrr_park);
+		/* Move forward */
+		qm_dqrr_next(&p->p);
+		/*
+		 * Entry processed and consumed, increment our counter.  The
+		 * callback can request that we exit after consuming the
+		 * entry, and we also exit if we reach our processing limit,
+		 * so loop back only if neither of these conditions is met.
+		 */
+	} while (++limit < poll_limit && res != qman_cb_dqrr_consume_stop);
+
+	return limit;
+}
+
+u16 qman_affine_channel(int cpu)
+{
+	if (cpu < 0) {
+		struct qman_portal *portal = get_affine_portal();
+
+		cpu = portal->config->cpu;
+	}
+	BUG_ON(!CPU_ISSET(cpu, &affine_mask));
+	return affine_channels[cpu];
+}
+
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
+{
+	struct qman_portal *p = get_affine_portal();
+	const struct qm_dqrr_entry *dq;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+
+	qm_dqrr_pvb_update(&p->p);
+	dq = qm_dqrr_current(&p->p);
+	if (!dq)
+		return NULL;
+
+	if (!(dq->stat & QM_DQRR_STAT_FD_VALID)) {
+		/* Invalid DQRR entry - consume it and return NULL to the
+		 * user, as no valid frame is seen.
+		 */
+		qman_dqrr_consume(fq, (struct qm_dqrr_entry *)dq);
+		return NULL;
+	}
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
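+	/* As in __poll_portal_fast(): on LE hosts, byte-swap the entry into a
+	 * per-portal shadow copy, since the QMan-owned ring itself must not
+	 * be written to.
+	 */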
+	shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+	*shadow = *dq;
+	dq = shadow;
+	shadow->fqid = be32_to_cpu(shadow->fqid);
+	shadow->contextB = be32_to_cpu(shadow->contextB);
+	shadow->seqnum = be16_to_cpu(shadow->seqnum);
+	hw_fd_to_cpu(&shadow->fd);
+#endif
+
+	if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+		fq_clear(fq, QMAN_FQ_STATE_NE);
+
+	return (struct qm_dqrr_entry *)dq;
+}
+
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+		clear_vdqcr(p, fq);
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, 0);
+	qm_dqrr_next(&p->p);
+}
+
+int qman_poll_dqrr(unsigned int limit)
+{
+	struct qman_portal *p = get_affine_portal();
+	int ret;
+
+	ret = __poll_portal_fast(p, limit);
+	return ret;
+}
+
+void qman_poll(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
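+	/* Slow-path sources not serviced by interrupts are only polled once
+	 * every SLOW_POLL_IDLE (or SLOW_POLL_BUSY) calls, keeping the common
+	 * path cheap.
+	 */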
+	if ((~p->irq_sources) & QM_PIRQ_SLOW) {
+		if (!(p->slowpoll--)) {
+			u32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;
+			u32 active = __poll_portal_slow(p, is);
+
+			if (active) {
+				qm_isr_status_clear(&p->p, active);
+				p->slowpoll = SLOW_POLL_BUSY;
+			} else
+				p->slowpoll = SLOW_POLL_IDLE;
+		}
+	}
+	if ((~p->irq_sources) & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+}
+
+void qman_stop_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qman_stop_dequeues_ex(p);
+}
+
+void qman_start_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	DPAA_ASSERT(p->dqrr_disable_ref > 0);
+	if (!(--p->dqrr_disable_ref))
+		qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
+}
+
+void qman_static_dequeue_add(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr |= pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+void qman_static_dequeue_del(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr &= ~pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+u32 qman_static_dequeue_get(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->sdqcr;
+}
+
+void qman_dca(struct qm_dqrr_entry *dq, int park_request)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);
+}
+
+/* Frame queue API */
+static const char *mcr_result_str(u8 result)
+{
+	switch (result) {
+	case QM_MCR_RESULT_NULL:
+		return "QM_MCR_RESULT_NULL";
+	case QM_MCR_RESULT_OK:
+		return "QM_MCR_RESULT_OK";
+	case QM_MCR_RESULT_ERR_FQID:
+		return "QM_MCR_RESULT_ERR_FQID";
+	case QM_MCR_RESULT_ERR_FQSTATE:
+		return "QM_MCR_RESULT_ERR_FQSTATE";
+	case QM_MCR_RESULT_ERR_NOTEMPTY:
+		return "QM_MCR_RESULT_ERR_NOTEMPTY";
+	case QM_MCR_RESULT_PENDING:
+		return "QM_MCR_RESULT_PENDING";
+	case QM_MCR_RESULT_ERR_BADCOMMAND:
+		return "QM_MCR_RESULT_ERR_BADCOMMAND";
+	}
+	return "<unknown MCR result>";
+}
+
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
+{
+	struct qm_fqd fqd;
+	struct qm_mcr_queryfq_np np;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID) {
+		int ret = qman_alloc_fqid(&fqid);
+
+		if (ret)
+			return ret;
+	}
+	spin_lock_init(&fq->fqlock);
+	fq->fqid = fqid;
+	fq->flags = flags;
+	fq->state = qman_fq_state_oos;
+	fq->cgr_groupid = 0;
+
+	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
+		return 0;
+	/* Everything else is AS_IS support */
+	p = get_affine_portal();
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(&fqd);
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ_NP);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ_NP failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	np = mcr->queryfq_np;
+	/* Phew, have queryfq and queryfq_np results, stitch together
+	 * the FQ object from those.
+	 */
+	fq->cgr_groupid = fqd.cgid;
+	switch (np.state & QM_MCR_NP_STATE_MASK) {
+	case QM_MCR_NP_STATE_OOS:
+		break;
+	case QM_MCR_NP_STATE_RETIRED:
+		fq->state = qman_fq_state_retired;
+		if (np.frm_cnt)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		break;
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+		fq->state = qman_fq_state_sched;
+		if (np.state & QM_MCR_NP_STATE_R)
+			fq_set(fq, QMAN_FQ_STATE_CHANGING);
+		break;
+	case QM_MCR_NP_STATE_PARKED:
+		fq->state = qman_fq_state_parked;
+		break;
+	default:
+		DPAA_ASSERT(NULL == "invalid FQ state");
+	}
+	if (fqd.fq_ctrl & QM_FQCTRL_CGE)
+		fq->state |= QMAN_FQ_STATE_CGR_EN;
+	return 0;
+err:
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID)
+		qman_release_fqid(fqid);
+	return -EIO;
+}
+
+void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
+{
+	/*
+	 * We don't need to lock the FQ as it is a pre-condition that the FQ be
+	 * quiesced. Instead, run some checks.
+	 */
+	switch (fq->state) {
+	case qman_fq_state_parked:
+		DPAA_ASSERT(flags & QMAN_FQ_DESTROY_PARKED);
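+		/* Fall through */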
+	case qman_fq_state_oos:
+		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
+			qman_release_fqid(fq->fqid);
+
+		return;
+	default:
+		break;
+	}
+	DPAA_ASSERT(NULL == "qman_free_fq() on unquiesced FQ!");
+}
+
+u32 qman_fq_fqid(struct qman_fq *fq)
+{
+	return fq->fqid;
+}
+
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags)
+{
+	if (state)
+		*state = fq->state;
+	if (flags)
+		*flags = fq->flags;
+}
+
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	u8 res, myverb = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		QM_MCC_VERB_INITFQ_SCHED : QM_MCC_VERB_INITFQ_PARKED;
+
+	if ((fq->state != qman_fq_state_oos) &&
+	    (fq->state != qman_fq_state_parked))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	if (opts && (opts->we_mask & QM_INITFQ_WE_OAC)) {
+		/* OAC can't be set at the same time as TDTHRESH */
+		if (opts->we_mask & QM_INITFQ_WE_TDTHRESH)
+			return -EINVAL;
+	}
+	/* Issue an INITFQ_[PARKED|SCHED] management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     ((fq->state != qman_fq_state_oos) &&
+				(fq->state != qman_fq_state_parked)))) {
+		FQUNLOCK(fq);
+		return -EBUSY;
+	}
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initfq = *opts;
+	mcc->initfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->initfq.count = 0;
+	/*
+	 * If the FQ does *not* have the TO_DCPORTAL flag, context_b is set as a
+	 * demux pointer. Otherwise, the caller-provided value is allowed to
+	 * stand, don't overwrite it.
+	 */
+	if (fq_isclear(fq, QMAN_FQ_FLAG_TO_DCPORTAL)) {
+		dma_addr_t phys_fq;
+
+		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+		/*
+		 * Set the physical address as well - NB, if the user wasn't
+		 * trying to set CONTEXTA, clear the stashing settings.
+		 */
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_CONTEXTA)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			memset(&mcc->initfq.fqd.context_a, 0,
+			       sizeof(mcc->initfq.fqd.context_a));
+		} else {
+			phys_fq = rte_mem_virt2phy(fq);
+			qm_fqd_stashing_set64(&mcc->initfq.fqd, phys_fq);
+		}
+	}
+	if (flags & QMAN_INITFQ_FLAG_LOCAL) {
+		mcc->initfq.fqd.dest.channel = p->config->channel;
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_DESTWQ)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_DESTWQ;
+			mcc->initfq.fqd.dest.wq = 4;
+		}
+	}
+	mcc->initfq.we_mask = cpu_to_be16(mcc->initfq.we_mask);
+	cpu_to_hw_fqd(&mcc->initfq.fqd);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		FQUNLOCK(fq);
+		return -EIO;
+	}
+	if (opts) {
+		if (opts->we_mask & QM_INITFQ_WE_FQCTRL) {
+			if (opts->fqd.fq_ctrl & QM_FQCTRL_CGE)
+				fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+			else
+				fq_clear(fq, QMAN_FQ_STATE_CGR_EN);
+		}
+		if (opts->we_mask & QM_INITFQ_WE_CGID)
+			fq->cgr_groupid = opts->fqd.cgid;
+	}
+	fq->state = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		qman_fq_state_sched : qman_fq_state_parked;
+	FQUNLOCK(fq);
+	return 0;
+}
+
+int qman_schedule_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_parked)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue a ALTERFQ_SCHED management command */
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state != qman_fq_state_parked))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_sched;
+out:
+	FQUNLOCK(fq);
+
+	return ret;
+}
+
+int qman_retire_fq(struct qman_fq *fq, u32 *flags)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int rval;
+	u8 res;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_sched))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_retired) ||
+				(fq->state == qman_fq_state_oos))) {
+		rval = -EBUSY;
+		goto out;
+	}
+	rval = table_push_fq(p, fq);
+	if (rval)
+		goto out;
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_RETIRE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_RETIRE);
+	res = mcr->result;
+	/*
+	 * "Elegant" would be to treat OK/PENDING the same way; set CHANGING,
+	 * and defer the flags until FQRNI or FQRN (respectively) show up. But
+	 * "Friendly" is to process OK immediately, and not set CHANGING. We do
+	 * friendly, otherwise the caller doesn't necessarily have a fully
+	 * "retired" FQ on return even if the retirement was immediate. However
+	 * this does mean some code duplication between here and
+	 * fq_state_change().
+	 */
+	if (likely(res == QM_MCR_RESULT_OK)) {
+		rval = 0;
+		/* Process 'fq' right away, we'll ignore FQRNI */
+		if (mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		if (flags)
+			*flags = fq->flags;
+		fq->state = qman_fq_state_retired;
+		if (fq->cb.fqs) {
+			/*
+			 * Another issue with supporting "immediate" retirement
+			 * is that we're forced to drop FQRNIs, because by the
+			 * time they're seen it may already be "too late" (the
+			 * fq may have been OOS'd and free()'d already). But if
+			 * the upper layer wants a callback whether it's
+			 * immediate or not, we have to fake a "MR" entry to
+			 * look like an FQRNI...
+			 */
+			struct qm_mr_entry msg;
+
+			msg.verb = QM_MR_VERB_FQRNI;
+			msg.fq.fqs = mcr->alterfq.fqs;
+			msg.fq.fqid = fq->fqid;
+			msg.fq.contextB = (u32)(uintptr_t)fq;
+			fq->cb.fqs(p, fq, &msg);
+		}
+	} else if (res == QM_MCR_RESULT_PENDING) {
+		rval = 1;
+		fq_set(fq, QMAN_FQ_STATE_CHANGING);
+	} else {
+		rval = -EIO;
+		table_del_fq(p, fq);
+	}
+out:
+	FQUNLOCK(fq);
+	return rval;
+}
+
+int qman_oos_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_retired)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_BLOCKOOS)) ||
+		     (fq->state != qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_OOS);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_OOS);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_oos;
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_fq_flow_control(struct qman_fq *fq, int xon)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+	u8 myverb;
+
+	if ((fq->state == qman_fq_state_oos) ||
+	    (fq->state == qman_fq_state_retired) ||
+		(fq->state == qman_fq_state_parked))
+		return -EINVAL;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue a ALTER_FQXON or ALTER_FQXOFF management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_parked) ||
+			(fq->state == qman_fq_state_oos) ||
+			(fq->state == qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->alterfq.count = 0;
+	myverb = xon ? QM_MCC_VERB_ALTER_FQXON : QM_MCC_VERB_ALTER_FQXOFF;
+
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	*fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(fqd);
+	return 0;
+}
+
+int qman_query_fq_has_pkts(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	int ret = 0;
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		ret = !!mcr->queryfq_np.frm_cnt;
+	return ret;
+}
+
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
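+		/* Convert the big-endian response fields to host order */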
+		*np = mcr->queryfq_np;
+		np->fqd_link = be24_to_cpu(np->fqd_link);
+		np->odp_seq = be16_to_cpu(np->odp_seq);
+		np->orp_nesn = be16_to_cpu(np->orp_nesn);
+		np->orp_ea_hseq  = be16_to_cpu(np->orp_ea_hseq);
+		np->orp_ea_tseq  = be16_to_cpu(np->orp_ea_tseq);
+		np->orp_ea_hptr = be24_to_cpu(np->orp_ea_hptr);
+		np->orp_ea_tptr = be24_to_cpu(np->orp_ea_tptr);
+		np->pfdr_hptr = be24_to_cpu(np->pfdr_hptr);
+		np->pfdr_tptr = be24_to_cpu(np->pfdr_tptr);
+		np->ics_surp = be16_to_cpu(np->ics_surp);
+		np->byte_cnt = be32_to_cpu(np->byte_cnt);
+		np->frm_cnt = be24_to_cpu(np->frm_cnt);
+		np->ra1_sfdr = be16_to_cpu(np->ra1_sfdr);
+		np->ra2_sfdr = be16_to_cpu(np->ra2_sfdr);
+		np->od1_sfdr = be16_to_cpu(np->od1_sfdr);
+		np->od2_sfdr = be16_to_cpu(np->od2_sfdr);
+		np->od3_sfdr = be16_to_cpu(np->od3_sfdr);
+	}
+	if (res == QM_MCR_RESULT_ERR_FQID)
+		return -ERANGE;
+	else if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	return 0;
+}
+
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res, myverb;
+
+	myverb = (query_dedicated) ? QM_MCR_VERB_QUERYWQ_DEDICATED :
+				 QM_MCR_VERB_QUERYWQ;
+	mcc = qm_mc_start(&p->p);
+	mcc->querywq.channel.id = cpu_to_be16(wq->channel.id);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		int i, array_len;
+
+		wq->channel.id = be16_to_cpu(mcr->querywq.channel.id);
+		array_len = ARRAY_SIZE(mcr->querywq.wq_len);
+		for (i = 0; i < array_len; i++)
+			wq->wq_len[i] = be32_to_cpu(mcr->querywq.wq_len[i]);
+	}
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERYWQ failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->cgrtestwrite.cgid = cgr->cgrid;
+	mcc->cgrtestwrite.i_bcnt_hi = (u8)(i_bcnt >> 32);
+	mcc->cgrtestwrite.i_bcnt_lo = (u32)i_bcnt;
+	qm_mc_commit(&p->p, QM_MCC_VERB_CGRTESTWRITE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_CGRTESTWRITE);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*result = mcr->cgrtestwrite;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("CGR TEST WRITE failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->querycgr.cgid = cgr->cgrid;
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCGR);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYCGR);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*cgrd = mcr->querycgr;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CGR failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	cgrd->cgr.wr_parm_g.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_g.word);
+	cgrd->cgr.wr_parm_y.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_y.word);
+	cgrd->cgr.wr_parm_r.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_r.word);
+	cgrd->cgr.cscn_targ =  be32_to_cpu(cgrd->cgr.cscn_targ);
+	cgrd->cgr.__cs_thres = be16_to_cpu(cgrd->cgr.__cs_thres);
+	for (i = 0; i < ARRAY_SIZE(cgrd->cscn_targ_swp); i++)
+		cgrd->cscn_targ_swp[i] =
+			be32_to_cpu(cgrd->cscn_targ_swp[i]);
+	return 0;
+}
+
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion)
+{
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	qm_mc_start(&p->p);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			QM_MCC_VERB_QUERYCONGESTION);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*congestion = mcr->querycongestion;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CONGESTION failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	for (i = 0; i < ARRAY_SIZE(congestion->state.state); i++)
+		congestion->state.state[i] =
+			be32_to_cpu(congestion->state.state[i]);
+	return 0;
+}
+
+int qman_set_vdq(struct qman_fq *fq, u16 num)
+{
+	struct qman_portal *p = get_affine_portal();
+	uint32_t vdqcr;
+	int ret = -EBUSY;
+
+	vdqcr = QM_VDQCR_EXACT;
+	vdqcr |= QM_VDQCR_NUMFRAMES_SET(num);
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired)) {
+		ret = -EINVAL;
+		goto out;
+	}
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+		ret = -EBUSY;
+		goto out;
+	}
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			/* Don't leak the FQ lock on the escape path */
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (!ret)
+		qm_dqrr_vdqcr_set(&p->p, vdqcr);
+
+out:
+	return ret;
+}
+
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
+			  u32 vdqcr)
+{
+	struct qman_portal *p;
+	int ret = -EBUSY;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired))
+		return -EINVAL;
+	if (vdqcr & QM_VDQCR_FQID_MASK)
+		return -EINVAL;
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+		return -EBUSY;
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	p = get_affine_portal();
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			/* Don't leak the FQ lock on the escape path */
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (ret)
+		return ret;
+
+	/* VDQCR is set */
+	qm_dqrr_vdqcr_set(&p->p, vdqcr);
+	return 0;
+}
+
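+/* If entries are still available, just prefetch the consumer-index cacheline
+ * for later; otherwise force an update now to liberate ring entries.
+ */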
+static noinline void update_eqcr_ci(struct qman_portal *p, u8 avail)
+{
+	if (avail)
+		qm_eqcr_cce_prefetch(&p->p);
+	else
+		qm_eqcr_cce_update(&p->p);
+}
+
+int qman_eqcr_is_empty(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	u8 avail;
+
+	update_eqcr_ci(p, 0);
+	avail = qm_eqcr_get_fill(&p->p);
+	return (avail == 0);
+}
+
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine)
+{
+	if (affine) {
+		struct qman_portal *p = get_affine_portal();
+
+		p->cb_dc_ern = handler;
+	} else
+		cb_dc_ern = handler;
+}
+
+static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
+					struct qman_fq *fq,
+					const struct qm_fd *fd,
+					u32 flags)
+{
+	struct qm_eqcr_entry *eq;
+	u8 avail;
+
+	if (p->use_eqcr_ci_stashing) {
+		/*
+		 * The stashing case is easy, only update if we need to in
+		 * order to try and liberate ring entries.
+		 */
+		eq = qm_eqcr_start_stash(&p->p);
+	} else {
+		/*
+		 * The non-stashing case is harder, need to prefetch ahead of
+		 * time.
+		 */
+		avail = qm_eqcr_get_avail(&p->p);
+		if (avail < 2)
+			update_eqcr_ci(p, avail);
+		eq = qm_eqcr_start_no_stash(&p->p);
+	}
+
+	if (unlikely(!eq))
+		return NULL;
+
+	if (flags & QMAN_ENQUEUE_FLAG_DCA)
+		eq->dca = QM_EQCR_DCA_ENABLE |
+			((flags & QMAN_ENQUEUE_FLAG_DCA_PARK) ?
+					QM_EQCR_DCA_PARK : 0) |
+			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
+	eq->fqid = cpu_to_be32(fq->fqid);
+	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+	eq->fd = *fd;
+	cpu_to_hw_fd(&eq->fd);
+	return eq;
+}
+
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_CMD_ENQUEUE |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+	/* Factor the below out, it's used from qman_enqueue_orp() too */
+	return 0;
+}
+
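+/* Enqueue up to 'frames_to_send' frames on a single FQ: record all entries
+ * and their verbs first, then flush the cachelines in one pass. Returns the
+ * number of frames actually enqueued.
+ */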
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_portal *portal = &p->p;
+
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+
+	u8 i, diff, old_ci, sent = 0;
+
+	/* Update the available entries if no entry is free */
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return 0;
+	}
+
+	/* try to send as many frames as possible */
+	while (eqcr->available && frames_to_send--) {
+		eq->fqid = cpu_to_be32(fq->fqid);
+		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+		eq->fd.opaque_addr = fd->opaque_addr;
+		eq->fd.addr = cpu_to_be40(fd->addr);
+		eq->fd.status = cpu_to_be32(fd->status);
+		eq->fd.opaque = cpu_to_be32(fd->opaque);
+
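+		/* Advance the cursor and wrap: entries are 64B and the ring
+		 * is size-aligned, so clearing the QM_EQCR_SIZE << 6 bit
+		 * wraps back to the ring base.
+		 */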
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		eqcr->available--;
+		sent++;
+		fd++;
+	}
+	lwsync();
+
+	/* Write the verb for every recorded entry first, so that the
+	 * cacheline flushes below can complete back-to-back.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		eq->__dont_write_directly__verb =
+			QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
+		prev_eq = eq;
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		if (unlikely((prev_eq + 1) != eq))
+			eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+	}
+
+	/* We need to flush all the lines, but without load/store operations
+	 * between them.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		dcbf(eq);
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+	}
+	/* Update cursor for the next call */
+	eqcr->cursor = eq;
+	return sent;
+}
+
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum)
+{
+	struct qman_portal *p  = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Process ORP-specifics here */
+	if (flags & QMAN_ENQUEUE_FLAG_NLIS)
+		orp_seqnum |= QM_EQCR_SEQNUM_NLIS;
+	else {
+		orp_seqnum &= ~QM_EQCR_SEQNUM_NLIS;
+		if (flags & QMAN_ENQUEUE_FLAG_NESN)
+			orp_seqnum |= QM_EQCR_SEQNUM_NESN;
+		else
+			/* No need to check for QMAN_ENQUEUE_FLAG_HOLE */
+			orp_seqnum &= ~QM_EQCR_SEQNUM_NESN;
+	}
+	eq->seqnum = cpu_to_be16(orp_seqnum);
+	eq->orp = cpu_to_be32(orp->fqid);
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_ORP |
+		((flags & (QMAN_ENQUEUE_FLAG_HOLE | QMAN_ENQUEUE_FLAG_NESN)) ?
+				0 : QM_EQCR_VERB_CMD_ENQUEUE) |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+
+	return 0;
+}
+
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+	u8 verb = QM_MCC_VERB_MODIFYCGR;
+
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initcgr = *opts;
+	mcc->initcgr.we_mask = cpu_to_be16(mcc->initcgr.we_mask);
+	mcc->initcgr.cgr.wr_parm_g.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_g.word);
+	mcc->initcgr.cgr.wr_parm_y.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_y.word);
+	mcc->initcgr.cgr.wr_parm_r.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_r.word);
+	mcc->initcgr.cgr.cscn_targ =  cpu_to_be32(mcc->initcgr.cgr.cscn_targ);
+	mcc->initcgr.cgr.__cs_thres = cpu_to_be16(mcc->initcgr.cgr.__cs_thres);
+
+	mcc->initcgr.cgid = cgr->cgrid;
+	if (flags & QMAN_CGR_FLAG_USE_INIT)
+		verb = QM_MCC_VERB_INITCGR;
+	qm_mc_commit(&p->p, verb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == verb);
+	res = mcr->result;
+	return (res == QM_MCR_RESULT_OK) ? 0 : -EIO;
+}
+
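+/* Helpers to compute a CGR's CSCN target bit (pre-rev-3.0) or portal index
+ * (rev 3.0 onwards) for software and DCP portals.
+ */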
+#define TARG_MASK(n) (0x80000000 >> (n->config->channel - \
+					QM_CHANNEL_SWPORTAL0))
+#define TARG_DCP_MASK(n) (0x80000000 >> (10 + n))
+#define PORTAL_IDX(n) (n->config->channel - QM_CHANNEL_SWPORTAL0)
+
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret;
+	struct qman_portal *p;
+
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	p = get_affine_portal();
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	cgr->chan = p->config->channel;
+	spin_lock(&p->cgr_lock);
+
+	/* if no opts specified, just add it to the list */
+	if (!opts)
+		goto add_list;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		goto release_lock;
+	if (opts)
+		local_opts = *opts;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+			QM_CGR_TARG_UDP_CTRL_WRITE_BIT | PORTAL_IDX(p);
+	else
+		/* Overwrite TARG */
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+							TARG_MASK(p);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT, &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		goto release_lock;
+add_list:
+	list_add(&cgr->node, &p->cgr_cbs);
+
+	/* Determine if newly added object requires its callback to be called */
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret) {
+		/* we can't go back, so proceed and return success, but scream
+		 * and wail to the log file.
+		 */
+		pr_crit("CGR HW state partially modified\n");
+		ret = 0;
+		goto release_lock;
+	}
+	if (cgr->cb && cgr_state.cgr.cscn_en && qman_cgrs_get(&p->cgrs[1],
+							      cgr->cgrid))
+		cgr->cb(p, cgr, 1);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+	return ret;
+}
+
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcc_initcgr local_opts;
+	struct qm_mcr_querycgr cgr_state;
+	int ret;
+
+	if ((qman_ip_rev & 0xFF00) < QMAN_REV30) {
+		pr_warn("QMan version doesn't support CSCN => DCP portal\n");
+		return -EINVAL;
+	}
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		return ret;
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	if (opts)
+		local_opts = *opts;
+
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+				QM_CGR_TARG_UDP_CTRL_WRITE_BIT |
+				QM_CGR_TARG_UDP_CTRL_DCP | dcp_portal;
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+					TARG_DCP_MASK(dcp_portal);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT,
+				      &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+
+	return ret;
+}
+
+int qman_delete_cgr(struct qman_cgr *cgr)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret = 0;
+	struct qman_cgr *i;
+	struct qman_portal *p = get_affine_portal();
+
+	if (cgr->chan != p->config->channel) {
+		pr_crit("Attempting to delete cgr from different portal than"
+			" it was create: create 0x%x, delete 0x%x\n",
+			cgr->chan, p->config->channel);
+		ret = -EINVAL;
+		goto put_portal;
+	}
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	spin_lock(&p->cgr_lock);
+	list_del(&cgr->node);
+	/*
+	 * If there are no other CGR objects for this CGRID in the list,
+	 * update CSCN_TARG accordingly
+	 */
+	list_for_each_entry(i, &p->cgr_cbs, node)
+		if ((i->cgrid == cgr->cgrid) && i->cb)
+			goto release_lock;
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)  {
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+		goto release_lock;
+	}
+	/* Overwrite TARG */
+	local_opts.we_mask = QM_CGR_WE_CSCN_TARG;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl = PORTAL_IDX(p);
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ &
+							 ~(TARG_MASK(p));
+	ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+put_portal:
+	return ret;
+}
+
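+/* Force an FQ to the OOS state from whatever state it is in, retiring it and
+ * draining the FQ and its ORL (via volatile dequeues and MR processing) as
+ * required.
+ */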
+int qman_shutdown_fq(u32 fqid)
+{
+	struct qman_portal *p;
+	struct qm_portal *low_p;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	u8 state;
+	int orl_empty, fq_empty, drain = 0;
+	u32 result;
+	u32 channel, wq;
+	u16 dest_wq;
+
+	p = get_affine_portal();
+	low_p = &p->p;
+
+	/* Determine the state of the FQID */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	state = mcr->queryfq_np.state & QM_MCR_NP_STATE_MASK;
+	if (state == QM_MCR_NP_STATE_OOS)
+		return 0; /* Already OOS, no need to do any more checks */
+
+	/* Query which channel the FQ is using */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+
+	/* Need to store these since the MCR gets reused */
+	dest_wq = be16_to_cpu(mcr->queryfq.fqd.dest_wq);
+	channel = dest_wq & 0x7;
+	wq = dest_wq >> 3;
+
+	switch (state) {
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+	case QM_MCR_NP_STATE_PARKED:
+		orl_empty = 0;
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_RETIRE);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_RETIRE);
+		result = mcr->result; /* Make a copy as we reuse MCR below */
+
+		if (result == QM_MCR_RESULT_PENDING) {
+			/* Need to wait for the FQRN in the message ring, which
+			 * will only occur once the FQ has been drained. In
+			 * order for the FQ to drain, the portal needs to be
+			 * set to dequeue from the channel the FQ is scheduled
+			 * on.
+			 */
+			const struct qm_mr_entry *msg;
+			const struct qm_dqrr_entry *dqrr = NULL;
+			int found_fqrn = 0;
+			__maybe_unused u16 dequeue_wq = 0;
+
+			/* Flag that we need to drain FQ */
+			drain = 1;
+
+			if (channel >= qm_channel_pool1 &&
+			    channel < (u16)(qm_channel_pool1 + 15)) {
+				/* Pool channel, enable the bit in the portal */
+				dequeue_wq = (channel -
+					      qm_channel_pool1 + 1) << 4 | wq;
+			} else if (channel < qm_channel_pool1) {
+				/* Dedicated channel */
+				dequeue_wq = wq;
+			} else {
+				pr_info("Cannot recover FQ 0x%x,"
+					" it is scheduled on channel 0x%x",
+					fqid, channel);
+				return -EBUSY;
+			}
+			/* Set the sdqcr to drain this channel */
+			if (channel < qm_channel_pool1)
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+					  QM_SDQCR_CHANNELS_DEDICATED);
+			else
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+						  QM_SDQCR_CHANNELS_POOL_CONV
+						  (channel));
+			while (!found_fqrn) {
+				/* Keep draining DQRR while checking the MR */
+				qm_dqrr_pvb_update(low_p);
+				dqrr = qm_dqrr_current(low_p);
+				while (dqrr) {
+					qm_dqrr_cdc_consume_1ptr(
+						low_p, dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+				/* Process message ring too */
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+				while (msg) {
+					if ((msg->verb &
+					     QM_MR_VERB_TYPE_MASK)
+					    == QM_MR_VERB_FQRN)
+						found_fqrn = 1;
+					qm_mr_next(low_p);
+					qm_mr_cci_consume_to_current(low_p);
+					qm_mr_pvb_update(low_p);
+					msg = qm_mr_current(low_p);
+				}
+				cpu_relax();
+			}
+		}
+		if (result != QM_MCR_RESULT_OK &&
+		    result !=  QM_MCR_RESULT_PENDING) {
+			/* error */
+			pr_err("qman_retire_fq failed on FQ 0x%x,"
+			       " result=0x%x\n", fqid, result);
+			return -1;
+		}
+		if (!(mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)) {
+			/* ORL had no entries, no need to wait until the
+			 * ERNs come in.
+			 */
+			orl_empty = 1;
+		}
+		/* Retirement succeeded, check to see if FQ needs
+		 * to be drained.
+		 */
+		if (drain || mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY) {
+			/* FQ is Not Empty, drain using volatile DQ commands */
+			fq_empty = 0;
+			do {
+				const struct qm_dqrr_entry *dqrr = NULL;
+				u32 vdqcr = fqid | QM_VDQCR_NUMFRAMES_SET(3);
+
+				qm_dqrr_vdqcr_set(low_p, vdqcr);
+
+				/* Wait for a dequeue to occur */
+				while (dqrr == NULL) {
+					qm_dqrr_pvb_update(low_p);
+					dqrr = qm_dqrr_current(low_p);
+					if (!dqrr)
+						cpu_relax();
+				}
+				/* Process the dequeues, making sure to
+				 * empty the ring completely.
+				 */
+				while (dqrr) {
+					if (dqrr->fqid == fqid &&
+					    dqrr->stat & QM_DQRR_STAT_FQ_EMPTY)
+						fq_empty = 1;
+					qm_dqrr_cdc_consume_1ptr(low_p,
+								 dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+			} while (fq_empty == 0);
+		}
+		qm_dqrr_sdqcr_set(low_p, 0);
+
+		/* Wait for the ORL to have been completely drained */
+		while (orl_empty == 0) {
+			const struct qm_mr_entry *msg;
+
+			qm_mr_pvb_update(low_p);
+			msg = qm_mr_current(low_p);
+			while (msg) {
+				if ((msg->verb & QM_MR_VERB_TYPE_MASK) ==
+				    QM_MR_VERB_FQRL)
+					orl_empty = 1;
+				qm_mr_next(low_p);
+				qm_mr_cci_consume_to_current(low_p);
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+			}
+			cpu_relax();
+		}
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result != QM_MCR_RESULT_OK) {
+			pr_err(
+			"OOS after drain Failed on FQID 0x%x, result 0x%x\n",
+			       fqid, mcr->result);
+			return -1;
+		}
+		return 0;
+
+	case QM_MCR_NP_STATE_RETIRED:
+		/* Send OOS Command */
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result) {
+			pr_err("OOS Failed on FQID 0x%x\n", fqid);
+			return -1;
+		}
+		return 0;
+
+	}
+	return -1;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h
new file mode 100644
index 0000000..ee78d31
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.h
@@ -0,0 +1,888 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman_priv.h"
+
+/***************************/
+/* Portal register assists */
+/***************************/
+#define QM_REG_EQCR_PI_CINH	0x3000
+#define QM_REG_EQCR_CI_CINH	0x3040
+#define QM_REG_EQCR_ITR		0x3080
+#define QM_REG_DQRR_PI_CINH	0x3100
+#define QM_REG_DQRR_CI_CINH	0x3140
+#define QM_REG_DQRR_ITR		0x3180
+#define QM_REG_DQRR_DCAP	0x31C0
+#define QM_REG_DQRR_SDQCR	0x3200
+#define QM_REG_DQRR_VDQCR	0x3240
+#define QM_REG_DQRR_PDQCR	0x3280
+#define QM_REG_MR_PI_CINH	0x3300
+#define QM_REG_MR_CI_CINH	0x3340
+#define QM_REG_MR_ITR		0x3380
+#define QM_REG_CFG		0x3500
+#define QM_REG_ISR		0x3600
+#define QM_REG_IIR              0x36C0
+#define QM_REG_ITPR		0x3740
+
+/* Cache-enabled register offsets */
+#define QM_CL_EQCR		0x0000
+#define QM_CL_DQRR		0x1000
+#define QM_CL_MR		0x2000
+#define QM_CL_EQCR_PI_CENA	0x3000
+#define QM_CL_EQCR_CI_CENA	0x3040
+#define QM_CL_DQRR_PI_CENA	0x3100
+#define QM_CL_DQRR_CI_CENA	0x3140
+#define QM_CL_MR_PI_CENA	0x3300
+#define QM_CL_MR_CI_CENA	0x3340
+#define QM_CL_CR		0x3800
+#define QM_CL_RR0		0x3900
+#define QM_CL_RR1		0x3940
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrades performance. Hence the use of the __raw_*() interfaces,
+ * which simply ensure that the compiler treats the portal registers as
+ * volatile (ie. non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __qm_in(qm, o)		be32_to_cpu(__raw_readl((qm)->ci + (o)))
+#define __qm_out(qm, o, val)	__raw_writel((cpu_to_be32(val)), \
+					     (qm)->ci + (o))
+#define qm_in(reg)		__qm_in(&portal->addr, QM_REG_##reg)
+#define qm_out(reg, val)	__qm_out(&portal->addr, QM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __qm_cl_touch_ro(qm, o) dcbt_ro((qm)->ce + (o))
+#define __qm_cl_touch_rw(qm, o) dcbt_rw((qm)->ce + (o))
+#define __qm_cl_in(qm, o)	be32_to_cpu(__raw_readl((qm)->ce + (o)))
+#define __qm_cl_out(qm, o, val) \
+	do { \
+		u32 *__tmpclout = (qm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __qm_cl_invalidate(qm, o) dccivac((qm)->ce + (o))
+#define qm_cl_touch_ro(reg) __qm_cl_touch_ro(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_touch_rw(reg) __qm_cl_touch_rw(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_in(reg)	    __qm_cl_in(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_out(reg, val) __qm_cl_out(&portal->addr, QM_CL_##reg##_CENA, val)
+#define qm_cl_invalidate(reg)\
+	__qm_cl_invalidate(&portal->addr, QM_CL_##reg##_CENA)
+
+/* Cache-enabled ring access */
+#define qm_cl(base, idx)	((void *)base + ((idx) << 6))
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 qm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
+
+/* Portal modes.
+ *   Enum types:
+ *     pmode == production mode
+ *     cmode == consumption mode
+ *     dmode == h/w dequeue mode
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate:
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ *     dc == DCA (Discrete Consumption Acknowledgment), DQRR-only
+ *   As for "enum qm_dqrr_dmode", it should be self-explanatory.
+ */
+enum qm_eqcr_pmode {		/* matches QCSP_CFG::EPM */
+	qm_eqcr_pci = 0,	/* PI index, cache-inhibited */
+	qm_eqcr_pce = 1,	/* PI index, cache-enabled */
+	qm_eqcr_pvb = 2		/* valid-bit */
+};
+
+enum qm_dqrr_dmode {		/* matches QCSP_CFG::DP */
+	qm_dqrr_dpush = 0,	/* SDQCR  + VDQCR */
+	qm_dqrr_dpull = 1	/* PDQCR */
+};
+
+enum qm_dqrr_pmode {		/* s/w-only */
+	qm_dqrr_pci,		/* reads DQRR_PI_CINH */
+	qm_dqrr_pce,		/* reads DQRR_PI_CENA */
+	qm_dqrr_pvb		/* reads valid-bit */
+};
+
+enum qm_dqrr_cmode {		/* matches QCSP_CFG::DCM */
+	qm_dqrr_cci = 0,	/* CI index, cache-inhibited */
+	qm_dqrr_cce = 1,	/* CI index, cache-enabled */
+	qm_dqrr_cdc = 2		/* Discrete Consumption Acknowledgment */
+};
+
+enum qm_mr_pmode {		/* s/w-only */
+	qm_mr_pci,		/* reads MR_PI_CINH */
+	qm_mr_pce,		/* reads MR_PI_CENA */
+	qm_mr_pvb		/* reads valid-bit */
+};
+
+enum qm_mr_cmode {		/* matches QCSP_CFG::MM */
+	qm_mr_cci = 0,		/* CI index, cache-inhibited */
+	qm_mr_cce = 1		/* CI index, cache-enabled */
+};
+
+/* ------------------------- */
+/* --- Portal structures --- */
+
+#define QM_EQCR_SIZE		8
+#define QM_DQRR_SIZE		16
+#define QM_MR_SIZE		8
+
+struct qm_eqcr {
+	struct qm_eqcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	u32 busy;
+	enum qm_eqcr_pmode pmode;
+#endif
+};
+
+struct qm_dqrr {
+	const struct qm_dqrr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum qm_dqrr_dmode dmode;
+	enum qm_dqrr_pmode pmode;
+	enum qm_dqrr_cmode cmode;
+#endif
+};
+
+struct qm_mr {
+	const struct qm_mr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum qm_mr_pmode pmode;
+	enum qm_mr_cmode cmode;
+#endif
+};
+
+struct qm_mc {
+	struct qm_mc_command *cr;
+	struct qm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum {
+		/* Can be _mc_start()ed */
+		qman_mc_idle,
+		/* Can be _mc_commit()ed or _mc_abort()ed */
+		qman_mc_user,
+		/* Can only be _mc_retry()ed */
+		qman_mc_hw
+	} state;
+#endif
+};
+
+#define QM_PORTAL_ALIGNMENT ____cacheline_aligned
+
+struct qm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct qm_portal {
+	struct qm_addr addr;
+	struct qm_eqcr eqcr;
+	struct qm_dqrr dqrr;
+	struct qm_mr mr;
+	struct qm_mc mc;
+} QM_PORTAL_ALIGNMENT;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define EQCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_EQCR_SIZE << 6)))
+
+extern dma_addr_t rte_mem_virt2phy(const void *addr);
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 EQCR_PTR2IDX(struct qm_eqcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_EQCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void EQCR_INC(struct qm_eqcr *eqcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates fast
+	 * code with essentially no branching overheads. We increment to the
+	 * next EQCR pointer and handle overflow and 'vbit'.
+	 */
+	struct qm_eqcr_entry *partial = eqcr->cursor + 1;
+
+	eqcr->cursor = EQCR_CARRYCLEAR(partial);
+	if (partial != eqcr->cursor)
+		eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_no_stash(struct qm_portal
+								 *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available)
+		return NULL;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 1;
+#endif
+
+	return eqcr->cursor;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_stash(struct qm_portal
+								*portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return NULL;
+	}
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 1;
+#endif
+	return eqcr->cursor;
+}
+
+static inline void qm_eqcr_abort(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_pend_and_next(
+					struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+	DPAA_ASSERT(eqcr->pmode != qm_eqcr_pvb);
+	if (eqcr->available == 1)
+		return NULL;
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcr->cursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	return eqcr->cursor;
+}
+
+#define EQCR_COMMIT_CHECKS(eqcr) \
+do { \
+	DPAA_ASSERT(eqcr->busy); \
+	DPAA_ASSERT(eqcr->cursor->orp == (eqcr->cursor->orp & 0x00ffffff)); \
+	DPAA_ASSERT(eqcr->cursor->fqid == (eqcr->cursor->fqid & 0x00ffffff)); \
+} while (0)
+
+static inline void qm_eqcr_pci_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pci);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	hwsync();
+	qm_out(EQCR_PI_CINH, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	qm_cl_invalidate(EQCR_PI);
+	qm_cl_touch_rw(EQCR_PI);
+}
+
+static inline void qm_eqcr_pce_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	lwsync();
+	qm_cl_out(EQCR_PI, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pvb_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eqcursor;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pvb);
+	lwsync();
+	eqcursor = eqcr->cursor;
+	eqcursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline u8 qm_eqcr_cci_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline void qm_eqcr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	qm_cl_touch_ro(EQCR_CI);
+}
+
+static inline u8 qm_eqcr_cce_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline u8 qm_eqcr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->ithresh;
+}
+
+static inline void qm_eqcr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	eqcr->ithresh = ithresh;
+	qm_out(EQCR_ITR, ithresh);
+}
+
+static inline u8 qm_eqcr_get_avail(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->available;
+}
+
+static inline u8 qm_eqcr_get_fill(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return QM_EQCR_SIZE - 1 - eqcr->available;
+}
+
+#define DQRR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_DQRR_SIZE << 6)))
+
+static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);
+}
+
+static inline const struct qm_dqrr_entry *DQRR_INC(
+						const struct qm_dqrr_entry *e)
+{
+	return DQRR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_dqrr_set_maxfill(struct qm_portal *portal, u8 mf)
+{
+	qm_out(CFG, (qm_in(CFG) & 0xff0fffff) |
+		((mf & (QM_DQRR_SIZE - 1)) << 20));
+}
+
+static inline const struct qm_dqrr_entry *qm_dqrr_current(
+						struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	if (!dqrr->fill)
+		return NULL;
+	return dqrr->cursor;
+}
+
+static inline u8 qm_dqrr_cursor(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return DQRR_PTR2IDX(dqrr->cursor);
+}
+
+static inline u8 qm_dqrr_next(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->fill);
+	dqrr->cursor = DQRR_INC(dqrr->cursor);
+	return --dqrr->fill;
+}
+
+static inline u8 qm_dqrr_pci_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pci);
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	qm_cl_invalidate(DQRR_PI);
+	qm_cl_touch_ro(DQRR_PI);
+}
+
+static inline u8 qm_dqrr_pce_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	dqrr->pi = qm_cl_in(DQRR_PI) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	const struct qm_dqrr_entry *res = qm_cl(dqrr->ring, dqrr->pi);
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
+		dqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1);
+		if (!dqrr->pi)
+			dqrr->vbit ^= QM_DQRR_VERB_VBIT;
+		dqrr->fill++;
+	}
+}
+
+static inline void qm_dqrr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_rw(DQRR_CI);
+}
+
+static inline void qm_dqrr_cce_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cdc_consume_1(struct qm_portal *portal, u8 idx,
+					 int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |	/* S */
+		((park ? 1 : 0) << 6) |	/* PK */
+		idx);			/* DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_1ptr(struct qm_portal *portal,
+					    const struct qm_dqrr_entry *dq,
+					int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 idx = DQRR_PTR2IDX(dq);
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* DQRR_DCAP::S */
+		((park ? 1 : 0) << 6) |		/* DQRR_DCAP::PK */
+		idx);				/* DQRR_DCAP::DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_n(struct qm_portal *portal, u16 bitmask)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (1 << 8) |		/* DQRR_DCAP::S */
+		((u32)bitmask << 16));		/* DQRR_DCAP::DCAP_CI */
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+}
+
+static inline u8 qm_dqrr_cdc_cci(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+}
+
+static inline void qm_dqrr_cdc_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_ro(DQRR_CI);
+}
+
+static inline u8 qm_dqrr_cdc_cce(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_cl_in(DQRR_CI) & (QM_DQRR_SIZE - 1);
+}
+
+static inline u8 qm_dqrr_get_ci(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	return dqrr->ci;
+}
+
+static inline void qm_dqrr_park(struct qm_portal *portal, u8 idx)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		(idx & (QM_DQRR_SIZE - 1)));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_park_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		DQRR_PTR2IDX(dqrr->cursor));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_sdqcr_set(struct qm_portal *portal, u32 sdqcr)
+{
+	qm_out(DQRR_SDQCR, sdqcr);
+}
+
+static inline u32 qm_dqrr_sdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_SDQCR);
+}
+
+static inline void qm_dqrr_vdqcr_set(struct qm_portal *portal, u32 vdqcr)
+{
+	qm_out(DQRR_VDQCR, vdqcr);
+}
+
+static inline u32 qm_dqrr_vdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_VDQCR);
+}
+
+static inline u8 qm_dqrr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return dqrr->ithresh;
+}
+
+static inline void qm_dqrr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(DQRR_ITR, ithresh);
+}
+
+static inline u8 qm_dqrr_get_maxfill(struct qm_portal *portal)
+{
+	return (qm_in(CFG) & 0x00f00000) >> 20;
+}
+
+/* -------------- */
+/* --- MR API --- */
+
+#define MR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_MR_SIZE << 6)))
+
+static inline u8 MR_PTR2IDX(const struct qm_mr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_MR_SIZE - 1);
+}
+
+static inline const struct qm_mr_entry *MR_INC(const struct qm_mr_entry *e)
+{
+	return MR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_mr_finish(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (mr->ci != MR_PTR2IDX(mr->cursor))
+		pr_crit("Ignoring completed MR entries\n");
+}
+
+static inline const struct qm_mr_entry *qm_mr_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (!mr->fill)
+		return NULL;
+	return mr->cursor;
+}
+
+static inline u8 qm_mr_next(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->fill);
+	mr->cursor = MR_INC(mr->cursor);
+	return --mr->fill;
+}
+
+static inline void qm_mr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = (mr->ci + num) & (QM_MR_SIZE - 1);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = MR_PTR2IDX(mr->cursor);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(MR_ITR, ithresh);
+}
+
+/* ------------------------------ */
+/* --- Management command API --- */
+static inline int qm_mc_init(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + QM_CL_CR;
+	mc->rr = portal->addr.ce + QM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			QM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? QM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_idle;
+#endif
+	return 0;
+}
+
+static inline void qm_mc_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (mc->state != qman_mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct qm_mc_command *qm_mc_start(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void qm_mc_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_hw;
+#endif
+}
+
+static inline struct qm_mc_result *qm_mc_result(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= QM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_idle;
+#endif
+	return rr;
+}
+
+/* Portal interrupt register API */
+static inline void qm_isr_set_iperiod(struct qm_portal *portal, u16 iperiod)
+{
+	qm_out(ITPR, iperiod);
+}
+
+static inline u32 __qm_isr_read(struct qm_portal *portal, enum qm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 6));
+#else
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __qm_isr_write(struct qm_portal *portal, enum qm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 6), val);
+#else
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 2), val);
+#endif
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 80dde20..a7faf17 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -66,6 +66,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_qman_portal_init(uint32_t index, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct qman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -116,6 +117,14 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 	pcfg.node = NULL;
 	pcfg.irq = fd;
 
+	portal = qman_create_affine_portal(&pcfg, NULL);
+	if (!portal) {
+		pr_err("Qman portal initialisation failed (%d)\n",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	irq_map.type = dpaa_portal_qman;
 	irq_map.portal_cinh = map.addr.cinh;
 	process_portal_irq_map(fd, &irq_map);
@@ -124,10 +133,13 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 
 static int fsl_qman_portal_finish(void)
 {
+	__maybe_unused const struct qm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = qman_destroy_affine_portal();
+	BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index e9826c2..4ae2ea5 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -44,10 +44,6 @@
 #include "dpaa_sys.h"
 #include <fsl_qman.h>
 
-#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
-#error "_ARM64 requires _FSL_QMAN_FQ_LOOKUP"
-#endif
-
 /* Congestion Groups */
 /*
  * This wrapper represents a bit-array for the state of the 256 QMan congestion
@@ -201,13 +197,6 @@ void qm_set_liodns(struct qm_portal_config *pcfg);
 int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
 		       struct qm_mcr_cgrtestwrite *result);
 
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
-/* If the fq object pointer is greater than the size of context_b field,
- * than a lookup table is required.
- */
-int qman_setup_fq_lookup_table(size_t num_entries);
-#endif
-
 /*   QMan s/w corenet portal, low-level i/face	 */
 
 /*
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 740ee25..7d9ad00 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -46,15 +46,6 @@ extern "C" {
 
 #include <dpaa_rbtree.h>
 
-/* FQ lookups (turn this on for 64bit user-space) */
-#if (__WORDSIZE == 64)
-#define CONFIG_FSL_QMAN_FQ_LOOKUP
-/* if FQ lookups are supported, this controls the number of initialised,
- * s/w-consumed FQs that can be supported at any one time.
- */
-#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
-#endif
-
 /* Last updated for v00.800 of the BG */
 
 /* Hardware constants */
@@ -1254,9 +1245,6 @@ struct qman_fq {
 	enum qman_fq_state state;
 	int cgr_groupid;
 	struct rb_node node;
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
-	u32 key;
-#endif
 };
 
 /*
@@ -1275,6 +1263,761 @@ struct qman_cgr {
 	struct list_head node;
 };
 
+/* Flags to qman_create_fq() */
+#define QMAN_FQ_FLAG_NO_ENQUEUE      0x00000001 /* can't enqueue */
+#define QMAN_FQ_FLAG_NO_MODIFY       0x00000002 /* can only enqueue */
+#define QMAN_FQ_FLAG_TO_DCPORTAL     0x00000004 /* consumed by CAAM/PME/Fman */
+#define QMAN_FQ_FLAG_LOCKED          0x00000008 /* multi-core locking */
+#define QMAN_FQ_FLAG_AS_IS           0x00000010 /* query h/w state */
+#define QMAN_FQ_FLAG_DYNAMIC_FQID    0x00000020 /* (de)allocate fqid */
+
+/* Flags to qman_destroy_fq() */
+#define QMAN_FQ_DESTROY_PARKED       0x00000001 /* FQ can be parked or OOS */
+
+/* Flags from qman_fq_state() */
+#define QMAN_FQ_STATE_CHANGING       0x80000000 /* 'state' is changing */
+#define QMAN_FQ_STATE_NE             0x40000000 /* retired FQ isn't empty */
+#define QMAN_FQ_STATE_ORL            0x20000000 /* retired FQ has ORL */
+#define QMAN_FQ_STATE_BLOCKOOS       0xe0000000 /* if any are set, no OOS */
+#define QMAN_FQ_STATE_CGR_EN         0x10000000 /* CGR enabled */
+#define QMAN_FQ_STATE_VDQCR          0x08000000 /* being volatile dequeued */
+
+/* Flags to qman_init_fq() */
+#define QMAN_INITFQ_FLAG_SCHED       0x00000001 /* schedule rather than park */
+#define QMAN_INITFQ_FLAG_LOCAL       0x00000004 /* set dest portal */
+
+/* Flags to qman_enqueue(). NB, the strange numbering is to align with hardware,
+ * bit-wise. (NB: the PME API is sensitive to these precise numberings too, so
+ * any change here should be audited in PME.)
+ */
+#define QMAN_ENQUEUE_FLAG_WATCH_CGR  0x00080000 /* watch congestion state */
+#define QMAN_ENQUEUE_FLAG_DCA        0x00008000 /* perform enqueue-DCA */
+#define QMAN_ENQUEUE_FLAG_DCA_PARK   0x00004000 /* If DCA, requests park */
+#define QMAN_ENQUEUE_FLAG_DCA_PTR(p)		/* If DCA, p is DQRR entry */ \
+		(((u32)(p) << 2) & 0x00000f00)
+#define QMAN_ENQUEUE_FLAG_C_GREEN    0x00000000 /* choose one C_*** flag */
+#define QMAN_ENQUEUE_FLAG_C_YELLOW   0x00000008
+#define QMAN_ENQUEUE_FLAG_C_RED      0x00000010
+#define QMAN_ENQUEUE_FLAG_C_OVERRIDE 0x00000018
+/* For the ORP-specific qman_enqueue_orp() variant;
+ * - this flag indicates "Not Last In Sequence", ie. all but the final fragment
+ *   of a frame.
+ */
+#define QMAN_ENQUEUE_FLAG_NLIS       0x01000000
+/* - this flag performs no enqueue but fills in an ORP sequence number that
+ *   would otherwise block it (eg. if a frame has been dropped).
+ */
+#define QMAN_ENQUEUE_FLAG_HOLE       0x02000000
+/* - this flag performs no enqueue but advances NESN to the given sequence
+ *   number.
+ */
+#define QMAN_ENQUEUE_FLAG_NESN       0x04000000
+
+/* Flags to qman_modify_cgr() */
+#define QMAN_CGR_FLAG_USE_INIT       0x00000001
+#define QMAN_CGR_MODE_FRAME          0x00000001
+
+/**
+ * qman_get_portal_index - get portal configuration index
+ */
+int qman_get_portal_index(void);
+
+/**
+ * qman_affine_channel - return the channel ID of a portal
+ * @cpu: the cpu whose affine portal is the subject of the query
+ *
+ * If @cpu is -1, the affine portal for the current CPU will be used. It is a
+ * bug to call this function for any value of @cpu (other than -1) that is not a
+ * member of the cpu mask.
+ */
+u16 qman_affine_channel(int cpu);
+
+/**
+ * qman_set_vdq - Issue a volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @num: Number of Frames requested for volatile dequeue
+ *
+ * This function will issue a volatile dequeue command to the QMAN.
+ */
+int qman_set_vdq(struct qman_fq *fq, u16 num);
+
+/**
+ * qman_dequeue - Get the DQRR entry after volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ *
+ * This function returns the DQRR entry after a volatile dequeue command is
+ * issued. It returns NULL once no more packets are available on the DQRR.
+ */
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
+
+/**
+ * qman_dqrr_consume - Consume the DQRR entry after volatile dequeue
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @dq: DQRR entry to consume. This is the one which is provided by the
+ *    'qman_dequeue' command.
+ *
+ * This will consume the DQRR entry and make it available for the next
+ * volatile dequeue.
+ */
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq);
+
+/**
+ * qman_poll_dqrr - process DQRR (fast-path) entries
+ * @limit: the maximum number of DQRR entries to process
+ *
+ * Use of this function requires that DQRR processing not be interrupt-driven.
+ * Ie. the value returned by qman_irqsource_get() should not include
+ * QM_PIRQ_DQRI. If the current CPU is sharing a portal hosted on another CPU,
+ * this function will return -EINVAL, otherwise the return value is >=0 and
+ * represents the number of DQRR entries processed.
+ */
+int qman_poll_dqrr(unsigned int limit);
+
+/**
+ * qman_poll
+ *
+ * Dispatcher logic on a cpu can use this to trigger any maintenance of the
+ * affine portal. There are two classes of portal processing in question;
+ * fast-path (which involves demuxing dequeue ring (DQRR) entries and tracking
+ * enqueue ring (EQCR) consumption), and slow-path (which involves EQCR
+ * thresholds, congestion state changes, etc). This function does whatever
+ * processing is not triggered by interrupts.
+ *
+ * Note, if DQRR and some slow-path processing are poll-driven (rather than
+ * interrupt-driven) then this function uses a heuristic to determine how often
+ * to run slow-path processing - as slow-path processing introduces at least a
+ * minimum latency each time it is run, whereas fast-path (DQRR) processing is
+ * close to zero-cost if there is no work to be done.
+ */
+void qman_poll(void);
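+
+/* Illustrative sketch only, not part of the API: a purely poll-driven
+ * dispatcher might combine the two entry points above, with 'budget' and the
+ * exit condition left application-defined:
+ *
+ *	while (!done) {
+ *		int n = qman_poll_dqrr(budget);
+ *		if (n < 0)
+ *			break;	(portal is shared and hosted on another CPU)
+ *		qman_poll();	(slow-path maintenance, heuristically throttled)
+ *	}
+ */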
+
+/**
+ * qman_stop_dequeues - Stop h/w dequeuing to the s/w portal
+ *
+ * Disables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_stop_dequeues(void);
+
+/**
+ * qman_start_dequeues - (Re)start h/w dequeuing to the s/w portal
+ *
+ * Enables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_start_dequeues(void);
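+
+/* Illustrative sketch: because stopping is reference-counted, critical
+ * sections nest naturally; 'reconfigure_portal()' is a hypothetical caller
+ * routine:
+ *
+ *	qman_stop_dequeues();		(count 1: dequeuing disabled)
+ *	qman_stop_dequeues();		(count 2)
+ *	reconfigure_portal();
+ *	qman_start_dequeues();		(count 1: still disabled)
+ *	qman_start_dequeues();		(count 0: dequeuing re-enabled)
+ */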
+
+/**
+ * qman_static_dequeue_add - Add pool channels to the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Adds a set of pool channels to the portal's static dequeue command register
+ * (SDQCR). The requested pools are limited to those the portal has dequeue
+ * access to.
+ */
+void qman_static_dequeue_add(u32 pools);
+
+/**
+ * qman_static_dequeue_del - Remove pool channels from the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Removes a set of pool channels from the portal's static dequeue command
+ * register (SDQCR). The requested pools are limited to those the portal has
+ * dequeue access to.
+ */
+void qman_static_dequeue_del(u32 pools);
+
+/**
+ * qman_static_dequeue_get - return the portal's current SDQCR
+ *
+ * Returns the portal's current static dequeue command register (SDQCR). The
+ * entire register is returned, so if only the currently-enabled pool channels
+ * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
+ */
+u32 qman_static_dequeue_get(void);
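+
+/* Illustrative sketch: extracting only the enabled pool channels from the raw
+ * register value, as suggested above:
+ *
+ *	u32 pools = qman_static_dequeue_get() & QM_SDQCR_CHANNELS_POOL_MASK;
+ */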
+
+/**
+ * qman_dca - Perform a Discrete Consumption Acknowledgment
+ * @dq: the DQRR entry to be consumed
+ * @park_request: indicates whether the held-active @fq should be parked
+ *
+ * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had
+ * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this
+ * does not take a 'portal' argument but implies the core affine portal from the
+ * cpu that is currently executing the function. For reasons of locking, this
+ * function must be called from the same CPU as that which processed the DQRR
+ * entry in the first place.
+ */
+void qman_dca(struct qm_dqrr_entry *dq, int park_request);
+
+/**
+ * qman_eqcr_is_empty - Determine if portal's EQCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * enqueues for the local portal have been processed by Qman but can't use the
+ * QMAN_ENQUEUE_FLAG_WAIT_SYNC flag to do this from the final qman_enqueue().
+ * The function forces tracking of EQCR consumption (which normally doesn't
+ * happen until enqueue processing needs to find space to put new enqueue
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int qman_eqcr_is_empty(void);
+
+/**
+ * qman_set_dc_ern - Set the handler for DCP enqueue rejection notifications
+ * @handler: callback for processing DCP ERNs
+ * @affine: whether this handler is specific to the locally affine portal
+ *
+ * If a hardware block's interface to Qman (ie. its direct-connect portal, or
+ * DCP) is configured not to receive enqueue rejections, then any enqueues
+ * through that DCP that are rejected will be sent to a given software portal.
+ * If @affine is non-zero, then this handler will only be used for DCP ERNs
+ * received on the portal affine to the current CPU. If multiple CPUs share a
+ * portal and they all call this function, they will be setting the handler for
+ * the same portal! If @affine is zero, then this handler will be global to all
+ * portals handled by this instance of the driver. Only those portals that do
+ * not have their own affine handler will use the global handler.
+ */
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
+
+	/* FQ management */
+	/* ------------- */
+/**
+ * qman_create_fq - Allocates a FQ
+ * @fqid: the index of the FQD to encapsulate, must be "Out of Service"
+ * @flags: bit-mask of QMAN_FQ_FLAG_*** options
+ * @fq: memory for storing the 'fq', with callbacks filled in
+ *
+ * Creates a frame queue object for the given @fqid, unless the
+ * QMAN_FQ_FLAG_DYNAMIC_FQID flag is set in @flags, in which case a FQID is
+ * dynamically allocated (or the function fails if none are available). Once
+ * created, the caller should not touch the memory at 'fq' except as extended to
+ * adjacent memory for user-defined fields (see the definition of "struct
+ * qman_fq" for more info). NO_MODIFY is only intended for enqueuing to
+ * pre-existing frame-queues that aren't to be otherwise interfered with, it
+ * prevents all other modifications to the frame queue. The TO_DCPORTAL flag
+ * causes the driver to honour any contextB modifications requested in the
+ * qm_init_fq() API, as this indicates the frame queue will be consumed by a
+ * direct-connect portal (PME, CAAM, or Fman). When frame queues are consumed by
+ * software portals, the contextB field is controlled by the driver and can't be
+ * modified by the caller. If the AS_IS flag is specified, management commands
+ * will be used on portal @p to query state for frame queue @fqid and construct
+ * a frame queue object based on that, rather than assuming/requiring that it be
+ * Out of Service.
+ */
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
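+
+/* Illustrative sketch, assuming the callback fields of 'struct qman_fq' (not
+ * shown in this hunk) have been filled in beforehand as required above:
+ *
+ *	struct qman_fq fq;
+ *	memset(&fq, 0, sizeof(fq));
+ *	(... fill in fq callbacks ...)
+ *	if (qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, &fq))
+ *		(no dynamic FQID was available - handle the error);
+ */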
+
+/**
+ * qman_destroy_fq - Deallocates a FQ
+ * @fq: the frame queue object to release
+ * @flags: bit-mask of QMAN_FQ_FREE_*** options
+ *
+ * The memory for this frame queue object ('fq' provided in qman_create_fq()) is
+ * not deallocated but the caller regains ownership, to do with as desired. The
+ * FQ must be in the 'out-of-service' state unless the QMAN_FQ_FREE_PARKED flag
+ * is specified, in which case it may also be in the 'parked' state.
+ */
+void qman_destroy_fq(struct qman_fq *fq, u32 flags);
+
+/**
+ * qman_fq_fqid - Queries the frame queue ID of a FQ object
+ * @fq: the frame queue object to query
+ */
+u32 qman_fq_fqid(struct qman_fq *fq);
+
+/**
+ * qman_fq_state - Queries the state of a FQ object
+ * @fq: the frame queue object to query
+ * @state: pointer to state enum to return the FQ scheduling state
+ * @flags: pointer to state flags to receive QMAN_FQ_STATE_*** bitmask
+ *
+ * Queries the state of the FQ object, without performing any h/w commands.
+ * This captures the state, as seen by the driver, at the time the function
+ * executes.
+ */
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
+
+/**
+ * qman_init_fq - Initialises FQ fields, leaves the FQ "parked" or "scheduled"
+ * @fq: the frame queue object to modify, must be 'parked' or new.
+ * @flags: bit-mask of QMAN_INITFQ_FLAG_*** options
+ * @opts: the FQ-modification settings, as defined in the low-level API
+ *
+ * The @opts parameter comes from the low-level portal API. Select
+ * QMAN_INITFQ_FLAG_SCHED in @flags to cause the frame queue to be scheduled
+ * rather than parked. NB, @opts can be NULL.
+ *
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver;
+ * 1. the 'count' and 'fqid' fields are always ignored (this operation only
+ * affects one frame queue: @fq).
+ * 2. the QM_INITFQ_WE_CONTEXTB option of the 'we_mask' field and the associated
+ * 'fqd' structure's 'context_b' field are sometimes overwritten;
+ *   - if @fq was not created with QMAN_FQ_FLAG_TO_DCPORTAL, then context_b is
+ *     initialised to a value used by the driver for demux.
+ *   - if context_b is initialised for demux, so is context_a in case stashing
+ *     is requested (see item 4).
+ * (So caller control of context_b is only possible for TO_DCPORTAL frame queue
+ * objects.)
+ * 3. if @flags contains QMAN_INITFQ_FLAG_LOCAL, the 'fqd' structure's
+ * 'dest::channel' field will be overwritten to match the portal used to issue
+ * the command. If the WE_DESTWQ write-enable bit had already been set by the
+ * caller, the channel workqueue will be left as-is, otherwise the write-enable
+ * bit is set and the workqueue is set to a default of 4. If the "LOCAL" flag
+ * isn't set, the destination channel/workqueue fields and the write-enable bit
+ * are left as-is.
+ * 4. if the driver overwrites context_a/b for demux, then if
+ * QM_INITFQ_WE_CONTEXTA is set, the driver will only overwrite
+ * context_a.address fields and will leave the stashing fields provided by the
+ * user alone, otherwise it will zero out the context_a.stashing fields.
+ */
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
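+
+/* Illustrative sketch: initialise and schedule in one step, relying on the
+ * documented behaviour that @opts may be NULL; 'fq' is assumed to come from a
+ * successful qman_create_fq():
+ *
+ *	if (qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, NULL))
+ *		(handle error - the FQ is left unmodified);
+ */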
+
+/**
+ * qman_schedule_fq - Schedules a FQ
+ * @fq: the frame queue object to schedule, must be 'parked'
+ *
+ * Schedules the frame queue, which must be Parked, which takes it to
+ * Tentatively-Scheduled or Truly-Scheduled depending on its fill-level.
+ */
+int qman_schedule_fq(struct qman_fq *fq);
+
+/**
+ * qman_retire_fq - Retires a FQ
+ * @fq: the frame queue object to retire
+ * @flags: FQ flags (as per qman_fq_state) if retirement completes immediately
+ *
+ * Retires the frame queue. This returns zero if it succeeds immediately, +1 if
+ * the retirement was started asynchronously, otherwise it returns negative for
+ * failure. When this function returns zero, @flags is set to indicate whether
+ * the retired FQ is empty and/or whether it has any ORL fragments (to show up
+ * as ERNs). Otherwise the corresponding flags will be known when a subsequent
+ * FQRN message shows up on the portal's message ring.
+ *
+ * NB, if the retirement is asynchronous (the FQ was in the Truly Scheduled or
+ * Active state), the completion will be via the message ring as a FQRN - but
+ * the corresponding callback may occur before this function returns!! Ie. the
+ * caller should be prepared to accept the callback as the function is called,
+ * not only once it has returned.
+ */
+int qman_retire_fq(struct qman_fq *fq, u32 *flags);
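+
+/* Illustrative sketch of handling the three documented outcomes:
+ *
+ *	u32 flags;
+ *	int ret = qman_retire_fq(fq, &flags);
+ *	if (ret < 0)
+ *		(failure - FQ state unchanged);
+ *	else if (ret == 1)
+ *		(asynchronous - completion arrives as a FQRN message);
+ *	else if (flags & QMAN_FQ_STATE_NE)
+ *		(retired immediately, but the FQ is not yet empty);
+ */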
+
+/**
+ * qman_oos_fq - Puts a FQ "out of service"
+ * @fq: the frame queue object to be put out-of-service, must be 'retired'
+ *
+ * The frame queue must be retired and empty, and if any order restoration list
+ * was released as ERNs at the time of retirement, they must all be consumed.
+ */
+int qman_oos_fq(struct qman_fq *fq);
+
+/**
+ * qman_fq_flow_control - Set the XON/XOFF state of a FQ
+ * @fq: the frame queue object to be set to XON/XOFF state, must not be 'oos',
+ * or 'retired' or 'parked' state
+ * @xon: boolean to set fq in XON or XOFF state
+ *
+ * The frame queue should be in the Tentatively Scheduled or Truly Scheduled
+ * state, otherwise the IFSI interrupt will be asserted.
+ */
+int qman_fq_flow_control(struct qman_fq *fq, int xon);
+
+/**
+ * qman_query_fq - Queries FQD fields (via h/w query command)
+ * @fq: the frame queue object to be queried
+ * @fqd: storage for the queried FQD fields
+ */
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd);
+
+/**
+ * qman_query_fq_has_pkts - Queries non-programmable FQD fields and returns '1'
+ * if packets are in the frame queue. If there are no packets on the frame
+ * queue, '0' is returned.
+ * @fq: the frame queue object to be queried
+ */
+int qman_query_fq_has_pkts(struct qman_fq *fq);
+
+/**
+ * qman_query_fq_np - Queries non-programmable FQD fields
+ * @fq: the frame queue object to be queried
+ * @np: storage for the queried FQD fields
+ */
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
+
+/**
+ * qman_query_wq - Queries work queue lengths
+ * @query_dedicated: If non-zero, query length of WQs in the channel dedicated
+ *		to this software portal. Otherwise, query length of WQs in the
+ *		channel specified in @wq.
+ * @wq: storage for the queried WQ lengths. Also specifies the channel to
+ *	query if @query_dedicated is zero.
+ */
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
+
+/**
+ * qman_volatile_dequeue - Issue a volatile dequeue command
+ * @fq: the frame queue object to dequeue from
+ * @flags: a bit-mask of QMAN_VOLATILE_FLAG_*** options
+ * @vdqcr: bit mask of QM_VDQCR_*** options, as per qm_dqrr_vdqcr_set()
+ *
+ * Attempts to lock access to the portal's VDQCR volatile dequeue functionality.
+ * The function will block and sleep if QMAN_VOLATILE_FLAG_WAIT is specified and
+ * the VDQCR is already in use, otherwise returns non-zero for failure. If
+ * QMAN_VOLATILE_FLAG_FINISH is specified, the function will only return once
+ * the VDQCR command has finished executing (ie. once the callback for the last
+ * DQRR entry resulting from the VDQCR command has been called). If not using
+ * the FINISH flag, completion can be determined either by detecting the
+ * presence of the QM_DQRR_STAT_UNSCHEDULED and QM_DQRR_STAT_DQCR_EXPIRED bits
+ * in the "stat" field of the "struct qm_dqrr_entry" passed to the FQ's dequeue
+ * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
+ * "flags" retrieved from qman_fq_state().
+ */
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
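+
+/* Illustrative sketch: a fully synchronous volatile dequeue using the WAIT
+ * and FINISH semantics described above; 'vdqcr' is a caller-built bit mask of
+ * QM_VDQCR_*** options:
+ *
+ *	ret = qman_volatile_dequeue(fq,
+ *		QMAN_VOLATILE_FLAG_WAIT | QMAN_VOLATILE_FLAG_FINISH, vdqcr);
+ */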
+
+/**
+ * qman_enqueue - Enqueue a frame to a frame queue
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ *
+ * Fills an entry in the EQCR of portal @qm to enqueue the frame described by
+ * @fd. The descriptor details are copied from @fd to the EQCR entry, the 'pid'
+ * field is ignored. The return value is non-zero on error, such as ring full
+ * (and FLAG_WAIT not specified), congestion avoidance (FLAG_WATCH_CGR
+ * specified), etc. If the ring is full and FLAG_WAIT is specified, this
+ * function will block. If FLAG_INTERRUPT is set, the EQCI bit of the portal
+ * interrupt will assert when Qman consumes the EQCR entry (subject to "status
+ * disable", "enable", and "inhibit" registers). If FLAG_DCA is set, Qman will
+ * perform an implied "discrete consumption acknowledgment" on the dequeue
+ * ring's (DQRR) entry, at the ring index specified by the FLAG_DCA_IDX(x)
+ * macro. (As an alternative to issuing explicit DCA actions on DQRR entries,
+ * this implicit DCA can delay the release of a "held active" frame queue
+ * corresponding to a DQRR entry until Qman consumes the EQCR entry - providing
+ * order-preservation semantics in packet-forwarding scenarios.) If FLAG_DCA is
+ * set, then FLAG_DCA_PARK can also be set to imply that the DQRR consumption
+ * acknowledgment should "park request" the "held active" frame queue. Ie.
+ * when the portal eventually releases that frame queue, it will be left in the
+ * Parked state rather than Tentatively Scheduled or Truly Scheduled. If the
+ * portal is watching congestion groups, the QMAN_ENQUEUE_FLAG_WATCH_CGR flag
+ * is requested, and the FQ is a member of a congestion group, then this
+ * function returns -EAGAIN if the congestion group is currently congested.
+ * Note, this does not eliminate ERNs, as the async interface means we can be
+ * sending enqueue commands to an un-congested FQ that becomes congested before
+ * the enqueue commands are processed, but it does minimise needless thrashing
+ * of an already busy hardware resource by throttling many of the to-be-dropped
+ * enqueues "at the source".
+ */
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
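+
+/* Illustrative sketch only: a simple retry loop for callers not using
+ * FLAG_WAIT. Note this retries on any non-zero return, including congestion
+ * rejections, so a real caller would likely discriminate on the error:
+ *
+ *	while (qman_enqueue(fq, &fd, 0))
+ *		;	(back-off policy is application-defined)
+ */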
+
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send);
+
+typedef int (*qman_cb_precommit) (void *arg);
+
+/**
+ * qman_enqueue_orp - Enqueue a frame to a frame queue using an ORP
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ * @orp: the frame queue object used as an order restoration point.
+ * @orp_seqnum: the sequence number of this frame in the order restoration path
+ *
+ * Similar to qman_enqueue(), but with the addition of an Order Restoration
+ * Point (@orp) and corresponding sequence number (@orp_seqnum) for this
+ * enqueue operation to employ order restoration. Each frame queue object acts
+ * as an Order Definition Point (ODP) by providing each frame dequeued from it
+ * with an incrementing sequence number, this value is generally ignored unless
+ * that sequence of dequeued frames will need order restoration later. Each
+ * frame queue object also encapsulates an Order Restoration Point (ORP), which
+ * is a re-assembly context for re-ordering frames relative to their sequence
+ * numbers as they are enqueued. The ORP does not have to be within the frame
+ * queue that receives the enqueued frame, in fact it is usually the frame
+ * queue from which the frames were originally dequeued. For the purposes of
+ * order restoration, multiple frames (or "fragments") can be enqueued for a
+ * single sequence number by setting the QMAN_ENQUEUE_FLAG_NLIS flag for all
+ * enqueues except the final fragment of a given sequence number. Ordering
+ * between sequence numbers is guaranteed, even if fragments of different
+ * sequence numbers are interlaced with one another. Fragments of the same
+ * sequence number will retain the order in which they are enqueued. If no
+ * enqueue is to be performed, QMAN_ENQUEUE_FLAG_HOLE indicates that the given
+ * sequence number is to be "skipped" by the ORP logic (eg. if a frame has been
+ * dropped from a sequence), or QMAN_ENQUEUE_FLAG_NESN indicates that the given
+ * sequence number should become the ORP's "Next Expected Sequence Number".
+ *
+ * Side note: a frame queue object can be used purely as an ORP, without
+ * carrying any frames at all. Care should be taken not to deallocate a frame
+ * queue object that is being actively used as an ORP, as a future allocation
+ * of the frame queue object may start using the internal ORP before the
+ * previous use has finished.
+ */
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum);
+
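+/* Illustrative sketch: enqueue a two-fragment sequence number 'seq' against
+ * ORP 'orp_fq' (typically the FQ the frames were dequeued from, per the note
+ * above); all but the last fragment carry the NLIS flag:
+ *
+ *	qman_enqueue_orp(dest, &frag0, QMAN_ENQUEUE_FLAG_NLIS, orp_fq, seq);
+ *	qman_enqueue_orp(dest, &frag1, 0, orp_fq, seq);
+ */
+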
+/**
+ * qman_alloc_fqid_range - Allocate a contiguous range of FQIDs
+ * @result: is set by the API to the base FQID of the allocated range
+ * @count: the number of FQIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count FQIDs
+ *
+ * Returns the number of frame queues allocated, or a negative error code. If
+ * @partial is non-zero, the allocation request may return a smaller range of
+ * FQs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_fqid(u32 *result)
+{
+	int ret = qman_alloc_fqid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_fqid_range - Release the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to deallocate
+ * @count: the number of FQIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of FQIDs
+ * that it can subsequently allocate from.
+ */
+void qman_release_fqid_range(u32 fqid, unsigned int count);
+static inline void qman_release_fqid(u32 fqid)
+{
+	qman_release_fqid_range(fqid, 1);
+}
+
+void qman_seed_fqid_range(u32 fqid, unsigned int count);
+
+int qman_shutdown_fq(u32 fqid);
+
+/**
+ * qman_reserve_fqid_range - Reserve the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to reserve
+ * @count: the number of FQIDs in the range
+ */
+int qman_reserve_fqid_range(u32 fqid, unsigned int count);
+static inline int qman_reserve_fqid(u32 fqid)
+{
+	return qman_reserve_fqid_range(fqid, 1);
+}
+
+/* Pool-channel management */
+/**
+ * qman_alloc_pool_range - Allocate a contiguous range of pool-channel IDs
+ * @result: is set by the API to the base pool-channel ID of the allocated range
+ * @count: the number of pool-channel IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of pool-channel IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_pool(u32 *result)
+{
+	int ret = qman_alloc_pool_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_pool_range - Release the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to deallocate
+ * @count: the number of pool-channel IDs in the range
+ */
+void qman_release_pool_range(u32 id, unsigned int count);
+static inline void qman_release_pool(u32 id)
+{
+	qman_release_pool_range(id, 1);
+}
+
+/**
+ * qman_reserve_pool_range - Reserve the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to reserve
+ * @count: the number of pool-channel IDs in the range
+ */
+int qman_reserve_pool_range(u32 id, unsigned int count);
+static inline int qman_reserve_pool(u32 id)
+{
+	return qman_reserve_pool_range(id, 1);
+}
+
+void qman_seed_pool_range(u32 id, unsigned int count);
+
+	/* CGR management */
+	/* -------------- */
+/**
+ * qman_create_cgr - Register a congestion group object
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: optional state of CGR settings
+ *
+ * Registers this object to receive congestion entry/exit callbacks on the
+ * portal affine to the cpu on which this API is executed. If @opts is
+ * NULL then only the callback (cgr->cb) function is registered. If @flags
+ * contains QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset
+ * any unspecified parameters) will be used rather than a modify hw command
+ * (which only modifies the specified parameters).
+ */
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
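+/* Illustrative sketch: register a CGR using an initialising hw command;
+ * 'my_cgr_cb' and the threshold programming are application-defined:
+ *
+ *	struct qm_mcc_initcgr opts;
+ *	memset(&opts, 0, sizeof(opts));
+ *	(... program congestion thresholds into opts ...)
+ *	cgr->cb = my_cgr_cb;
+ *	ret = qman_create_cgr(cgr, QMAN_CGR_FLAG_USE_INIT, &opts);
+ */
+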
+/**
+ * qman_create_cgr_to_dcp - Register a congestion group object to DCP portal
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @dcp_portal: the DCP portal to which the cgr object is registered
+ * @opts: optional state of CGR settings
+ */
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_delete_cgr - Deregisters a congestion group object
+ * @cgr: the 'cgr' object to deregister
+ *
+ * "Unplugs" this CGR object from the portal affine to the cpu on which this API
+ * is executed. This must be excuted on the same affine portal on which it was
+ * created.
+ */
+int qman_delete_cgr(struct qman_cgr *cgr);
+
+/**
+ * qman_modify_cgr - Modify CGR fields
+ * @cgr: the 'cgr' object to modify
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: the CGR-modification settings
+ *
+ * The @opts parameter comes from the low-level portal API, and can be NULL.
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver, in particular the 'cgrid' field is ignored (this operation
+ * only affects the given CGR object). If @flags contains
+ * QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset any
+ * unspecified parameters) will be used rather than a modify hw command (which
+ * only modifies the specified parameters).
+ */
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_query_cgr - Queries CGR fields
+ * @cgr: the 'cgr' object to query
+ * @result: storage for the queried congestion group record
+ */
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *result);
+
+/**
+ * qman_query_congestion - Queries the state of all congestion groups
+ * @congestion: storage for the queried state of all congestion groups
+ */
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
+
+/**
+ * qman_alloc_cgrid_range - Allocate a contiguous range of CGR IDs
+ * @result: is set by the API to the base CGR ID of the allocated range
+ * @count: the number of CGR IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of CGR IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_cgrid(u32 *result)
+{
+	int ret = qman_alloc_cgrid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_cgrid_range - Release the specified range of CGR IDs
+ * @id: the base CGR ID of the range to deallocate
+ * @count: the number of CGR IDs in the range
+ */
+void qman_release_cgrid_range(u32 id, unsigned int count);
+static inline void qman_release_cgrid(u32 id)
+{
+	qman_release_cgrid_range(id, 1);
+}
+
+/**
+ * qman_reserve_cgrid_range - Reserve the specified range of CGR IDs
+ * @id: the base CGR ID of the range to reserve
+ * @count: the number of CGR IDs in the range
+ */
+int qman_reserve_cgrid_range(u32 id, unsigned int count);
+static inline int qman_reserve_cgrid(u32 id)
+{
+	return qman_reserve_cgrid_range(id, 1);
+}
+
+void qman_seed_cgrid_range(u32 id, unsigned int count);
+
+	/* Helpers */
+	/* ------- */
+/**
+ * qman_poll_fq_for_init - Check if an FQ has been initialised from OOS
+ * @fqid: the FQID that will be initialised by other s/w
+ *
+ * In many situations, a FQID is provided for communication between s/w
+ * entities, and whilst the consumer is responsible for initialising and
+ * scheduling the FQ, the producer(s) generally create a wrapper FQ object
+ * and only call qman_enqueue() (no FQ initialisation, scheduling, etc). Ie:
+ *     qman_create_fq(..., QMAN_FQ_FLAG_NO_MODIFY, ...);
+ * However, data cannot be enqueued to the FQ until it is initialised out of
+ * the OOS state - this function polls for that condition. It is particularly
+ * useful for users of IPC functions - each endpoint's Rx FQ is the other
+ * endpoint's Tx FQ, so each side can initialise and schedule their Rx FQ object
+ * and then use this API on the (NO_MODIFY) Tx FQ object in order to
+ * synchronise. The function returns zero for success, +1 if the FQ is still in
+ * the OOS state, or negative if there was an error.
+ */
+static inline int qman_poll_fq_for_init(struct qman_fq *fq)
+{
+	struct qm_mcr_queryfq_np np;
+	int err;
+
+	err = qman_query_fq_np(fq, &np);
+	if (err)
+		return err;
+	if ((np.state & QM_MCR_NP_STATE_MASK) == QM_MCR_NP_STATE_OOS)
+		return 1;
+	return 0;
+}
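+
+/*
+ * Illustrative producer-side usage ('fqid' and 'fd' are hypothetical):
+ *
+ *	struct qman_fq fq;
+ *	int ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_MODIFY, &fq);
+ *
+ *	while (!ret && (ret = qman_poll_fq_for_init(&fq)) == 1)
+ *		;	(still OOS - the consumer has not initialised it yet)
+ *	if (!ret)
+ *		ret = qman_enqueue(&fq, &fd, 0);
+ */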
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define cpu_to_hw_sg(x) (x)
+#define hw_sg_to_cpu(x) (x)
+#else
+#define cpu_to_hw_sg(x)  __cpu_to_hw_sg(x)
+#define hw_sg_to_cpu(x)  __hw_sg_to_cpu(x)
+
+static inline void __cpu_to_hw_sg(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = cpu_to_be64(sgentry->opaque);
+	sgentry->val = cpu_to_be32(sgentry->val);
+	sgentry->val_off = cpu_to_be16(sgentry->val_off);
+}
+
+static inline void __hw_sg_to_cpu(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = be64_to_cpu(sgentry->opaque);
+	sgentry->val = be32_to_cpu(sgentry->val);
+	sgentry->val_off = be16_to_cpu(sgentry->val_off);
+}
+#endif
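+
+/*
+ * Illustrative only: s/w composes an S/G entry in CPU byte order and
+ * converts it in place once, before handing it to the hardware:
+ *
+ *	struct qm_sg_entry sg;
+ *	(fill in the address/length fields)
+ *	cpu_to_hw_sg(&sg);
+ */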
 
 #ifdef __cplusplus
 }
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index b0d953f..a4897b0 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -42,6 +42,7 @@
 #define __FSL_USD_H
 
 #include <compat.h>
+#include <fsl_qman.h>
 
 #ifdef __cplusplus
 extern "C" {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 13/38] bus/dpaa: add BMAN driver core
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (11 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 12/38] bus/dpaa: add QMan driver core routines Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 14/38] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
                   ` (25 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Buffer Manager (BMan) is a hardware buffer pool management block that
allows software and accelerators on the datapath to acquire and release
buffers in order to build frames.

This patch adds the core routines.
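
As a rough illustrative sketch (not part of the patch itself), a per-thread
user of this driver would do something like:

    /* illustrative only; error handling elided */
    if (bman_global_init() || bman_thread_init())
        return;
    /* ... pool and buffer operations via the fsl_bman.h API ... */
    bman_thread_finish();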

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman_driver.c | 311 +++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_priv.h   | 125 ++++++++++
 drivers/bus/dpaa/include/fsl_bman.h       | 375 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   5 +
 5 files changed, 817 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index ad68828..24dfa13 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -71,6 +71,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
 	base/qbman/dpaa_alloc.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
new file mode 100644
index 0000000..fb3c50e
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -0,0 +1,311 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_branch_prediction.h>
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "bman_priv.h"
+#include <sys/ioctl.h>
+
+/*
+ * Global variables for the max portal/pool numbers this BMan version supports
+ */
+u16 bman_ip_rev;
+u16 bman_pool_max;
+void *bman_ccsr_map;
+
+/*****************/
+/* Portal driver */
+/*****************/
+
+static __thread int fd = -1;
+static __thread struct bm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_bman
+};
+
+static int fsl_bman_portal_init(uint32_t idx, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!");
+		return -EINVAL;
+	}
+	/* Allocate and map a bman portal */
+	map.index = idx;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	/* Record the portal's cache-[enabled|inhibited] region addresses */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+	pcfg.is_shared = is_shared;
+	pcfg.index = map.index;
+	bman_depletion_fill(&pcfg.mask);
+
+	fd = open(BMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("BMan irq init failed");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+	/* Use the IRQ FD as a unique IRQ number */
+	pcfg.irq = fd;
+
+	/* Set the IRQ number */
+	irq_map.type = dpaa_portal_bman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_bman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int bman_thread_init(void)
+{
+	/* Use any available portal; the thread's cpu affinity is verified
+	 * inside fsl_bman_portal_init().
+	 */
+	return fsl_bman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int bman_thread_finish(void)
+{
+	return fsl_bman_portal_finish();
+}
+
+void bman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int bman_init_ccsr(const struct device_node *node)
+{
+	static int ccsr_map_fd;
+	uint64_t phys_addr;
+	const uint32_t *bman_addr;
+	uint64_t regs_size;
+
+	bman_addr = of_get_address(node, 0, &regs_size, NULL);
+	if (!bman_addr) {
+		pr_err("of_get_address cannot return BMan address");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(node, bman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open(BMAN_CCSR_MAP, O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Cannot open /dev/mem for BMan CCSR map");
+		return ccsr_map_fd;
+	}
+
+	bman_ccsr_map = mmap(NULL, regs_size, PROT_READ |
+			     PROT_WRITE, MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (bman_ccsr_map == MAP_FAILED) {
+		pr_err("Cannot map BMan CCSR base: "
+		       "0x%x, phys: 0x%lx, size: 0x%lx",
+		       *bman_addr, phys_addr, regs_size);
+		close(ccsr_map_fd);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int bman_global_init(void)
+{
+	const struct device_node *dt_node;
+	static int done;
+
+	if (done)
+		return -EBUSY;
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,bman-portal");
+	if (!dt_node) {
+		pr_err("No bman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,bman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,bman-portal-1.0.0")) {
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.0.8")) {
+		bman_ip_rev = BMAN_REV20;
+		bman_pool_max = 8;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.3")) {
+		bman_ip_rev = BMAN_REV21;
+		bman_pool_max = 64;
+	} else {
+		pr_warn("unknown BMan version in portal node, defaulting "
+			"to rev1.0");
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	}
+
+	if (!bman_ip_rev) {
+		pr_err("Unknown bman portal version\n");
+		return -ENODEV;
+	}
+	{
+		const struct device_node *dn = of_find_compatible_node(NULL,
+							NULL, "fsl,bman");
+		if (!dn)
+			pr_err("No bman device node available");
+
+		if (bman_init_ccsr(dn))
+			pr_err("BMan CCSR map failed.");
+	}
+
+	done = 1;
+	return 0;
+}
+
+#define BMAN_POOL_CONTENT(n) (0x0600 + ((n) * 0x04))
+u32 bm_pool_free_buffers(u32 bpid)
+{
+	return in_be32(bman_ccsr_map + BMAN_POOL_CONTENT(bpid));
+}
+
+static u32 __generate_thresh(u32 val, int roundup)
+{
+	u32 e = 0;	/* exponent; 'val' becomes the 8-bit coefficient */
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	DPAA_ASSERT(e < 0x10);
+	return (val | (e << 8));
+}
+
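+/*
+ * Worked example: __generate_thresh(300, 0) halves 300 once to fit 8 bits
+ * (coefficient 150, exponent 1) and returns 0x196, i.e. a threshold of
+ * 150 * 2^1 = 300 buffers.
+ *
+ * bm_pool_set() below takes thresholds[4] in the order: SW depletion-entry,
+ * SW depletion-exit, HW depletion-entry, HW depletion-exit (the *DET/*DXT
+ * registers below).
+ */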
+#define POOL_SWDET(n)       (0x0000 + ((n) * 0x04))
+#define POOL_HWDET(n)       (0x0100 + ((n) * 0x04))
+#define POOL_SWDXT(n)       (0x0200 + ((n) * 0x04))
+#define POOL_HWDXT(n)       (0x0300 + ((n) * 0x04))
+int bm_pool_set(u32 bpid, const u32 *thresholds)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	out_be32(bman_ccsr_map + POOL_SWDET(bpid),
+		 __generate_thresh(thresholds[0], 0));
+	out_be32(bman_ccsr_map + POOL_SWDXT(bpid),
+		 __generate_thresh(thresholds[1], 1));
+	out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+		 __generate_thresh(thresholds[2], 0));
+	out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+		 __generate_thresh(thresholds[3], 1));
+	return 0;
+}
+
+#define BMAN_LOW_DEFAULT_THRESH		0x40
+#define BMAN_HIGH_DEFAULT_THRESH		0x80
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	if (low_thresh && high_thresh) {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(low_thresh, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(high_thresh, 1));
+	} else {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(BMAN_LOW_DEFAULT_THRESH, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(BMAN_HIGH_DEFAULT_THRESH, 1));
+	}
+	return 0;
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman_priv.h b/drivers/bus/dpaa/base/qbman/bman_priv.h
new file mode 100644
index 0000000..07d9cec
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_priv.h
@@ -0,0 +1,125 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_PRIV_H
+#define __BMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_bman.h>
+
+/* Revision info (for errata and feature handling) */
+#define BMAN_REV10 0x0100
+#define BMAN_REV20 0x0200
+#define BMAN_REV21 0x0201
+
+#define BMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+#define BMAN_CCSR_MAP "/dev/mem"
+
+/* This mask contains all the "irqsource" bits visible to API users */
+#define BM_PIRQ_VISIBLE	(BM_PIRQ_RCRI | BM_PIRQ_BSCN)
+
+/* These are bm_<reg>_<verb>(). So for example, bm_isr_disable_write() means
+ * "write the ISR disable register" rather than "disable the ability to write".
+ */
+#define bm_isr_status_read(bm)		__bm_isr_read(bm, bm_isr_status)
+#define bm_isr_status_clear(bm, m)	__bm_isr_write(bm, bm_isr_status, m)
+#define bm_isr_enable_read(bm)		__bm_isr_read(bm, bm_isr_enable)
+#define bm_isr_enable_write(bm, v)	__bm_isr_write(bm, bm_isr_enable, v)
+#define bm_isr_disable_read(bm)		__bm_isr_read(bm, bm_isr_disable)
+#define bm_isr_disable_write(bm, v)	__bm_isr_write(bm, bm_isr_disable, v)
+#define bm_isr_inhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 1)
+#define bm_isr_uninhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 0)
+
+/*
+ * Global variables for the max portal/pool numbers this BMan version supports
+ */
+extern u16 bman_pool_max;
+
+/* used by CCSR and portal interrupt code */
+enum bm_isr_reg {
+	bm_isr_status = 0,
+	bm_isr_enable = 1,
+	bm_isr_disable = 2,
+	bm_isr_inhibit = 3
+};
+
+struct bm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* This is used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.).
+	 */
+	int is_shared;
+	/* These are the buffer pool IDs that may be used via this portal. */
+	struct bman_depletion mask;
+
+};
+
+int bman_init_ccsr(const struct device_node *node);
+
+struct bman_portal *bman_create_affine_portal(
+			const struct bm_portal_config *config);
+const struct bm_portal_config *bman_destroy_affine_portal(void);
+
+/* Set depletion thresholds associated with a buffer pool. Requires that the
+ * operating system have access to Bman CCSR (ie. compiled in support and
+ * run-time access courtesy of the device-tree).
+ */
+int bm_pool_set(u32 bpid, const u32 *thresholds);
+
+/* Read the free buffer count for a given buffer pool */
+u32 bm_pool_free_buffers(u32 bpid);
+
+#endif /* __BMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
new file mode 100644
index 0000000..383106b
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -0,0 +1,375 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_BMAN_H
+#define __FSL_BMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* This wrapper represents a bit-array for the depletion state of the 64 Bman
+ * buffer pools.
+ */
+struct bman_depletion {
+	u32 state[2];
+};
+
+static inline void bman_depletion_init(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = 0;
+}
+
+static inline void bman_depletion_fill(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = ~0;
+}
+
+/* --- Bman data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct bm_rcr_entry;	/* RCR (Release Command Ring) entries */
+struct bm_mc_command;	/* MC (Management Command) command */
+struct bm_mc_result;	/* MC result */
+
+/* Code-reduction, define a wrapper for 48-bit buffers. In cases where a buffer
+ * pool id specific to this buffer is needed (BM_RCR_VERB_CMD_BPID_MULTI,
+ * BM_MCC_VERB_ACQUIRE), the 'bpid' field is used.
+ */
+struct bm_buffer {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1;
+			u8 bpid;
+			u16 hi; /* High 16-bits of 48-bit address */
+			u32 lo; /* Low 32-bits of 48-bit address */
+#else
+			u32 lo;
+			u16 hi;
+			u8 bpid;
+			u8 __reserved;
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:16;
+			u64 addr:48;
+#else
+			u64 addr:48;
+			u64 __notaddress:16;
+#endif
+		};
+		u64 opaque;
+	};
+} __attribute__((aligned(8)));
+static inline u64 bm_buffer_get64(const struct bm_buffer *buf)
+{
+	return buf->addr;
+}
+
+static inline dma_addr_t bm_buf_addr(const struct bm_buffer *buf)
+{
+	return (dma_addr_t)buf->addr;
+}
+
+#define bm_buffer_set64(buf, v) \
+	do { \
+		struct bm_buffer *__buf931 = (buf); \
+		__buf931->hi = upper_32_bits(v); \
+		__buf931->lo = lower_32_bits(v); \
+	} while (0)
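+
+/*
+ * Illustrative only (the address value is hypothetical):
+ *
+ *	struct bm_buffer buf;
+ *	bm_buffer_set64(&buf, 0x123456789aULL);
+ *	bm_buffer_get64(&buf) now returns 0x123456789a
+ */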
+
+/* See 1.5.3.5.4: "Release Command" */
+struct bm_rcr_entry {
+	union {
+		struct {
+			u8 __dont_write_directly__verb;
+			u8 bpid; /* used with BM_RCR_VERB_CMD_BPID_SINGLE */
+			u8 __reserved1[62];
+		};
+		struct bm_buffer bufs[8];
+	};
+} __packed;
+#define BM_RCR_VERB_VBIT		0x80
+#define BM_RCR_VERB_CMD_MASK		0x70	/* one of two values; */
+#define BM_RCR_VERB_CMD_BPID_SINGLE	0x20
+#define BM_RCR_VERB_CMD_BPID_MULTI	0x30
+#define BM_RCR_VERB_BUFCOUNT_MASK	0x0f	/* values 1..8 */
+
+/* See 1.5.3.1: "Acquire Command" */
+/* See 1.5.3.2: "Query Command" */
+struct bm_mcc_acquire {
+	u8 bpid;
+	u8 __reserved1[62];
+} __packed;
+struct bm_mcc_query {
+	u8 __reserved2[63];
+} __packed;
+struct bm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct bm_mcc_acquire acquire;
+		struct bm_mcc_query query;
+	};
+} __packed;
+#define BM_MCC_VERB_VBIT		0x80
+#define BM_MCC_VERB_CMD_MASK		0x70	/* where the verb contains; */
+#define BM_MCC_VERB_CMD_ACQUIRE		0x10
+#define BM_MCC_VERB_CMD_QUERY		0x40
+#define BM_MCC_VERB_ACQUIRE_BUFCOUNT	0x0f	/* values 1..8 go here */
+
+/* See 1.5.3.3: "Acquire Response" */
+/* See 1.5.3.4: "Query Response" */
+struct bm_pool_state {
+	u8 __reserved1[32];
+	/* "availability state" and "depletion state" */
+	struct {
+		u8 __reserved1[8];
+		/* Access using bman_depletion_***() */
+		struct bman_depletion state;
+	} as, ds;
+};
+
+struct bm_mc_result {
+	union {
+		struct {
+			u8 verb;
+			u8 __reserved1[63];
+		};
+		union {
+			struct {
+				u8 __reserved1;
+				u8 bpid;
+				u8 __reserved2[62];
+			};
+			struct bm_buffer bufs[8];
+		} acquire;
+		struct bm_pool_state query;
+	};
+} __packed;
+#define BM_MCR_VERB_VBIT		0x80
+#define BM_MCR_VERB_CMD_MASK		BM_MCC_VERB_CMD_MASK
+#define BM_MCR_VERB_CMD_ACQUIRE		BM_MCC_VERB_CMD_ACQUIRE
+#define BM_MCR_VERB_CMD_QUERY		BM_MCC_VERB_CMD_QUERY
+#define BM_MCR_VERB_CMD_ERR_INVALID	0x60
+#define BM_MCR_VERB_CMD_ERR_ECC		0x70
+#define BM_MCR_VERB_ACQUIRE_BUFCOUNT	BM_MCC_VERB_ACQUIRE_BUFCOUNT /* 0..8 */
+
+/* Portal and Buffer Pools */
+/* Represents a managed portal */
+struct bman_portal;
+
+/* This object type represents Bman buffer pools. */
+struct bman_pool;
+
+/* This struct specifies parameters for a bman_pool object. */
+struct bman_pool_params {
+	/* index of the buffer pool to encapsulate (0-63), ignored if
+	 * BMAN_POOL_FLAG_DYNAMIC_BPID is set.
+	 */
+	u32 bpid;
+	/* bit-mask of BMAN_POOL_FLAG_*** options */
+	u32 flags;
+	/* depletion-entry/exit thresholds, if BMAN_POOL_FLAG_THRESH is set. NB:
+	 * this is only allowed if BMAN_POOL_FLAG_DYNAMIC_BPID is used *and*
+	 * when run in the control plane (which controls Bman CCSR). This array
+	 * matches the definition of bm_pool_set().
+	 */
+	u32 thresholds[4];
+};
+
+/* Flags to bman_new_pool() */
+#define BMAN_POOL_FLAG_NO_RELEASE    0x00000001 /* can't release to pool */
+#define BMAN_POOL_FLAG_ONLY_RELEASE  0x00000002 /* can only release to pool */
+#define BMAN_POOL_FLAG_DYNAMIC_BPID  0x00000008 /* (de)allocate bpid */
+#define BMAN_POOL_FLAG_THRESH        0x00000010 /* set depletion thresholds */
+
+/* Flags to bman_release() */
+#define BMAN_RELEASE_FLAG_NOW        0x00000008 /* issue immediate release */
+
+
+/**
+ * bman_get_portal_index - get portal configuration index
+ */
+int bman_get_portal_index(void);
+
+/**
+ * bman_rcr_is_empty - Determine if portal's RCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * releases for the local portal have been processed by Bman but can't use the
+ * BMAN_RELEASE_FLAG_WAIT_SYNC flag to do this from the final bman_release().
+ * The function forces tracking of RCR consumption (which normally doesn't
+ * happen until release processing needs to find space to put new release
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int bman_rcr_is_empty(void);
+
+/**
+ * bman_alloc_bpid_range - Allocate a contiguous range of BPIDs
+ * @result: is set by the API to the base BPID of the allocated range
+ * @count: the number of BPIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count BPIDs
+ *
+ * Returns the number of buffer pools allocated, or a negative error code. If
+ * @partial is non zero, the allocation request may return a smaller range of
+ * BPs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int bman_alloc_bpid(u32 *result)
+{
+	int ret = bman_alloc_bpid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * bman_release_bpid_range - Release the specified range of buffer pool IDs
+ * @bpid: the base BPID of the range to deallocate
+ * @count: the number of BPIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of BPIDs
+ * that it can subsequently allocate from.
+ */
+void bman_release_bpid_range(u32 bpid, unsigned int count);
+static inline void bman_release_bpid(u32 bpid)
+{
+	bman_release_bpid_range(bpid, 1);
+}
+
+int bman_reserve_bpid_range(u32 bpid, unsigned int count);
+static inline int bman_reserve_bpid(u32 bpid)
+{
+	return bman_reserve_bpid_range(bpid, 1);
+}
+
+void bman_seed_bpid_range(u32 bpid, unsigned int count);
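+
+/*
+ * Illustrative allocator round-trip (sketch only):
+ *
+ *	u32 bpid;
+ *	if (!bman_alloc_bpid(&bpid)) {
+ *		... use buffer pool 'bpid' ...
+ *		bman_release_bpid(bpid);
+ *	}
+ */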
+
+int bman_shutdown_pool(u32 bpid);
+
+/**
+ * bman_new_pool - Allocates a Buffer Pool object
+ * @params: parameters specifying the buffer pool ID and behaviour
+ *
+ * Creates a pool object for the given @params. NB, the fields from @params
+ * are copied into the new pool object, so the structure provided by the
+ * caller can be released or reused after the function returns.
+ */
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
+
+/**
+ * bman_free_pool - Deallocates a Buffer Pool object
+ * @pool: the pool object to release
+ */
+void bman_free_pool(struct bman_pool *pool);
+
+/**
+ * bman_get_params - Returns a pool object's parameters.
+ * @pool: the pool object
+ *
+ * The returned pointer refers to state within the pool object so must not be
+ * modified and can no longer be read once the pool object is destroyed.
+ */
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
+
+/**
+ * bman_release - Release buffer(s) to the buffer pool
+ * @pool: the buffer pool object to release to
+ * @bufs: an array of buffers to release
+ * @num: the number of buffers in @bufs (1-8)
+ * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
+ *
+ * Releases the given buffers to the pool. Returns zero on success or a
+ * negative error code.
+ */
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
+/**
+ * bman_acquire - Acquire buffer(s) from a buffer pool
+ * @pool: the buffer pool object to acquire from
+ * @bufs: array for storing the acquired buffers
+ * @num: the number of buffers desired (@bufs is at least this big)
+ * @flags: bit-mask of options (currently unused)
+ *
+ * Issues an "Acquire" command via the portal's management command interface.
+ * The return value will be the number of buffers obtained from the pool, or a
+ * negative error code if a h/w error or pool starvation was encountered.
+ */
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags);
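+
+/*
+ * Illustrative release/acquire round-trip on a dynamically-allocated pool
+ * (error handling elided; 'phys' is a hypothetical buffer address):
+ *
+ *	struct bman_pool_params params = {
+ *		.flags = BMAN_POOL_FLAG_DYNAMIC_BPID,
+ *	};
+ *	struct bman_pool *pool = bman_new_pool(&params);
+ *	struct bm_buffer buf;
+ *
+ *	bm_buffer_set64(&buf, phys);
+ *	bman_release(pool, &buf, 1, 0);
+ *	if (bman_acquire(pool, &buf, 1, 0) == 1)
+ *		... 'buf' holds a buffer from the pool ...
+ */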
+
+/**
+ * bman_query_pools - Query all buffer pool states
+ * @state: storage for the queried availability and depletion states
+ */
+int bman_query_pools(struct bm_pool_state *state);
+
+/**
+ * bman_query_free_buffers - Query how many free buffers are in a buffer pool
+ * @pool: the buffer pool object to query
+ *
+ * Returns the number of free buffers in the pool.
+ */
+u32 bman_query_free_buffers(struct bman_pool *pool);
+
+/**
+ * bman_update_pool_thresholds - Change the buffer pool's depletion thresholds
+ * @pool: the buffer pool object to which the thresholds will be set
+ * @thresholds: the new thresholds
+ */
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds);
+
+/**
+ * bm_pool_set_hw_threshold - Change the buffer pool's thresholds
+ * @bpid: the buffer pool ID
+ * @low_thresh: low threshold
+ * @high_thresh: high threshold
+ */
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_BMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index a4897b0..a3243af 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -50,7 +50,9 @@ extern "C" {
 
 /* Thread-entry/exit hooks; */
 int qman_thread_init(void);
+int bman_thread_init(void);
 int qman_thread_finish(void);
+int bman_thread_finish(void);
 
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
@@ -92,9 +94,12 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal);
  * into another blocking read/select/poll.
  */
 void qman_thread_irq(void);
+void bman_thread_irq(void);
 
 /* Global setup */
 int qman_global_init(void);
+int bman_global_init(void);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 14/38] bus/dpaa: add support for FMAN frame queue lookup
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (12 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 13/38] bus/dpaa: add BMAN driver core Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 15/38] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
                   ` (24 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/qbman/qman.c        | 99 ++++++++++++++++++++++++++++++-
 drivers/bus/dpaa/base/qbman/qman_driver.c |  7 ++-
 drivers/bus/dpaa/base/qbman/qman_priv.h   | 11 ++++
 drivers/bus/dpaa/include/fsl_qman.h       | 12 ++++
 4 files changed, 126 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index d46e96a..2056842 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -176,6 +176,65 @@ static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
 	return fqtree_find(&p->retire_table, fqid);
 }
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+static void **qman_fq_lookup_table;
+static size_t qman_fq_lookup_table_size;
+
+int qman_setup_fq_lookup_table(size_t num_entries)
+{
+	/* Allocate 1 more entry since the first entry is not used */
+	num_entries++;
+	qman_fq_lookup_table = vmalloc((num_entries * sizeof(void *)));
+	if (!qman_fq_lookup_table) {
+		pr_err("QMan: Could not allocate fq lookup table\n");
+		return -ENOMEM;
+	}
+	memset(qman_fq_lookup_table, 0, num_entries * sizeof(void *));
+	qman_fq_lookup_table_size = num_entries;
+	pr_info("QMan: Allocated lookup table at %p, entry count %lu\n",
+		qman_fq_lookup_table,
+		(unsigned long)qman_fq_lookup_table_size);
+	return 0;
+}
+
+/* global structure that maintains fq object mapping */
+static DEFINE_SPINLOCK(fq_hash_table_lock);
+
+static int find_empty_fq_table_entry(u32 *entry, struct qman_fq *fq)
+{
+	u32 i;
+
+	spin_lock(&fq_hash_table_lock);
+	/* Can't use index zero because this has special meaning
+	 * in context_b field.
+	 */
+	for (i = 1; i < qman_fq_lookup_table_size; i++) {
+		if (qman_fq_lookup_table[i] == NULL) {
+			*entry = i;
+			qman_fq_lookup_table[i] = fq;
+			spin_unlock(&fq_hash_table_lock);
+			return 0;
+		}
+	}
+	spin_unlock(&fq_hash_table_lock);
+	return -ENOMEM;
+}
+
+static void clear_fq_table_entry(u32 entry)
+{
+	spin_lock(&fq_hash_table_lock);
+	BUG_ON(entry >= qman_fq_lookup_table_size);
+	qman_fq_lookup_table[entry] = NULL;
+	spin_unlock(&fq_hash_table_lock);
+}
+
+static inline struct qman_fq *get_fq_table_entry(u32 entry)
+{
+	BUG_ON(entry >= qman_fq_lookup_table_size);
+	return qman_fq_lookup_table[entry];
+}
+#endif
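+
+/*
+ * Illustrative flow on 64-bit builds: qman_create_fq() stores the fq pointer
+ * via find_empty_fq_table_entry(&fq->key, fq), qman_init_fq() writes the
+ * 32-bit 'key' into the FQD context_b field, and dequeue/ERN/state-change
+ * processing recovers the fq object with get_fq_table_entry().
+ */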
+
 static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
 {
 	/* Byteswap the FQD to HW format */
@@ -766,8 +825,13 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 				break;
 			case QM_MR_VERB_FQPN:
 				/* Parked */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+				fq = get_fq_table_entry(
+					be32_to_cpu(msg->fq.contextB));
+#else
 				fq = (void *)(uintptr_t)
 					be32_to_cpu(msg->fq.contextB);
+#endif
 				fq_state_change(p, fq, msg, verb);
 				if (fq->cb.fqs)
 					fq->cb.fqs(p, fq, &swapped_msg);
@@ -792,7 +856,11 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 			}
 		} else {
 			/* Its a software ERN */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(be32_to_cpu(msg->ern.tag));
+#else
 			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+#endif
 			fq->cb.ern(p, fq, &swapped_msg);
 		}
 		num++;
@@ -907,7 +975,11 @@ static inline unsigned int __poll_portal_fast(struct qman_portal *p,
 				clear_vdqcr(p, fq);
 		} else {
 			/* SDQCR: context_b points to the FQ */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(dq->contextB);
+#else
 			fq = (void *)(uintptr_t)dq->contextB;
+#endif
 			/* Now let the callback do its stuff */
 			res = fq->cb.dqrr(p, fq, dq);
 			/*
@@ -1119,7 +1191,12 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
 	fq->flags = flags;
 	fq->state = qman_fq_state_oos;
 	fq->cgr_groupid = 0;
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	if (unlikely(find_empty_fq_table_entry(&fq->key, fq))) {
+		pr_info("Find empty table entry failed\n");
+		return -ENOMEM;
+	}
+#endif
 	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
 		return 0;
 	/* Everything else is AS_IS support */
@@ -1193,7 +1270,9 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
 	case qman_fq_state_oos:
 		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
 			qman_release_fqid(fq->fqid);
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		clear_fq_table_entry(fq->key);
+#endif
 		return;
 	default:
 		break;
@@ -1258,7 +1337,11 @@ int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
 		dma_addr_t phys_fq;
 
 		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		mcc->initfq.fqd.context_b = fq->key;
+#else
 		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+#endif
 		/*
 		 *  and the physical address - NB, if the user wasn't trying to
 		 * set CONTEXTA, clear the stashing settings.
@@ -1419,7 +1502,11 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
 			msg.verb = QM_MR_VERB_FQRNI;
 			msg.fq.fqs = mcr->alterfq.fqs;
 			msg.fq.fqid = fq->fqid;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			msg.fq.contextB = fq->key;
+#else
 			msg.fq.contextB = (u32)(uintptr_t)fq;
+#endif
 			fq->cb.fqs(p, fq, &msg);
 		}
 	} else if (res == QM_MCR_RESULT_PENDING) {
@@ -1861,7 +1948,11 @@ static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
 					QM_EQCR_DCA_PARK : 0) |
 			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
 	eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	eq->tag = cpu_to_be32(fq->key);
+#else
 	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 	eq->fd = *fd;
 	cpu_to_hw_fd(&eq->fd);
 	return eq;
@@ -1907,7 +1998,11 @@ int qman_enqueue_multi(struct qman_fq *fq,
 	/* try to send as many frames as possible */
 	while (eqcr->available && frames_to_send--) {
 		eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		eq->tag = cpu_to_be32(fq->key);
+#else
 		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 		eq->fd.opaque_addr = fd->opaque_addr;
 		eq->fd.addr = cpu_to_be40(fd->addr);
 		eq->fd.status = cpu_to_be32(fd->status);
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index a7faf17..5c535dd 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -279,5 +279,10 @@ int qman_global_init(void)
 	else
 		qman_clk = be32_to_cpu(*clk);
 
-	return ret;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	ret = qman_setup_fq_lookup_table(CONFIG_FSL_QMAN_FQ_LOOKUP_MAX);
+	if (ret)
+		return ret;
+#endif
+	return 0;
 }
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index 4ae2ea5..e9826c2 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -44,6 +44,10 @@
 #include "dpaa_sys.h"
 #include <fsl_qman.h>
 
+#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
+#error "RTE_ARCH_ARM64 requires CONFIG_FSL_QMAN_FQ_LOOKUP"
+#endif
+
 /* Congestion Groups */
 /*
  * This wrapper represents a bit-array for the state of the 256 QMan congestion
@@ -197,6 +201,13 @@ void qm_set_liodns(struct qm_portal_config *pcfg);
 int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
 		       struct qm_mcr_cgrtestwrite *result);
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+/* If the size of the fq object pointer is greater than that of the context_b
+ * field, then a lookup table is required.
+ */
+int qman_setup_fq_lookup_table(size_t num_entries);
+#endif
+
 /*   QMan s/w corenet portal, low-level i/face	 */
 
 /*
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 7d9ad00..1867a66 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -46,6 +46,15 @@ extern "C" {
 
 #include <dpaa_rbtree.h>
 
+/* FQ lookups (turn this on for 64-bit user-space) */
+#if (__WORDSIZE == 64)
+#define CONFIG_FSL_QMAN_FQ_LOOKUP
+/* if FQ lookups are supported, this controls the number of initialised,
+ * s/w-consumed FQs that can be supported at any one time.
+ */
+#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
+#endif
+
 /* Last updated for v00.800 of the BG */
 
 /* Hardware constants */
@@ -1245,6 +1254,9 @@ struct qman_fq {
 	enum qman_fq_state state;
 	int cgr_groupid;
 	struct rb_node node;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	u32 key;
+#endif
 };
 
 /*
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 15/38] bus/dpaa: add BMan hardware interfaces
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (13 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 14/38] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 16/38] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
                   ` (23 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman.c        | 394 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman.h        | 550 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_driver.c |  12 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |  16 +
 5 files changed, 973 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 24dfa13..6d0c5ee 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -71,6 +71,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman.c \
 	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman.c b/drivers/bus/dpaa/base/qbman/bman.c
new file mode 100644
index 0000000..a0bea62
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.c
@@ -0,0 +1,394 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "bman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define RCR_THRESH	2	/* reread h/w CI when running out of space */
+#define IRQNAME		"BMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "BMan portal %d" */
+
+struct bman_portal {
+	struct bm_portal p;
+	/* 2-element array. pools[0] is mask, pools[1] is snapshot. */
+	struct bman_depletion *pools;
+	int thresh_set;
+	unsigned long irq_sources;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct bm_portal_config *config;
+	char irqname[MAX_IRQNAME];
+};
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static DEFINE_PER_CPU(struct bman_portal, bman_affine_portal);
+
+static inline struct bman_portal *get_affine_portal(void)
+{
+	return &get_cpu_var(bman_affine_portal);
+}
+
+/*
+ * This object type refers to a pool; it isn't *the* pool. There may be
+ * more than one such object per BMan buffer pool, eg. if different users of
+ * the pool are operating via different portals.
+ */
+struct bman_pool {
+	struct bman_pool_params params;
+	/* Used for hash-table admin when using depletion notifications. */
+	struct bman_portal *portal;
+	struct bman_pool *next;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	atomic_t in_use;
+#endif
+};
+
+static inline
+struct bman_portal *bman_create_portal(struct bman_portal *portal,
+				       const struct bm_portal_config *c)
+{
+	struct bm_portal *p;
+	const struct bman_depletion *pools = &c->mask;
+	int ret;
+	u8 bpid = 0;
+
+	p = &portal->p;
+	/*
+	 * prep the low-level portal struct with the mapped addresses from the
+	 * config, everything that follows depends on it and "config" is more
+	 * for (de)reference...
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	if (bm_rcr_init(p, bm_rcr_pvb, bm_rcr_cce)) {
+		pr_err("Bman RCR initialisation failed\n");
+		return NULL;
+	}
+	if (bm_mc_init(p)) {
+		pr_err("Bman MC initialisation failed\n");
+		goto fail_mc;
+	}
+	portal->pools = kmalloc(2 * sizeof(*pools), GFP_KERNEL);
+	if (!portal->pools)
+		goto fail_pools;
+	portal->pools[0] = *pools;
+	bman_depletion_init(portal->pools + 1);
+	while (bpid < bman_pool_max) {
+		/*
+		 * Default to all BPIDs disabled, we enable as required at
+		 * run-time.
+		 */
+		bm_isr_bscn_mask(p, bpid, 0);
+		bpid++;
+	}
+	portal->slowpoll = 0;
+	/* Write-to-clear any stale interrupt status bits */
+	bm_isr_disable_write(p, 0xffffffff);
+	portal->irq_sources = 0;
+	bm_isr_enable_write(p, portal->irq_sources);
+	bm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, NULL, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need RCR to be empty before continuing */
+	ret = bm_rcr_get_fill(p);
+	if (ret) {
+		pr_err("Bman RCR unclean\n");
+		goto fail_rcr_empty;
+	}
+	/* Success */
+	portal->config = c;
+
+	bm_isr_disable_write(p, 0);
+	bm_isr_uninhibit(p);
+	return portal;
+fail_rcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->pools);
+fail_pools:
+	bm_mc_finish(p);
+fail_mc:
+	bm_rcr_finish(p);
+	return NULL;
+}
+
+struct bman_portal *
+bman_create_affine_portal(const struct bm_portal_config *c)
+{
+	struct bman_portal *portal = get_affine_portal();
+
+	/* This function is called from a context that is already affine to a
+	 * CPU - in other words, it is non-migratable to other CPUs.
+	 */
+	portal = bman_create_portal(portal, c);
+	if (portal) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		spin_unlock(&affine_mask_lock);
+	}
+	return portal;
+}
+
+static inline
+void bman_destroy_portal(struct bman_portal *bm)
+{
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bm_rcr_cce_update(&bm->p);
+	bm_rcr_cce_update(&bm->p);
+
+	free_irq(pcfg->irq, bm);
+
+	kfree(bm->pools);
+	bm_mc_finish(&bm->p);
+	bm_rcr_finish(&bm->p);
+	bm->config = NULL;
+}
+
+const struct
+bm_portal_config *bman_destroy_affine_portal(void)
+{
+	struct bman_portal *bm = get_affine_portal();
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bman_destroy_portal(bm);
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(pcfg->cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int
+bman_get_portal_index(void)
+{
+	struct bman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+static const u32 zero_thresholds[4] = {0, 0, 0, 0};
+
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params)
+{
+	struct bman_pool *pool = NULL;
+	u32 bpid;
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID) {
+		int ret = bman_alloc_bpid(&bpid);
+
+		if (ret)
+			return NULL;
+	} else {
+		if (params->bpid >= bman_pool_max)
+			return NULL;
+		bpid = params->bpid;
+	}
+	if (params->flags & BMAN_POOL_FLAG_THRESH) {
+		int ret = bm_pool_set(bpid, params->thresholds);
+
+		if (ret)
+			goto err;
+	}
+
+	pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		goto err;
+	pool->params = *params;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	atomic_set(&pool->in_use, 1);
+#endif
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		pool->params.bpid = bpid;
+
+	return pool;
+err:
+	if (params->flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(bpid, zero_thresholds);
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(bpid);
+	kfree(pool);
+
+	return NULL;
+}
+
+void bman_free_pool(struct bman_pool *pool)
+{
+	if (pool->params.flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(pool->params.bpid, zero_thresholds);
+	if (pool->params.flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(pool->params.bpid);
+	kfree(pool);
+}
+
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool)
+{
+	return &pool->params;
+}
+
+static void update_rcr_ci(struct bman_portal *p, int avail)
+{
+	if (avail)
+		bm_rcr_cce_prefetch(&p->p);
+	else
+		bm_rcr_cce_update(&p->p);
+}
+
+#define BMAN_BUF_MASK 0x0000fffffffffffful
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p;
+	struct bm_rcr_entry *r;
+	u32 i = num - 1;
+	u8 avail;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_NO_RELEASE)
+		return -EINVAL;
+#endif
+
+	p = get_affine_portal();
+	avail = bm_rcr_get_avail(&p->p);
+	if (avail < 2)
+		update_rcr_ci(p, avail);
+	r = bm_rcr_start(&p->p);
+	if (unlikely(!r))
+		return -EBUSY;
+
+	/*
+	 * We can copy all but the first entry verbatim: the first entry
+	 * overlaps the verb byte, so writing it wholesale could trigger
+	 * badness with the valid-bit.
+	 */
+	r->bufs[0].opaque =
+		cpu_to_be64(((u64)pool->params.bpid << 48) |
+			    (bufs[0].opaque & BMAN_BUF_MASK));
+	if (i) {
+		for (i = 1; i < num; i++)
+			r->bufs[i].opaque =
+				cpu_to_be64(bufs[i].opaque & BMAN_BUF_MASK);
+	}
+
+	bm_rcr_pvb_commit(&p->p, BM_RCR_VERB_CMD_BPID_SINGLE |
+			  (num & BM_RCR_VERB_BUFCOUNT_MASK));
+
+	return 0;
+}
+
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_command *mcc;
+	struct bm_mc_result *mcr;
+	int ret, i;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_ONLY_RELEASE)
+		return -EINVAL;
+#endif
+
+	mcc = bm_mc_start(&p->p);
+	mcc->acquire.bpid = pool->params.bpid;
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_ACQUIRE |
+			(num & BM_MCC_VERB_ACQUIRE_BUFCOUNT));
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	ret = mcr->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT;
+	if (bufs) {
+		for (i = 0; i < num; i++)
+			bufs[i].opaque =
+				be64_to_cpu(mcr->acquire.bufs[i].opaque);
+	}
+	if (ret != num)
+		ret = -ENOMEM;
+	return ret;
+}
+
+int bman_query_pools(struct bm_pool_state *state)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_result *mcr;
+
+	bm_mc_start(&p->p);
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_QUERY);
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & BM_MCR_VERB_CMD_MASK) ==
+		    BM_MCR_VERB_CMD_QUERY);
+	*state = mcr->query;
+	state->as.state.state[0] = be32_to_cpu(state->as.state.state[0]);
+	state->as.state.state[1] = be32_to_cpu(state->as.state.state[1]);
+	state->ds.state.state[0] = be32_to_cpu(state->ds.state.state[0]);
+	state->ds.state.state[1] = be32_to_cpu(state->ds.state.state[1]);
+	return 0;
+}
+
+u32 bman_query_free_buffers(struct bman_pool *pool)
+{
+	return bm_pool_free_buffers(pool->params.bpid);
+}
+
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds)
+{
+	u32 bpid;
+
+	bpid = bman_get_params(pool)->bpid;
+
+	return bm_pool_set(bpid, thresholds);
+}
+
+int bman_shutdown_pool(u32 bpid)
+{
+	struct bman_portal *p = get_affine_portal();
+	return bm_shutdown_pool(&p->p, bpid);
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman.h b/drivers/bus/dpaa/base/qbman/bman.h
new file mode 100644
index 0000000..dcca303
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.h
@@ -0,0 +1,550 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_H
+#define __BMAN_H
+
+#include "bman_priv.h"
+
+/* Cache-inhibited register offsets */
+#define BM_REG_RCR_PI_CINH	0x3000
+#define BM_REG_RCR_CI_CINH	0x3100
+#define BM_REG_RCR_ITR		0x3200
+#define BM_REG_CFG		0x3300
+#define BM_REG_SCN(n)		(0x3400 + ((n) << 6))
+#define BM_REG_ISR		0x3e00
+#define BM_REG_IIR              0x3ec0
+
+/* Cache-enabled register offsets */
+#define BM_CL_CR		0x0000
+#define BM_CL_RR0		0x0100
+#define BM_CL_RR1		0x0140
+#define BM_CL_RCR		0x1000
+#define BM_CL_RCR_PI_CENA	0x3000
+#define BM_CL_RCR_CI_CENA	0x3100
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrade performance. Hence the use of the __raw_*() interfaces, which
+ * simply ensure that the compiler treats the portal registers as volatile (ie.
+ * non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __bm_in(bm, o)		be32_to_cpu(__raw_readl((bm)->ci + (o)))
+#define __bm_out(bm, o, val)    __raw_writel(cpu_to_be32(val), \
+					     (bm)->ci + (o))
+#define bm_in(reg)		__bm_in(&portal->addr, BM_REG_##reg)
+#define bm_out(reg, val)	__bm_out(&portal->addr, BM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __bm_cl_touch_ro(bm, o) dcbt_ro((bm)->ce + (o))
+#define __bm_cl_touch_rw(bm, o) dcbt_rw((bm)->ce + (o))
+#define __bm_cl_in(bm, o)	be32_to_cpu(__raw_readl((bm)->ce + (o)))
+#define __bm_cl_out(bm, o, val) \
+	do { \
+		u32 *__tmpclout = (bm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __bm_cl_invalidate(bm, o) dccivac((bm)->ce + (o))
+#define bm_cl_touch_ro(reg) __bm_cl_touch_ro(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_touch_rw(reg) __bm_cl_touch_rw(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_in(reg)	    __bm_cl_in(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_out(reg, val) __bm_cl_out(&portal->addr, BM_CL_##reg##_CENA, val)
+#define bm_cl_invalidate(reg)\
+	__bm_cl_invalidate(&portal->addr, BM_CL_##reg##_CENA)
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 bm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
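+
+/* Worked example (illustrative): with ringsize == 8, first == 6 and
+ * last == 2, the ring has wrapped, so bm_cyc_diff(8, 6, 2) returns
+ * 8 + 2 - 6 == 4 entries in [first, last).
+ */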
+
+/* Portal modes.
+ *   Enum types:
+ *     pmode == production mode
+ *     cmode == consumption mode
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate:
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ */
+enum bm_rcr_pmode {		/* matches BCSP_CFG::RPM */
+	bm_rcr_pci = 0,		/* PI index, cache-inhibited */
+	bm_rcr_pce = 1,		/* PI index, cache-enabled */
+	bm_rcr_pvb = 2		/* valid-bit */
+};
+
+enum bm_rcr_cmode {		/* s/w-only */
+	bm_rcr_cci,		/* CI index, cache-inhibited */
+	bm_rcr_cce		/* CI index, cache-enabled */
+};
+
+/* --- Portal structures --- */
+
+#define BM_RCR_SIZE		8
+
+struct bm_rcr {
+	struct bm_rcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	u32 busy;
+	enum bm_rcr_pmode pmode;
+	enum bm_rcr_cmode cmode;
+#endif
+};
+
+struct bm_mc {
+	struct bm_mc_command *cr;
+	struct bm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum {
+		/* Can only be _mc_start()ed */
+		mc_idle,
+		/* Can only be _mc_commit()ed or _mc_abort()ed */
+		mc_user,
+		/* Can only be _mc_retry()ed */
+		mc_hw
+	} state;
+#endif
+};
+
+struct bm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct bm_portal {
+	struct bm_addr addr;
+	struct bm_rcr rcr;
+	struct bm_mc mc;
+	struct bm_portal_config config;
+} ____cacheline_aligned;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define RCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(BM_RCR_SIZE << 6)))
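+
+/* Note (illustrative): the ring is BM_RCR_SIZE (8) entries of 64 bytes each,
+ * i.e. a 512-byte aligned region, so the "carry bit" cleared above is bit
+ * 0x200 of the cursor address.
+ */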
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 RCR_PTR2IDX(struct bm_rcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (BM_RCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void RCR_INC(struct bm_rcr *rcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates
+	 * fast code with essentially no branching overheads. We increment to
+	 * the next RCR pointer and handle overflow and 'vbit'.
+	 */
+	struct bm_rcr_entry *partial = rcr->cursor + 1;
+
+	rcr->cursor = RCR_CARRYCLEAR(partial);
+	if (partial != rcr->cursor)
+		rcr->vbit ^= BM_RCR_VERB_VBIT;
+}
+
+static inline int bm_rcr_init(struct bm_portal *portal, enum bm_rcr_pmode pmode,
+			      __maybe_unused enum bm_rcr_cmode cmode)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct bm_rcr *rcr = &portal->rcr;
+	u32 cfg;
+	u8 pi;
+
+	rcr->ring = portal->addr.ce + BM_CL_RCR;
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	rcr->cursor = rcr->ring + pi;
+	rcr->vbit = (bm_in(RCR_PI_CINH) & BM_RCR_SIZE) ?  BM_RCR_VERB_VBIT : 0;
+	rcr->available = BM_RCR_SIZE - 1
+		- bm_cyc_diff(BM_RCR_SIZE, rcr->ci, pi);
+	rcr->ithresh = bm_in(RCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+	rcr->pmode = pmode;
+	rcr->cmode = cmode;
+#endif
+	cfg = (bm_in(CFG) & 0xffffffe0) | (pmode & 0x3); /* BCSP_CFG::RPM */
+	bm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void bm_rcr_finish(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	u8 ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	DPAA_ASSERT(!rcr->busy);
+	if (pi != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("losing uncommitted RCR entries\n");
+	if (ci != rcr->ci)
+		pr_crit("missing existing RCR completions\n");
+	if (rcr->ci != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("RCR destroyed unquiesced\n");
+}
+
+static inline struct bm_rcr_entry *bm_rcr_start(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(!rcr->busy);
+	if (!rcr->available)
+		return NULL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 1;
+#endif
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline struct bm_rcr_entry *bm_rcr_pend_and_next(
+					struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode != bm_rcr_pvb);
+	if (rcr->available == 1)
+		return NULL;
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcr->cursor);
+	RCR_INC(rcr);
+	rcr->available--;
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_pci_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pci);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	hwsync();
+	bm_out(RCR_PI_CINH, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	bm_cl_invalidate(RCR_PI);
+	bm_cl_touch_rw(RCR_PI);
+}
+
+static inline void bm_rcr_pce_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	lwsync();
+	bm_cl_out(RCR_PI, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pvb_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	struct bm_rcr_entry *rcursor;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pvb);
+	lwsync();
+	rcursor = rcr->cursor;
+	rcursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcursor);
+	RCR_INC(rcr);
+	rcr->available--;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline u8 bm_rcr_cci_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cci);
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline void bm_rcr_cce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	bm_cl_touch_ro(RCR_CI);
+}
+
+static inline u8 bm_rcr_cce_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	rcr->ci = bm_cl_in(RCR_CI) & (BM_RCR_SIZE - 1);
+	bm_cl_invalidate(RCR_CI);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline u8 bm_rcr_get_ithresh(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->ithresh;
+}
+
+static inline void bm_rcr_set_ithresh(struct bm_portal *portal, u8 ithresh)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	rcr->ithresh = ithresh;
+	bm_out(RCR_ITR, ithresh);
+}
+
+static inline u8 bm_rcr_get_avail(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->available;
+}
+
+static inline u8 bm_rcr_get_fill(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return BM_RCR_SIZE - 1 - rcr->available;
+}
+
+/* --- Management command API --- */
+
+static inline int bm_mc_init(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + BM_CL_CR;
+	mc->rr = portal->addr.ce + BM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			BM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? BM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+	return 0;
+}
+
+static inline void bm_mc_finish(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (mc->state != mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct bm_mc_command *bm_mc_start(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void bm_mc_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_user);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+}
+
+static inline void bm_mc_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_hw;
+#endif
+}
+
+static inline struct bm_mc_result *bm_mc_result(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= BM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+	return rr;
+}
+
+#define SCN_REG(bpid) BM_REG_SCN((bpid) / 32)
+#define SCN_BIT(bpid) (0x80000000 >> (bpid & 31))
+static inline void bm_isr_bscn_mask(struct bm_portal *portal, u8 bpid,
+				    int enable)
+{
+	u32 val;
+
+	DPAA_ASSERT(bpid < bman_pool_max);
+	/* REG_SCN for bpid=0..31, REG_SCN+4 for bpid=32..63 */
+	val = __bm_in(&portal->addr, SCN_REG(bpid));
+	if (enable)
+		val |= SCN_BIT(bpid);
+	else
+		val &= ~SCN_BIT(bpid);
+	__bm_out(&portal->addr, SCN_REG(bpid), val);
+}
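+
+/* Worked example (illustrative): bpid 33 gives SCN_REG(33) == BM_REG_SCN(1),
+ * i.e. offset 0x3440, and SCN_BIT(33) == 0x80000000 >> 1 == 0x40000000.
+ */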
+
+static inline u32 __bm_isr_read(struct bm_portal *portal, enum bm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 6));
+#else
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __bm_isr_write(struct bm_portal *portal, enum bm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 6), val);
+#else
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 2), val);
+#endif
+}
+
+/* Buffer Pool Cleanup */
+static inline int bm_shutdown_pool(struct bm_portal *p, u32 bpid)
+{
+	struct bm_mc_command *bm_cmd;
+	struct bm_mc_result *bm_res;
+
+	int aq_count = 0;
+	bool stop = false;
+
+	while (!stop) {
+		/* Acquire buffers until empty */
+		bm_cmd = bm_mc_start(p);
+		bm_cmd->acquire.bpid = bpid;
+		bm_mc_commit(p, BM_MCC_VERB_CMD_ACQUIRE | 1);
+		while (!(bm_res = bm_mc_result(p)))
+			cpu_relax();
+		if (!(bm_res->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT)) {
+			/* Pool is empty */
+			stop = true;
+		} else {
+			++aq_count;
+		}
+	}
+	return 0;
+}
+
+#endif /* __BMAN_H */
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
index fb3c50e..28f2cf2 100644
--- a/drivers/bus/dpaa/base/qbman/bman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -65,6 +65,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct bman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -111,6 +112,14 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 	/* Use the IRQ FD as a unique IRQ number */
 	pcfg.irq = fd;
 
+	portal = bman_create_affine_portal(&pcfg);
+	if (!portal) {
+		pr_err("Bman portal initialisation failed (%d)",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	/* Set the IRQ number */
 	irq_map.type = dpaa_portal_bman;
 	irq_map.portal_cinh = map.addr.cinh;
@@ -120,10 +129,13 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 
 static int fsl_bman_portal_finish(void)
 {
+	__maybe_unused const struct bm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = bman_destroy_affine_portal();
+	BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
index 690576a..35dba7f 100644
--- a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -41,6 +41,22 @@
 #include "dpaa_sys.h"
 #include <process.h>
 #include <fsl_qman.h>
+#include <fsl_bman.h>
+
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_bpid, result, count, align, partial);
+}
+
+void bman_release_bpid_range(u32 bpid, u32 count)
+{
+	process_release(dpaa_id_bpid, bpid, count);
+}
+
+int bman_reserve_bpid_range(u32 bpid, u32 count)
+{
+	return process_reserve(dpaa_id_bpid, bpid, count);
+}
 
 int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
 {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 16/38] bus/dpaa: add fman flow control threshold setting
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (14 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 15/38] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 17/38] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
                   ` (22 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman_hw.c | 28 ++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_fman.h  |  7 +++++++
 2 files changed, 35 insertions(+)
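
A usage sketch of the new API (illustrative only; 'fif' and 'bpid' stand in
for a port's fman_if pointer and buffer pool id, and the watermark values
are hypothetical):

	static int setup_fc(struct fman_if *fif, u32 bpid)
	{
		/* hypothetical watermarks: high = 0x10, low = 0x08 buffers */
		return fman_if_set_fc_threshold(fif, 0x10, 0x08, bpid);
	}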

diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 77908ec..7618fc1 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -37,6 +37,7 @@
  */
 #include <fsl_fman.h>
 #include <fsl_fman_crc64.h>
+#include <fsl_bman.h>
 
 /* Instantiate the global variable that the inline CRC64 implementation (in
  * <fsl_fman.h>) depends on.
@@ -437,6 +438,33 @@ fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
 }
 
 int
+fman_if_get_fc_threshold(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	return in_be32(fmbm_mpd);
+}
+
+int
+fman_if_set_fc_threshold(struct fman_if *fm_if, u32 high_water,
+			 u32 low_water, u32 bpid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	out_be32(fmbm_mpd, FMAN_ENABLE_BPOOL_DEPLETION);
+	return bm_pool_set_hw_threshold(bpid, low_water, high_water);
+}
+
+int
 fman_if_get_fc_quanta(struct fman_if *fm_if)
 {
 	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 0aff22c..b94bc56 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -120,6 +120,13 @@ void fman_if_loopback_disable(struct fman_if *);
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
+/* Get Flow Control threshold parameters on specific interface */
+int fman_if_get_fc_threshold(struct fman_if *fm_if);
+
+/* Enable and Set Flow Control threshold parameters on specific interface */
+int fman_if_set_fc_threshold(struct fman_if *fm_if,
+			u32 high_water, u32 low_water, u32 bpid);
+
 /* Get Flow Control pause quanta on specific interface */
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 17/38] bus/dpaa: integrate DPAA Bus with hardware blocks
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (15 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 16/38] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 18/38] doc: add NXP DPAA PMD documentation Shreyansh Jain
                   ` (21 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Now that the QBMAN (QMan, BMan) and FMan drivers are available, this
patch integrates them into the DPAA Bus driver, which uses them for
scanning devices and calling the PMD registered probe callbacks.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c               | 236 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  39 +++++
 drivers/bus/dpaa/rte_dpaa_bus.h           |   6 +
 3 files changed, 281 insertions(+)
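
For orientation, a minimal sketch of how a PMD attaches to this bus
(illustrative names throughout; assumes the rte_dpaa_driver layout with
drv_type/probe members from rte_dpaa_bus.h):

	#include <rte_dpaa_bus.h>

	static int
	dummy_probe(struct rte_dpaa_driver *drv __rte_unused,
		    struct rte_dpaa_device *dev __rte_unused)
	{
		/* threads touching QMan/BMan must own a portal first */
		return rte_dpaa_portal_init((void *)0);
	}

	static struct rte_dpaa_driver dummy_drv = {
		.drv_type = FSL_DPAA_ETH,
		.probe = dummy_probe,
	};

	static void __attribute__((constructor))
	dummy_drv_init(void)
	{
		rte_dpaa_driver_register(&dummy_drv);
	}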

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 1c4627d..6a6baf1 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -64,9 +64,19 @@
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
 
 struct rte_dpaa_bus rte_dpaa_bus;
+struct netcfg_info *dpaa_netcfg;
 
+/* define a variable to hold the portal_key, once created.*/
+pthread_key_t dpaa_portal_key;
+
+RTE_DEFINE_PER_LCORE(bool, _dpaa_io);
 
 static inline void
 dpaa_add_to_device_list(struct rte_dpaa_device *dev)
@@ -79,11 +89,237 @@ dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
 {
 	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
 }
+
+static void dpaa_clean_device_list(void);
+
+static int
+dpaa_create_device_list(void)
+{
+	int i;
+	int ret;
+	struct rte_dpaa_device *dev;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+
+	/* Creating Ethernet Devices */
+	for (i = 0; i < dpaa_netcfg->num_ethports; i++) {
+		dev = rte_zmalloc(NULL, sizeof(struct rte_dpaa_device),
+				  RTE_CACHE_LINE_SIZE);
+		if (!dev) {
+			DPAA_BUS_LOG(ERR, "Failed to allocate ETH devices");
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+
+		cfg = &dpaa_netcfg->port_cfg[i];
+		fman_intf = cfg->fman_if;
+
+		/* Device identifiers */
+		dev->id.fman_id = fman_intf->fman_idx + 1;
+		dev->id.mac_id = fman_intf->mac_idx;
+		dev->id.device_type = FSL_DPAA_ETH;
+		dev->id.dev_id = i;
+
+		/* Create device name */
+		memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		sprintf(dev->name, "fm%d-mac%d", (fman_intf->fman_idx + 1),
+			fman_intf->mac_idx);
+		DPAA_BUS_LOG(DEBUG, "Device added: %s", dev->name);
+
+		dpaa_add_to_device_list(dev);
+	}
+
+	rte_dpaa_bus.device_count = i;
+
+	return 0;
+
+cleanup:
+	dpaa_clean_device_list();
+	return ret;
+}
+
+static void
+dpaa_clean_device_list(void)
+{
+	struct rte_dpaa_device *dev = NULL;
+	struct rte_dpaa_device *tdev = NULL;
+
+	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+		rte_free(dev);
+		dev = NULL;
+	}
+}
+
+/** XXX move this function into a separate file */
+static int
+_dpaa_portal_init(void *arg)
+{
+	cpu_set_t cpuset;
+	pthread_t id;
+	uint32_t cpu = rte_lcore_id();
+	int ret;
+	struct dpaa_portal *dpaa_io_portal;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if ((uint64_t)arg == 1 || cpu == LCORE_ID_ANY)
+		cpu = rte_get_master_lcore();
+	/* bail out if the core id is not supported */
+	else if (cpu >= RTE_MAX_LCORE)
+		return -1;
+
+	/* Set CPU affinity for this thread */
+	CPU_ZERO(&cpuset);
+	CPU_SET(cpu, &cpuset);
+	id = pthread_self();
+	ret = pthread_setaffinity_np(id, sizeof(cpu_set_t), &cpuset);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "pthread_setaffinity_np failed on "
+			"core :%d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	/* Initialise bman thread portals */
+	ret = bman_thread_init();
+	if (ret) {
+		PMD_DRV_LOG(ERR, "bman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	PMD_DRV_LOG(DEBUG, "BMAN thread initialized");
+
+	/* Initialise qman thread portals */
+	ret = qman_thread_init();
+	if (ret) {
+		PMD_DRV_LOG(ERR, "qman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		bman_thread_finish();
+		return ret;
+	}
+
+	PMD_DRV_LOG(DEBUG, "QMAN thread initialized");
+
+	dpaa_io_portal = rte_malloc(NULL, sizeof(struct dpaa_portal),
+				    RTE_CACHE_LINE_SIZE);
+	if (!dpaa_io_portal) {
+		PMD_DRV_LOG(ERR, "Unable to allocate memory");
+		bman_thread_finish();
+		qman_thread_finish();
+		return -ENOMEM;
+	}
+
+	dpaa_io_portal->qman_idx = qman_get_portal_index();
+	dpaa_io_portal->bman_idx = bman_get_portal_index();
+	dpaa_io_portal->tid = syscall(SYS_gettid);
+
+	ret = pthread_setspecific(dpaa_portal_key, (void *)dpaa_io_portal);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "pthread_setspecific failed on "
+			    "core %d with ret: %d", cpu, ret);
+		dpaa_portal_finish(NULL);
+
+		return ret;
+	}
+
+	RTE_PER_LCORE(_dpaa_io) = true;
+
+	PMD_DRV_LOG(DEBUG, "DPAA portal initialized");
+
+	return 0;
+}
+
+/*
+ * rte_dpaa_portal_init - Wrapper over _dpaa_portal_init with thread level check
+ * XXX Complete this
+ */
+int
+rte_dpaa_portal_init(void *arg)
+{
+	if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
+		return _dpaa_portal_init(arg);
+
+	return 0;
+}
+
+void
+dpaa_portal_finish(void *arg)
+{
+	struct dpaa_portal *dpaa_io_portal = (struct dpaa_portal *)arg;
+
+	if (!dpaa_io_portal) {
+		PMD_DRV_LOG(DEBUG, "Portal already cleaned");
+		return;
+	}
+
+	bman_thread_finish();
+	qman_thread_finish();
+
+	pthread_setspecific(dpaa_portal_key, NULL);
+
+	rte_free(dpaa_io_portal);
+	dpaa_io_portal = NULL;
+
+	RTE_PER_LCORE(_dpaa_io) = false;
+}
+
 static int
 rte_dpaa_bus_scan(void)
 {
+	int ret;
+
 	PMD_INIT_FUNC_TRACE();
 
+	/* Load the device-tree driver */
+	ret = of_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "of_init failed with ret: %d", ret);
+		return -1;
+	}
+
+	/* Get the interface configurations from device-tree */
+	dpaa_netcfg = netcfg_acquire();
+	if (!dpaa_netcfg) {
+		DPAA_BUS_LOG(ERR, "netcfg_acquire failed");
+		return -EINVAL;
+	}
+
+	if (!dpaa_netcfg->num_ethports) {
+		DPAA_BUS_LOG(INFO, "no network interfaces available");
+		/* This is not an error */
+		return 0;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "Bus: Address of netcfg=%p, Ethports=%d",
+				dpaa_netcfg, dpaa_netcfg->num_ethports);
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dump_netcfg(dpaa_netcfg);
+#endif
+
+	DPAA_BUS_LOG(DEBUG, "Number of devices = %d\n",
+		    dpaa_netcfg->num_ethports);
+	ret = dpaa_create_device_list();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to create device list. (%d)", ret);
+		return ret;
+	}
+
+	/* Create the key, supplying a destructor that is invoked when a
+	 * portal-affined thread exits.
+	 */
+	ret = pthread_key_create(&dpaa_portal_key, dpaa_portal_finish);
+	if (ret) {
+		DPAA_BUS_LOG(DEBUG, "Unable to create pthread key. (%d)", ret);
+		dpaa_clean_device_list();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "dpaa_portal_key=%u, ret=%d\n",
+		    (unsigned int)dpaa_portal_key, ret);
+
 	return 0;
 }
 
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 8c1ea65..3d4dc88 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,7 +1,46 @@
 DPDK_17.08 {
 	global:
 
+	bman_acquire;
+	bman_free_pool;
+	bman_get_params;
+	bman_new_pool;
+	bman_release;
+	dpaa_netcfg;
+	fman_ccsr_map_fd;
+	fman_dealloc_bufs_mask_hi;
+	fman_dealloc_bufs_mask_lo;
+	fman_if_disable_rx;
+	fman_if_enable_rx;
+	fman_if_discard_rx_errors;
+	fman_if_get_fc_threshold;
+	fman_if_get_fc_quanta;
+	fman_if_promiscuous_disable;
+	fman_if_promiscuous_enable;
+	fman_if_reset_mcast_filter_table;
+	fman_if_set_bp;
+	fman_if_set_fc_threshold;
+	fman_if_set_fc_quanta;
+	fman_if_set_fdoff;
+	fman_if_set_ic_params;
+	fman_if_set_maxfrm;
+	fman_if_set_mcast_filter_table;
+	fman_if_stats_get;
+	fman_if_stats_reset;
+	fm_mac_add_exact_match_mac_addr;
+	fm_mac_rem_exact_match_mac_addr;
+	netcfg_acquire;
+	netcfg_release;
+	qman_create_fq;
+	qman_dequeue;
+	qman_dqrr_consume;
+	qman_enqueue_multi;
+	qman_init_fq;
+	qman_set_vdq;
+	qman_reserve_fqid_range;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
+	rte_dpaa_mem_ptov;
+	rte_dpaa_portal_init;
 
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 55e6793..34a7f4b 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -36,6 +36,12 @@
 #include <rte_bus.h>
 #include <rte_mempool.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 #define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
 
 #define DEV_TO_DPAA_DEVICE(ptr)	\
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 18/38] doc: add NXP DPAA PMD documentation
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (16 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 17/38] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-28 15:51   ` Ferruh Yigit
  2017-06-16  5:40 ` [PATCH 19/38] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
                   ` (20 subsequent siblings)
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                       |   2 +
 doc/guides/nics/dpaa.rst          | 360 ++++++++++++++++++++++++++++++++++++++
 doc/guides/nics/features/dpaa.ini |   8 +
 doc/guides/nics/index.rst         |   1 +
 4 files changed, 371 insertions(+)
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index 803c2af..c14b7b3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -392,6 +392,8 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: doc/guides/nics/dpaa.rst
+F: doc/guides/nics/features/dpaa.ini
 
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
new file mode 100644
index 0000000..cabd340
--- /dev/null
+++ b/doc/guides/nics/dpaa.rst
@@ -0,0 +1,360 @@
+..  BSD LICENSE
+    Copyright 2017 NXP.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+DPAA Poll Mode Driver
+=====================
+
+The DPAA NIC PMD (**librte_pmd_dpaa**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP DPAA** SoC family.
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+NXP DPAA (Data Path Acceleration Architecture - Gen 1)
+------------------------------------------------------
+
+This section provides an overview of the NXP DPAA architecture
+and how it is integrated into the DPDK.
+
+Contents summary
+
+- DPAA overview
+- DPAA driver architecture overview
+
+.. _dpaa_overview:
+
+DPAA Overview
+~~~~~~~~~~~~~
+
+Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.
+
+The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
+components on specific QorIQ series multicore processors. This architecture
+provides the infrastructure to support simplified sharing of networking
+interfaces and accelerators by multiple CPU cores, and the accelerators
+themselves.
+
+DPAA includes:
+
+- Cores
+- Network and packet I/O
+- Hardware offload accelerators
+- Infrastructure required to facilitate flow of packets between the components above
+
+Infrastructure components are:
+
+- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
+  It allows CPUs and other accelerators connected to the SoC datapath to
+  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
+  data exchange among CPUs and datapath accelerators.
+- The Buffer Manager (BMan) is a hardware buffer pool management block that
+  allows software and accelerators on the datapath to acquire and release
+  buffers in order to build frames.
+
+Hardware accelerators are:
+
+- SEC - Cryptographic accelerator
+- PME - Pattern matching engine
+
+The Network and packet I/O component:
+
+- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
+  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
+  distribution and policing. Each frame can be parsed and classified, and
+  the results may be attached to the frame. This metadata can be used to
+  select the particular QMan queue to which the packet is forwarded.
+
+
+DPAA DPDK - Poll Mode Driver Overview
+-------------------------------------
+
+This section provides an overview of the drivers for DPAA:
+
+* Bus driver and associated "DPAA infrastructure" drivers
+* Functional object drivers (such as Ethernet).
+
+A brief description of each driver is provided in the layout below as well
+as in the following sections.
+
+.. code-block:: console
+
+                                       +------------+
+                                       | DPDK DPAA  |
+                                       |    PMD     |
+                                       +-----+------+
+                                             |
+                                       +-----+------+       +---------------+
+                                       :  Ethernet  :.......| DPDK DPAA     |
+                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
+                   .                   +---+---+----+       |  (BMAN)       |
+                  .                        ^   |            +-----+---------+
+                 .                         |   |<enqueue,         .
+                .                          |   | dequeue>         .
+               .                           |   |                  .
+              .                        +---+---V----+             .
+             .      . . . . . . . . . .: Portal drv :             .
+            .      .                   :            :             .
+           .      .                    +-----+------+             .
+          .      .                     :   QMAN     :             .
+         .      .                      :  Driver    :             .
+    +----+------+-------+              +-----+------+             .
+    |   DPDK DPAA Bus   |                    |                    .
+    |   driver          |....................|.....................
+    |   /bus/dpaa       |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|========================
+                                            PHY
+    =========================================|========================
+
+In the above representation, solid lines represent components which interface
+with the DPDK RTE framework and dotted lines represent DPAA internal
+components.
+
+DPAA Bus driver
+~~~~~~~~~~~~~~~
+
+The DPAA bus driver is a ``rte_bus`` driver which scans the platform bus
+(using the device-tree) for DPAA devices.
+Key functions include:
+
+- Scanning and parsing the various objects and adding them to their respective
+  device list.
+- Probing available drivers against each scanned device.
+- Creating the necessary Ethernet instance before passing control to the PMD.
+
+DPAA NIC Driver (PMD)
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
+between the RTE framework and the DPAA internal components/drivers.
+
+- Once devices have been identified by the DPAA Bus, each device is
+  associated with the PMD.
+- The PMD is responsible for implementing the necessary glue layer between
+  the RTE APIs and the lower level QMan and FMan blocks.
+  The Ethernet driver is bound to a FMAN port and implements the interfaces
+  needed to connect the DPAA network interface to the network stack.
+  Each FMAN Port corresponds to a DPDK network interface.
+
+
+Features
+^^^^^^^^
+
+Features of the DPAA PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+
+DPAA Mempool Driver
+~~~~~~~~~~~~~~~~~~~
+
+DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
+Manager.
+
+- Using the standard RTE mempool operations API, the mempool driver
+  interfaces with RTE to service mempool creation, deletion, buffer
+  allocation and deallocation requests.
+- Each FMAN instance has a BMan pool attached to it during initialization.
+  Each Tx frame can be automatically released by hardware, if allocated from
+  this pool.
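+
+A minimal usage sketch (illustrative; it assumes the default mempool ops,
+``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS``, is set to ``dpaa``, so that
+``rte_pktmbuf_pool_create()`` transparently lands on this driver):
+
+.. code-block:: c
+
+   #include <rte_mbuf.h>
+   #include <rte_lcore.h>
+
+   static struct rte_mempool *
+   create_dpaa_pktmbuf_pool(void)
+   {
+           /* buffers for this pool come from a hardware BMan pool */
+           return rte_pktmbuf_pool_create("pkt_pool", 2048, 256, 0,
+                                          RTE_MBUF_DEFAULT_BUF_SIZE,
+                                          rte_socket_id());
+   }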
+
+
+Supported DPAA SoCs
+-------------------
+
+- LS1043A/LS1023A
+- LS1046A/LS1026A
+
+Prerequisites
+-------------
+
+There are three main prerequisites for executing the DPAA PMD on a DPAA
+compatible board:
+
+1. **ARM 64 Tool Chain**
+
+   For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
+
+2. **Linux Kernel**
+
+   It can be obtained from `NXP's Github hosting <https://github.com/qoriq-open-source/linux>`_.
+
+3. **Root file system**
+
+   Any *aarch64* supporting filesystem can be used. For example,
+   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
+   from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+As an alternative method, the DPAA PMD can also be executed using images
+provided as part of the SDK from NXP. The SDK includes all the above
+prerequisites necessary to bring up a DPAA board.
+
+The following dependencies are not part of DPDK and must be installed
+separately:
+
+- **NXP Linux SDK**
+
+  NXP Linux software development kit (SDK) includes support for family
+  of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+- **DPDK Extra Scripts**
+
+  DPAA based resources can be configured easily with the help of ready scripts
+  as provided in the DPDK Extra repository.
+
+  `DPDK Extras Scripts <https://github.com/qoriq-open-source/dpdk-extras>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- Supported architectures:  **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
+  to setup the basic DPDK environment.
+
+.. note::
+
+   Some parts of the dpaa bus code (the qbman and fman library routines)
+   are dual licensed (BSD & GPLv2).
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``n``)
+
+  Toggle compilation of the ``librte_bus_dpaa`` driver. It is enabled by
+  default only in the defconfig_arm64-dpaa-* configs.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``n``)
+
+  Toggle compilation of the ``librte_pmd_dpaa`` driver. It is enabled by
+  default only in the defconfig_arm64-dpaa-* configs.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT`` (default ``n``)
+
+  Toggle display of initialization related messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_RX`` (default ``n``)
+
+  Toggle display of receive fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_TX`` (default ``n``)
+
+  Toggle display of transmit fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_TX_FREE`` (default ``n``)
+
+  Toggle display of transmit fast path buffer free run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY`` (default ``n``)
+
+  Toggle display of each Tx/Rx frame contents (dump)
+
+- ``CONFIG_RTE_LIBRTE_DPAA_CHECKING`` (default ``n``)
+
+  Toggle lower level driver validations (asserts)
+
+- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (default ``dpaa``)
+
+  This is not a DPAA specific configuration - it is a generic RTE config.
+  For optimal performance and hardware utilization, it is expected that
+  the DPAA Mempool driver is used for mempools. For that, this
+  configuration needs to be set to ``dpaa``.
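+
+  For example (illustrative), in the target build configuration:
+
+  .. code-block:: console
+
+     CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"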
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+#. Running testpmd:
+
+   Follow instructions available in the document
+   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+   to run testpmd.
+
+   Example output:
+
+   .. code-block:: console
+
+      ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
+        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
+
+      .....
+      EAL: Registered [pci] bus.
+      EAL: Registered [dpaa] bus.
+      EAL: Detected 4 lcore(s)
+      .....
+      EAL: dpaa: Bus scan completed
+      .....
+      Configuring Port 0 (socket 0)
+      Port 0: 00:00:00:00:00:01
+      Configuring Port 1 (socket 0)
+      Port 1: 00:00:00:00:00:02
+      .....
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Port 1 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+Limitations
+-----------
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+DPAA drivers for DPDK can only work on NXP SoCs as listed in the
+``Supported DPAA SoCs``.
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA SoC family supports a maximum frame size of 10240 bytes (jumbo
+frames). This value is fixed and cannot be changed. So, even when the
+``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a
+value lower than 10240, frames up to 10240 bytes can still reach the host
+interface.
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
new file mode 100644
index 0000000..9e8befc
--- /dev/null
+++ b/doc/guides/nics/features/dpaa.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'dpaa' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+ARMv8                = Y
+Usage doc            = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 240d082..6fc8eaf 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -42,6 +42,7 @@ Network Interface Controller Drivers
     bnx2x
     bnxt
     cxgbe
+    dpaa
     dpaa2
     e1000em
     ena
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 19/38] mempool/dpaa: add support for NXP DPAA Mempool
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (17 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 18/38] doc: add NXP DPAA PMD documentation Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 20/38] maintainers: claim ownership of DPAA Mempool driver Shreyansh Jain
                   ` (19 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This Mempool driver works with DPAA BMan hardware block. This block
manages data buffers in memory, and provides an efficient interface with
other hardware and software components for buffer requests.

This patch adds support for BMan. Compilation will be enabled in
subsequent patches.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/mempool/dpaa/Makefile                     |  65 ++++++
 drivers/mempool/dpaa/dpaa_mempool.c               | 265 ++++++++++++++++++++++
 drivers/mempool/dpaa/dpaa_mempool.h               |  78 +++++++
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |   6 +
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c          |   2 +-
 5 files changed, 415 insertions(+), 1 deletion(-)
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map

diff --git a/drivers/mempool/dpaa/Makefile b/drivers/mempool/dpaa/Makefile
new file mode 100644
index 0000000..45a1f7b
--- /dev/null
+++ b/drivers/mempool/dpaa/Makefile
@@ -0,0 +1,65 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
+CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
+
+# versioning export map
+EXPORT_MAP := rte_mempool_dpaa_version.map
+
+# Library version
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa_mempool.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
new file mode 100644
index 0000000..ba98d48
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -0,0 +1,265 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <dpaa_mempool.h>
+
+struct pool_info_entry rte_dpaa_pool_table[DPAA_MAX_BPOOLS];
+
+static void
+dpaa_buf_free(struct pool_info_entry *bp_info, uint64_t addr)
+{
+	struct bm_buffer buf;
+	int ret;
+
+	PMD_TX_FREE_LOG(DEBUG, "Free 0x%lx to bpid: %d", addr, bp_info->bpid);
+
+	bm_buffer_set64(&buf, addr);
+retry:
+	ret = bman_release(bp_info->bp, &buf, 1, 0);
+	if (ret) {
+		PMD_TX_LOG(DEBUG, " BMAN busy. Retrying...");
+		cpu_spin(CPU_SPIN_BACKOFF_CYCLES);
+		goto retry;
+	}
+}
+
+static int
+dpaa_mbuf_create_pool(struct rte_mempool *mp)
+{
+	struct bman_pool *bp;
+	struct bm_buffer bufs[8];
+	uint8_t bpid;
+	int num_bufs = 0, ret = 0;
+	struct bman_pool_params params = {
+		.flags = BMAN_POOL_FLAG_DYNAMIC_BPID
+	};
+
+	PMD_INIT_FUNC_TRACE();
+
+	bp = bman_new_pool(&params);
+	if (!bp) {
+		PMD_DRV_LOG(ERR, "bman_new_pool() failed");
+		return -ENODEV;
+	}
+	bpid = bman_get_params(bp)->bpid;
+
+	/* Drain the pool of anything already in it. */
+	do {
+		/* Acquire is all-or-nothing, so we drain in 8s,
+		 * then in 1s for the remainder.
+		 */
+		if (ret != 1)
+			ret = bman_acquire(bp, bufs, 8, 0);
+		if (ret < 8)
+			ret = bman_acquire(bp, bufs, 1, 0);
+		if (ret > 0)
+			num_bufs += ret;
+	} while (ret > 0);
+	if (num_bufs)
+		PMD_DRV_LOG(WARNING, "drained %u bufs from BPID %d",
+			    num_bufs, bpid);
+
+	rte_dpaa_pool_table[bpid].mp = mp;
+	rte_dpaa_pool_table[bpid].bpid = bpid;
+	rte_dpaa_pool_table[bpid].size = mp->elt_size;
+	rte_dpaa_pool_table[bpid].bp = bp;
+	rte_dpaa_pool_table[bpid].meta_data_size =
+		sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp);
+	rte_dpaa_pool_table[bpid].dpaa_ops_index = mp->ops_index;
+	mp->pool_data = (void *)&rte_dpaa_pool_table[bpid];
+
+	PMD_DRV_LOG(INFO, "BMAN pool created for bpid =%d", bpid);
+	return 0;
+}
+
+static void
+dpaa_mbuf_free_pool(struct rte_mempool *mp)
+{
+	struct pool_info_entry *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	PMD_INIT_FUNC_TRACE();
+
+	bman_free_pool(bp_info->bp);
+	PMD_DRV_LOG(INFO, "BMAN pool freed for bpid =%d", bp_info->bpid);
+}
+
+static int
+dpaa_mbuf_free_bulk(struct rte_mempool *pool,
+		    void *const *obj_table,
+		    unsigned int n)
+{
+	struct pool_info_entry *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+	int ret;
+	unsigned int i = 0;
+
+	PMD_TX_FREE_LOG(DEBUG, " Request to free %d buffers in bpid = %d",
+		    n, bp_info->bpid);
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "rte_dpaa_portal_init failed "
+			"with ret: %d", ret);
+		return 0;
+	}
+
+	while (i < n) {
+		dpaa_buf_free(bp_info, (uint64_t)rte_mempool_virt2phy(pool,
+			      obj_table[i]) + bp_info->meta_data_size);
+		i = i + 1;
+	}
+
+	PMD_TX_FREE_LOG(DEBUG, " freed %d buffers in bpid =%d",
+		    n, bp_info->bpid);
+
+	return 0;
+}
+
+static int
+dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
+		     void **obj_table,
+		     unsigned int count)
+{
+	struct rte_mbuf **m = (struct rte_mbuf **)obj_table;
+	struct bm_buffer bufs[DPAA_MBUF_MAX_ACQ_REL];
+	struct pool_info_entry *bp_info;
+	void *bufaddr;
+	int i, ret;
+	unsigned int n = 0;
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+
+	PMD_RX_LOG(DEBUG, " Request to alloc %d buffers in bpid = %d",
+		    count, bp_info->bpid);
+
+	if (unlikely(count >= (RTE_MEMPOOL_CACHE_MAX_SIZE * 2))) {
+		PMD_DRV_LOG(ERR, "Unable to allocate requested (%u) buffers",
+			    count);
+		return -1;
+	}
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "rte_dpaa_portal_init failed with "
+			"ret: %d", ret);
+		return 0;
+	}
+
+	while (n < count) {
+		/* Acquire is all-or-nothing, so we acquire in chunks of
+		 * DPAA_MBUF_MAX_ACQ_REL (8), then the remainder.
+		 */
+		if ((count - n) > DPAA_MBUF_MAX_ACQ_REL) {
+			ret = bman_acquire(bp_info->bp, bufs,
+					   DPAA_MBUF_MAX_ACQ_REL, 0);
+		} else {
+			ret = bman_acquire(bp_info->bp, bufs, count - n, 0);
+		}
+		/* In case fewer than the requested number of buffers are
+		 * available in the pool, bman_acquire() returns 0.
+		 */
+		if (ret <= 0) {
+			PMD_DRV_LOG(DEBUG, "Buffer acquire failed with"
+				    " err code: %d", ret);
+			/* The API expects the exact number of requested
+			 * buffers. Release all buffers allocated so far.
+			 */
+			dpaa_mbuf_free_bulk(pool, obj_table, n);
+			return -1;
+		}
+		/* assigning mbuf from the acquired objects */
+		for (i = 0; (i < ret) && bufs[i].addr; i++) {
+			/* TODO-errata - observed that bufs may be NULL,
+			 * i.e. first buffer is valid, remaining 6 buffers
+			 * may be NULL.
+			 */
+			bufaddr = (void *)rte_dpaa_mem_ptov(bufs[i].addr);
+			m[n] = (struct rte_mbuf *)((char *)bufaddr
+						- bp_info->meta_data_size);
+			rte_mbuf_refcnt_set(m[n], 1);
+			PMD_DRV_LOG(DEBUG, "Acquired %p address %p from BMAN",
+				    (void *)bufaddr, (void *)m[n]);
+			n++;
+		}
+	}
+
+	PMD_RX_LOG(DEBUG, " allocated %d buffers from bpid =%d",
+		    n, bp_info->bpid);
+	return 0;
+}
+
+static unsigned int
+dpaa_mbuf_get_count(const struct rte_mempool *mp)
+{
+	struct pool_info_entry *bp_info;
+
+	PMD_INIT_FUNC_TRACE();
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	return bman_query_free_buffers(bp_info->bp);
+}
+
+struct rte_mempool_ops dpaa_mpool_ops = {
+	.name = "dpaa",
+	.alloc = dpaa_mbuf_create_pool,
+	.free = dpaa_mbuf_free_pool,
+	.enqueue = dpaa_mbuf_free_bulk,
+	.dequeue = dpaa_mbuf_alloc_bulk,
+	.get_count = dpaa_mbuf_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
diff --git a/drivers/mempool/dpaa/dpaa_mempool.h b/drivers/mempool/dpaa/dpaa_mempool.h
new file mode 100644
index 0000000..b097667
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_MEMPOOL_H__
+#define __DPAA_MEMPOOL_H__
+
+/* System headers */
+#include <stdio.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <unistd.h>
+
+#include <rte_mempool.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <fsl_usd.h>
+#include <fsl_bman.h>
+
+#define CPU_SPIN_BACKOFF_CYCLES               512
+
+/* total number of bpools on SoC */
+#define DPAA_MAX_BPOOLS	256
+
+/* Maximum release/acquire from BMAN */
+#define DPAA_MBUF_MAX_ACQ_REL  8
+
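+/* Per-bpid bookkeeping entry: ties an rte_mempool to its backing BMan
+ * pool and caches the mbuf metadata size used when converting between
+ * buffer and mbuf addresses.
+ */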
+struct pool_info_entry {
+	struct rte_mempool *mp;
+	struct bman_pool *bp;
+	uint32_t bpid;
+	uint32_t size;
+	uint32_t meta_data_size;
+	int32_t dpaa_ops_index;
+};
+
+#define DPAA_MEMPOOL_TO_POOL_INFO(__mp) \
+	(struct pool_info_entry *)__mp->pool_data
+
+#define DPAA_MEMPOOL_TO_BPID(__mp) \
+	((struct pool_info_entry *)__mp->pool_data)->bpid
+
+extern struct pool_info_entry rte_dpaa_pool_table[DPAA_MAX_BPOOLS];
+
+#define DPAA_BPID_TO_POOL_INFO(__bpid) (&rte_dpaa_pool_table[__bpid])
+
+#endif
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
new file mode 100644
index 0000000..5be8f56
--- /dev/null
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -0,0 +1,6 @@
+DPDK_17.08 {
+	global:
+
+	rte_dpaa_pool_table;
+
+};
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 5a5d6aa..60dd1c0 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -294,7 +294,7 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 			/* Releasing all buffers allocated */
 			rte_dpaa2_mbuf_release(pool, obj_table, bpid,
 					   bp_info->meta_data_size, n);
-			return ret;
+			return -1;
 		}
 		/* assigning mbuf from the acquired objects */
 		for (i = 0; (i < ret) && bufs[i]; i++) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread
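
For illustration only (not part of the patch): once these ops are
registered, an application could attach them to a pktmbuf pool
explicitly. The pool name, counts and sizes below are arbitrary:

	#include <rte_mempool.h>
	#include <rte_mbuf.h>

	struct rte_mempool *mp;

	/* element size = mbuf header + headroom + data room */
	mp = rte_mempool_create_empty("dpaa_pkt_pool", 4096,
			sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM + 2048,
			256, sizeof(struct rte_pktmbuf_pool_private),
			rte_socket_id(), 0);
	if (mp != NULL) {
		/* select the hardware-offloaded ops before populating */
		rte_mempool_set_ops_byname(mp, "dpaa", NULL);
		rte_pktmbuf_pool_init(mp, NULL);
		rte_mempool_populate_default(mp);
		rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
	}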

* [PATCH 20/38] maintainers: claim ownership of DPAA Mempool driver
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (18 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 19/38] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 21/38] drivers: enable compilation " Shreyansh Jain
                   ` (18 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c14b7b3..ec5eb00 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -392,6 +392,7 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: drivers/mempool/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 21/38] drivers: enable compilation of DPAA Mempool driver
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (19 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 20/38] maintainers: claim ownership of DPAA Mempool driver Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 22/38] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
                   ` (17 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch enables compilation of the DPAA mempool driver and adds the
necessary configuration to the DPAA-specific config file.
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=dpaa is also set so that
applications use the DPAA mempool by default.
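
With that default in place, a plain pktmbuf pool creation is
transparently backed by BMan - no explicit ops selection is needed.
A minimal sketch (counts and sizes are arbitrary):

	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE,
				     rte_socket_id());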

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       | 1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 3 +++
 drivers/mempool/Makefile                 | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/config/common_base b/config/common_base
index 56bd27c..62a59af 100644
--- a/config/common_base
+++ b/config/common_base
@@ -303,6 +303,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index a189ad2..50901f4 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -45,3 +45,6 @@ CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_TX=n
 
+# NXP DPAA Mempool
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index 8fd40e1..595f717 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -33,6 +33,8 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mempool librte_ring
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 22/38] net/dpaa: add NXP DPAA PMD driver skeleton
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (20 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 21/38] drivers: enable compilation " Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-28 15:41   ` Ferruh Yigit
  2017-06-16  5:40 ` [PATCH 23/38] config: enable NXP DPAA PMD compilation Shreyansh Jain
                   ` (16 subsequent siblings)
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Add a PMD skeleton which is probed after the bus device scan. Device
initialization is still a stub: dpaa_eth_dev_init() returns -1, so the
probe does not yet succeed. Subsequent patches fill in the
implementation.
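
For illustration, the intended flow is: rte_eal_init() triggers the
DPAA bus scan, the bus matches FSL_DPAA_ETH devices against this
driver and invokes rte_dpaa_probe(). A minimal sketch of the
application side (hypothetical, for this skeleton stage):

	int ret = rte_eal_init(argc, argv);

	if (ret < 0)
		rte_exit(EXIT_FAILURE, "EAL init failed\n");
	/* dpaa_eth_dev_init() still returns -1 here, so probed
	 * ports are not yet usable.
	 */
	printf("ethdev ports: %u\n", rte_eth_dev_count());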

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                               |   1 +
 drivers/net/dpaa/Makefile                 |  64 +++++++++
 drivers/net/dpaa/dpaa_ethdev.c            | 222 ++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h            | 128 +++++++++++++++++
 drivers/net/dpaa/rte_pmd_dpaa_version.map |   4 +
 5 files changed, 419 insertions(+)
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index ec5eb00..02cc4c0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -393,6 +393,7 @@ M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
 F: drivers/mempool/dpaa/
+F: drivers/net/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
new file mode 100644
index 0000000..8fcde26
--- /dev/null
+++ b/drivers/net/dpaa/Makefile
@@ -0,0 +1,64 @@
+#   BSD LICENSE
+#
+#   Copyright 2017 NXP.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_SDK_DPAA=$(RTE_SDK)/drivers/net/dpaa
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+CFLAGS += -I$(RTE_SDK_DPAA)/
+CFLAGS += -I$(RTE_SDK_DPAA)/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
+
+EXPORT_MAP := rte_pmd_dpaa_version.map
+
+LIBABIVER := 1
+
+# Interfaces with DPDK
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
new file mode 100644
index 0000000..2401058
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -0,0 +1,222 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <dpaa_ethdev.h>
+
+/* Keep track of whether QMAN and BMAN have been globally initialized */
+static int is_global_init;
+
+static int
+dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+
+static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	/* Change tx callback to the real one */
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = NULL;
+}
+
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static struct eth_dev_ops dpaa_devops = {
+	.dev_configure		  = dpaa_eth_dev_configure,
+	.dev_start		  = dpaa_eth_dev_start,
+	.dev_stop		  = dpaa_eth_dev_stop,
+	.dev_close		  = dpaa_eth_dev_close,
+};
+
+/* Initialise a network interface */
+static int dpaa_eth_dev_init(struct rte_eth_dev *eth_dev __rte_unused)
+{
+	int dev_id;
+	struct rte_dpaa_device *dpaa_device;
+	struct dpaa_if *dpaa_intf;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
+	dev_id = dpaa_device->id.dev_id;
+	dpaa_intf = eth_dev->data->dev_private;
+
+	dpaa_intf->name = dpaa_device->name;
+
+	dpaa_intf->ifid = dev_id;
+
+	eth_dev->dev_ops = &dpaa_devops;
+
+	return -1;
+}
+
+static int
+rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
+			   struct rte_dpaa_device *dpaa_dev)
+{
+	int diag;
+	int ret;
+	struct rte_eth_dev *eth_dev;
+	char ethdev_name[RTE_ETH_NAME_MAX_LEN];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!is_global_init) {
+		/* One time load of Qman/Bman drivers */
+		ret = qman_global_init();
+		if (ret) {
+			PMD_DRV_LOG(ERR, "QMAN initialization failed: %d",
+				    ret);
+			return ret;
+		}
+		ret = bman_global_init();
+		if (ret) {
+			PMD_DRV_LOG(ERR, "BMAN initialization failed: %d",
+				    ret);
+			return ret;
+		}
+
+		is_global_init = 1;
+	}
+
+	sprintf(ethdev_name, "%s", dpaa_dev->name);
+
+	ret = rte_dpaa_portal_init((void *)1);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Unable to initialize portal");
+		return ret;
+	}
+
+	eth_dev = rte_eth_dev_allocate(ethdev_name);
+	if (eth_dev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		eth_dev->data->dev_private = rte_zmalloc(
+						"ethdev private structure",
+						sizeof(struct dpaa_if),
+						RTE_CACHE_LINE_SIZE);
+		if (!eth_dev->data->dev_private) {
+			PMD_INIT_LOG(CRIT, "Cannot allocate memory for"
+				     " private port data\n");
+			rte_eth_dev_release_port(eth_dev);
+			return -ENOMEM;
+		}
+	}
+
+	eth_dev->device = &dpaa_dev->device;
+	dpaa_dev->eth_dev = eth_dev;
+	eth_dev->data->rx_mbuf_alloc_failed = 0;
+
+	/* Invoke PMD device initialization function */
+	diag = dpaa_eth_dev_init(eth_dev);
+	if (diag) {
+		PMD_DRV_LOG(ERR, "Eth dev initialization failed: %d", diag);
+		return diag;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Eth dev initialized: %d\n", diag);
+
+	return 0;
+}
+
+static int
+rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
+{
+	struct rte_eth_dev *eth_dev;
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev = dpaa_dev->eth_dev;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+
+	return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_pmd = {
+	.driver_type = FSL_DPAA_ETH,
+	.probe = rte_dpaa_probe,
+	.remove = rte_dpaa_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
new file mode 100644
index 0000000..8aeaebf
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -0,0 +1,128 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2014-2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_ETHDEV_H__
+#define __DPAA_ETHDEV_H__
+
+/* System headers */
+#include <stdbool.h>
+#include <rte_ethdev.h>
+
+#include <rte_dpaa_logs.h>
+
+
+#define DPAA_MBUF_HW_ANNOTATION		64
+#define DPAA_FD_PTA_SIZE		64
+
+#if (DPAA_MBUF_HW_ANNOTATION + DPAA_FD_PTA_SIZE) > RTE_PKTMBUF_HEADROOM
+#error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM"
+#endif
+
+/* we will re-use the HEADROOM for annotation in RX */
+#define DPAA_HW_BUF_RESERVE	0
+#define DPAA_PACKET_LAYOUT_ALIGN	64
+
+/* Alignment to use for cpu-local structs to avoid coherency problems. */
+#define MAX_CACHELINE			64
+
+#define DPAA_MIN_RX_BUF_SIZE 512
+#define DPAA_MAX_RX_PKT_LEN  10240
+
+/* RX queue tail drop threshold
+ * currently considering 32 KB packets.
+ */
+#define CONG_THRESHOLD_RX_Q  (32 * 1024)
+
+/* max MAC filters for memac (8), including the primary MAC address */
+#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
+
+/* Maximum number of slots available in the Tx ring */
+#define MAX_TX_RING_SLOTS	8
+
+/* PCD frame queues */
+#define DPAA_PCD_FQID_START		0x400
+#define DPAA_PCD_FQID_MULTIPLIER	0x100
+#define DPAA_DEFAULT_NUM_PCD_QUEUES	1
+
+#define DPAA_IF_TX_PRIORITY		3
+#define DPAA_IF_RX_PRIORITY		4
+#define DPAA_IF_DEBUG_PRIORITY		7
+
+#define DPAA_IF_RX_ANNOTATION_STASH	1
+#define DPAA_IF_RX_DATA_STASH		1
+#define DPAA_IF_RX_CONTEXT_STASH		0
+
+/* Each "debug" FQ is represented by one of these */
+#define DPAA_DEBUG_FQ_RX_ERROR   0
+#define DPAA_DEBUG_FQ_TX_ERROR   1
+
+#define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
+		PKT_TX_IP_CKSUM |                \
+		PKT_TX_TCP_CKSUM |               \
+		PKT_TX_UDP_CKSUM)
+
+
+/* DPAA Frame descriptor macros */
+
+#define DPAA_FD_CMD_FCO			0x80000000
+/**< Frame queue Context Override */
+#define DPAA_FD_CMD_RPD			0x40000000
+/**< Read Prepended Data */
+#define DPAA_FD_CMD_UPD			0x20000000
+/**< Update Prepended Data */
+#define DPAA_FD_CMD_DTC			0x10000000
+/**< Do IP/TCP/UDP Checksum */
+#define DPAA_FD_CMD_DCL4C		0x10000000
+/**< Didn't calculate L4 Checksum */
+#define DPAA_FD_CMD_CFQ			0x00ffffff
+/**< Confirmation Frame Queue */
+
+/* Configuration variables exported from DPAA bus */
+extern struct netcfg_info *dpaa_netcfg;
+
+/* Each network interface is represented by one of these */
+struct dpaa_if {
+	int valid;
+	char *name;
+	const struct fm_eth_port_cfg *cfg;
+	struct qman_fq *rx_queues;
+	struct qman_fq *tx_queues;
+	struct qman_fq debug_queues[2];
+	uint16_t nb_rx_queues;
+	uint16_t nb_tx_queues;
+	uint32_t ifid;
+	struct fman_if *fif;
+	struct pool_info_entry *bp_info;
+	struct rte_eth_fc_conf *fc_conf;
+};
+
+#endif
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
new file mode 100644
index 0000000..b6d2840
--- /dev/null
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -0,0 +1,4 @@
+DPDK_17.08 {
+
+	local: *;
+};
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 23/38] config: enable NXP DPAA PMD compilation
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (21 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 22/38] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
                   ` (15 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       |  1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 10 ++++++++++
 drivers/net/Makefile                     |  2 ++
 mk/rte.app.mk                            |  5 +++++
 4 files changed, 18 insertions(+)

diff --git a/config/common_base b/config/common_base
index 62a59af..9fd8e7b 100644
--- a/config/common_base
+++ b/config/common_base
@@ -304,6 +304,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
+CONFIG_RTE_LIBRTE_DPAA_PMD=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 50901f4..4530e18 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -37,6 +37,13 @@
 CONFIG_RTE_MACHINE="dpaa"
 CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
 
+#
+# Compile Environment Abstraction Layer
+#
+CONFIG_RTE_MAX_LCORE=4
+CONFIG_RTE_MAX_NUMA_NODES=1
+CONFIG_RTE_PKTMBUF_HEADROOM=128
+
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
@@ -48,3 +55,6 @@ CONFIG_RTE_LIBRTE_DPAA_DEBUG_TX=n
 # NXP DPAA Mempool
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
 CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
+
+# Compile software NXP DPAA PMD
+CONFIG_RTE_LIBRTE_DPAA_PMD=y
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 35ed813..efd1a34 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -51,6 +51,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DEPDIRS-bonding = $(core-libs) librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
 DEPDIRS-cxgbe = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += e1000
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index bcaf1b3..80e5530 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -115,6 +115,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD)      += -lrte_pmd_cxgbe
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_pmd_dpaa
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_pmd_dpaa2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_E1000_PMD)      += -lrte_pmd_e1000
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ENA_PMD)        += -lrte_pmd_ena
@@ -178,6 +179,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_bus_fslmc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_mempool_dpaa2
 endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+endif
+
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
 
 _LDLIBS-y += --no-whole-archive
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (22 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 23/38] config: enable NXP DPAA PMD compilation Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-28 15:45   ` Ferruh Yigit
  2017-06-16  5:40 ` [PATCH 25/38] net/dpaa: add support for MTU update Shreyansh Jain
                   ` (14 subsequent siblings)
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/Makefile         |   4 +
 drivers/net/dpaa/dpaa_ethdev.c    | 279 ++++++++++++++++++++++++++++++++-
 drivers/net/dpaa/dpaa_ethdev.h    |   6 +
 drivers/net/dpaa/dpaa_rxtx.c      | 313 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      |  61 ++++++++
 mk/rte.app.mk                     |   1 +
 7 files changed, 660 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 9e8befc..29ba47e 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,5 +4,6 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Queue start/stop     = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index 8fcde26..06b63fc 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -44,11 +44,13 @@ else
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 endif
+CFLAGS += -Wno-pointer-arith
 
 CFLAGS += -I$(RTE_SDK_DPAA)/
 CFLAGS += -I$(RTE_SDK_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
 
@@ -58,7 +60,9 @@ LIBABIVER := 1
 
 # Interfaces with DPDK
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
 
 LDLIBS += -lrte_bus_dpaa
+LDLIBS += -lrte_mempool_dpaa
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 2401058..5a8d8af 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -62,8 +62,15 @@
 
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
+#include <dpaa_mempool.h>
 
 #include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <fsl_fman.h>
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -79,20 +86,104 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
 	PMD_INIT_FUNC_TRACE();
 
 	/* Change tx callback to the real one */
-	dev->tx_pkt_burst = NULL;
+	dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	fman_if_enable_rx(dpaa_intf->fif);
 
 	return 0;
 }
 
 static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = NULL;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_disable_rx(dpaa_intf->fif);
+	dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
+}
+
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+}
+
+static
+int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+			    unsigned int socket_id __rte_unused,
+			    const struct rte_eth_rxconf *rx_conf __rte_unused,
+			    struct rte_mempool *mp)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	PMD_DRV_LOG(INFO, "Rx queue setup for queue index: %d", queue_idx);
+
+	if (!dpaa_intf->bp_info || dpaa_intf->bp_info->mp != mp) {
+		struct fman_if_ic_params icp;
+		uint32_t fd_offset;
+		uint32_t bp_size;
+
+		if (!mp->pool_data) {
+			PMD_DRV_LOG(ERR, "not an offloaded buffer pool");
+			return -1;
+		}
+		dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+		memset(&icp, 0, sizeof(icp));
+		/* set the IC transfer parameters to their default values */
+		icp.iciof = DEFAULT_ICIOF;
+		icp.iceof = DEFAULT_RX_ICEOF;
+		icp.icsz = DEFAULT_ICSZ;
+		fman_if_set_ic_params(dpaa_intf->fif, &icp);
+
+		fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
+		fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+
+		/* Buffer pool size should be equal to the dataroom size */
+		bp_size = rte_pktmbuf_data_room_size(mp);
+		fman_if_set_bp(dpaa_intf->fif, mp->size,
+			       dpaa_intf->bp_info->bpid, bp_size);
+		dpaa_intf->valid = 1;
+		PMD_DRV_LOG(INFO, "if =%s - fd_offset = %d offset = %d",
+			    dpaa_intf->name, fd_offset,
+			fman_if_get_fdoff(dpaa_intf->fif));
+	}
+	dev->data->rx_queues[queue_idx] = &dpaa_intf->rx_queues[queue_idx];
+
+	return 0;
+}
+
+static
+void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
 }
 
-static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+static
+int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+		unsigned int socket_id __rte_unused,
+		const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	PMD_DRV_LOG(INFO, "Tx queue setup for queue index: %d", queue_idx);
+	dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+	return 0;
+}
+
+static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 }
@@ -102,28 +193,206 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+
+	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
+	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
+	.rx_queue_release	  = dpaa_eth_rx_queue_release,
+	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 };
 
+/* Initialise an Rx FQ */
+static int dpaa_rx_queue_init(struct qman_fq *fq,
+			      uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "reserve rx fqid %d failed with ret: %d",
+			fqid, ret);
+		return -EINVAL;
+	}
+	PMD_DRV_LOG(DEBUG, "creating rx fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "create rx fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTA;
+
+	opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
+			   QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_a.stashing.exclusive = 0;
+	opts.fqd.context_a.stashing.annotation_cl = DPAA_IF_RX_ANNOTATION_STASH;
+	opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+	opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+
+	/* Enable tail drop */
+	opts.we_mask = opts.we_mask | QM_INITFQ_WE_TDTHRESH;
+	opts.fqd.fq_ctrl = opts.fqd.fq_ctrl | QM_FQCTRL_TDE;
+	qm_fqd_taildrop_set(&opts.fqd.td, CONG_THRESHOLD_RX_Q, 1);
+
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		PMD_DRV_LOG(ERR, "init rx fqid %d failed with ret: %d",
+			fqid, ret);
+	return ret;
+}
+
+/* Initialise a Tx FQ */
+static int dpaa_tx_queue_init(struct qman_fq *fq,
+			      struct fman_if *fman_intf)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
+			     QMAN_FQ_FLAG_TO_DCPORTAL, fq);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "create tx fq failed with ret: %d", ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTB | QM_INITFQ_WE_CONTEXTA;
+	opts.fqd.dest.channel = fman_intf->tx_channel_id;
+	opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_b = 0;
+	/* no tx-confirmation */
+	opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+	opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+	PMD_DRV_LOG(DEBUG, "init tx fq %p, fqid %d", fq, fq->fqid);
+	ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
+	if (ret)
+		PMD_DRV_LOG(ERR, "init tx fqid %d failed %d", fq->fqid, ret);
+	return ret;
+}
+
 /* Initialise a network interface */
-static int dpaa_eth_dev_init(struct rte_eth_dev *eth_dev __rte_unused)
+static int dpaa_eth_dev_init(struct rte_eth_dev *eth_dev)
 {
+	int num_cores, num_rx_fqs, fqid;
+	int loop, ret = 0;
 	int dev_id;
 	struct rte_dpaa_device *dpaa_device;
 	struct dpaa_if *dpaa_intf;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+	struct fman_if_bpool *bp, *tmp_bp;
 
 	PMD_INIT_FUNC_TRACE();
 
 	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
 	dev_id = dpaa_device->id.dev_id;
 	dpaa_intf = eth_dev->data->dev_private;
+	cfg = &dpaa_netcfg->port_cfg[dev_id];
+	fman_intf = cfg->fman_if;
 
 	dpaa_intf->name = dpaa_device->name;
 
+	/* save fman_if & cfg in the interface structure */
+	dpaa_intf->fif = fman_intf;
 	dpaa_intf->ifid = dev_id;
+	dpaa_intf->cfg = cfg;
+
+	/* Initialize Rx FQ's */
+	if (getenv("DPAA_NUM_RX_QUEUES"))
+		num_rx_fqs = atoi(getenv("DPAA_NUM_RX_QUEUES"));
+	else
+		num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
+
+	/* Each device cannot have more than DPAA_PCD_FQID_MULTIPLIER Rx queues */
+	if (num_rx_fqs <= 0 || num_rx_fqs > DPAA_PCD_FQID_MULTIPLIER) {
+		PMD_INIT_LOG(ERR, "Invalid number of RX queues\n");
+		return -EINVAL;
+	}
+
+	dpaa_intf->rx_queues = rte_zmalloc(NULL,
+		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+	for (loop = 0; loop < num_rx_fqs; loop++) {
+		fqid = DPAA_PCD_FQID_START + dpaa_intf->ifid *
+			DPAA_PCD_FQID_MULTIPLIER + loop;
+		ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop], fqid);
+		if (ret)
+			return ret;
+		dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_rx_queues = num_rx_fqs;
+
+	/* Initialise Tx FQs. Have as many Tx FQ's as number of cores */
+	num_cores = rte_lcore_count();
+	dpaa_intf->tx_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+		num_cores, MAX_CACHELINE);
+	if (!dpaa_intf->tx_queues)
+		return -ENOMEM;
+
+	for (loop = 0; loop < num_cores; loop++) {
+		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
+					 fman_intf);
+		if (ret)
+			return ret;
+		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_tx_queues = num_cores;
 
+	PMD_DRV_LOG(DEBUG, "all fqs created");
+
+	/* reset bpool list, initialize bpool dynamically */
+	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+
+	/* Populate ethdev structure */
 	eth_dev->dev_ops = &dpaa_devops;
+	eth_dev->data->nb_rx_queues = dpaa_intf->nb_rx_queues;
+	eth_dev->data->nb_tx_queues = dpaa_intf->nb_tx_queues;
+	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+	eth_dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
+		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to "
+						"store MAC addresses",
+				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
+		return -ENOMEM;
+	}
 
-	return -1;
+	/* copy the primary mac address */
+	memcpy(eth_dev->data->mac_addrs[0].addr_bytes,
+		fman_intf->mac_addr.addr_bytes,
+		ETHER_ADDR_LEN);
+
+	PMD_DRV_LOG(DEBUG, "interface %s macaddr:", dpaa_device->name);
+	for (loop = 0; loop < ETHER_ADDR_LEN; loop++) {
+		if (loop != (ETHER_ADDR_LEN - 1))
+			printf("%02x:", fman_intf->mac_addr.addr_bytes[loop]);
+		else
+			printf("%02x\n", fman_intf->mac_addr.addr_bytes[loop]);
+	}
+
+	/* Disable RX mode */
+	fman_if_discard_rx_errors(fman_intf);
+	fman_if_disable_rx(fman_intf);
+	/* Disable promiscuous mode */
+	fman_if_promiscuous_disable(fman_intf);
+	/* Disable multicast */
+	fman_if_reset_mcast_filter_table(fman_intf);
+	/* Reset interface statistics */
+	fman_if_stats_reset(fman_intf);
+
+	return 0;
 }
 
 static int
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 8aeaebf..da7f3be 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -38,7 +38,13 @@
 #include <rte_ethdev.h>
 
 #include <rte_dpaa_logs.h>
+#include <dpaa_mempool.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
 
 #define DPAA_MBUF_HW_ANNOTATION		64
 #define DPAA_FD_PTA_SIZE		64
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
new file mode 100644
index 0000000..d2ef513
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -0,0 +1,313 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <limits.h>
+#include <sched.h>
+#include <pthread.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+
+#include "dpaa_ethdev.h"
+#include "dpaa_rxtx.h"
+#include <rte_dpaa_bus.h>
+#include <dpaa_mempool.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
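+/* Pack a single-segment mbuf into a contiguous-format frame descriptor:
+ * format, data offset and length share the 32-bit 'opaque' word (see the
+ * DPAA_FD_*_SHIFT/MASK definitions in dpaa_rxtx.h).
+ */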
+#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
+	do { \
+		(_fd)->cmd = 0; \
+		(_fd)->opaque_addr = 0; \
+		(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
+		(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
+		(_fd)->opaque |= (_mbuf)->pkt_len; \
+		(_fd)->addr = (_mbuf)->buf_physaddr; \
+		(_fd)->bpid = _bpid; \
+	} while (0)
+
+static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
+							uint32_t ifid)
+{
+	struct pool_info_entry *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *mbuf;
+	void *ptr;
+	uint16_t offset =
+		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
+	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
+
+	PMD_RX_LOG(DEBUG, " FD--->MBUF");
+
+	/* Ignoring case when format != qm_fd_contig */
+	ptr = rte_dpaa_mem_ptov(fd->addr);
+	/* Ignoring the case when ptr would be NULL. That is only possible
+	 * in case of a corrupted packet.
+	 */
+
+	mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+	/* Prefetch the Parse results and packet data to L1 */
+	rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
+	rte_prefetch0((void *)((uint8_t *)ptr + offset));
+
+	mbuf->data_off = offset;
+	mbuf->data_len = length;
+	mbuf->pkt_len = length;
+
+	mbuf->port = ifid;
+	mbuf->nb_segs = 1;
+	mbuf->ol_flags = 0;
+	mbuf->next = NULL;
+	rte_mbuf_refcnt_set(mbuf, 1);
+
+	return mbuf;
+}
+
+uint16_t dpaa_eth_queue_rx(void *q,
+			   struct rte_mbuf **bufs,
+			   uint16_t nb_bufs)
+{
+	struct qman_fq *fq = q;
+	struct qm_dqrr_entry *dq;
+	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failure in affining portal");
+		return 0;
+	}
+
+	ret = qman_set_vdq(fq, (nb_bufs > DPAA_MAX_DEQUEUE_NUM_FRAMES) ?
+				DPAA_MAX_DEQUEUE_NUM_FRAMES : nb_bufs);
+	if (ret)
+		return 0;
+
+	do {
+		dq = qman_dequeue(fq);
+		if (!dq)
+			continue;
+		bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
+		qman_dqrr_consume(fq, dq);
+	} while (fq->flags & QMAN_FQ_STATE_VDQCR);
+
+	return num_rx;
+}
+
+static void *dpaa_get_pktbuf(struct pool_info_entry *bp_info)
+{
+	int ret;
+	uint64_t buf = 0;
+	struct bm_buffer bufs;
+
+	ret = bman_acquire(bp_info->bp, &bufs, 1, 0);
+	if (ret <= 0) {
+		PMD_DRV_LOG(WARNING, "Failed to allocate buffers %d", ret);
+		return (void *)buf;
+	}
+
+	PMD_RX_LOG(DEBUG, "got buffer 0x%llx from pool %d",
+		    bufs.addr, bufs.bpid);
+
+	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
+
+	return (void *)buf;
+}
+
+static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
+					     struct dpaa_if *dpaa_intf)
+{
+	struct rte_mbuf *dpaa_mbuf;
+
+	/* allocate pktbuffer on bpid for dpaa port */
+	dpaa_mbuf = dpaa_get_pktbuf(dpaa_intf->bp_info);
+	if (!dpaa_mbuf)
+		return NULL;
+
+	memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + mbuf->data_off, (void *)
+		((uint8_t *)(mbuf->buf_addr) + mbuf->data_off), mbuf->pkt_len);
+
+	/* Copy only the required fields */
+	dpaa_mbuf->data_off = mbuf->data_off;
+	dpaa_mbuf->pkt_len = mbuf->pkt_len;
+	dpaa_mbuf->ol_flags = mbuf->ol_flags;
+	dpaa_mbuf->packet_type = mbuf->packet_type;
+	dpaa_mbuf->tx_offload = mbuf->tx_offload;
+	rte_pktmbuf_free(mbuf);
+	return dpaa_mbuf;
+}
+
+uint16_t
+dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	struct rte_mbuf *mbuf, *mi = NULL;
+	struct rte_mempool *mp;
+	struct pool_info_entry *bp_info;
+	struct qm_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send, loop, i = 0;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failure in affining portal");
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+
+	while (nb_bufs) {
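+		/* Build FDs in bursts of at most MAX_TX_RING_SLOTS (8)
+		 * frames; (nb_bufs >> 3) is non-zero when 8 or more
+		 * frames remain.
+		 */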
+		frames_to_send = (nb_bufs >> 3) ? MAX_TX_RING_SLOTS : nb_bufs;
+		for (loop = 0; loop < frames_to_send; loop++, i++) {
+			mbuf = bufs[i];
+			if (RTE_MBUF_DIRECT(mbuf)) {
+				mp = mbuf->pool;
+			} else {
+				mi = rte_mbuf_from_indirect(mbuf);
+				mp = mi->pool;
+			}
+
+			bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+			if (mp->ops_index == bp_info->dpaa_ops_index) {
+				PMD_TX_LOG(DEBUG, "BMAN offloaded buffer, "
+					"mbuf: %p", mbuf);
+				if (mbuf->nb_segs == 1) {
+					if (RTE_MBUF_DIRECT(mbuf)) {
+						if (rte_mbuf_refcnt_read(mbuf) > 1) {
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], 0xff);
+							rte_mbuf_refcnt_update(mbuf, -1);
+						} else {
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], bp_info->bpid);
+						}
+					} else {
+						if (rte_mbuf_refcnt_read(mi) > 1) {
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], 0xff);
+						} else {
+							rte_mbuf_refcnt_update(mi, 1);
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], bp_info->bpid);
+						}
+						rte_pktmbuf_free(mbuf);
+					}
+				} else {
+					PMD_DRV_LOG(DEBUG, "multi-segment mbufs not supported");
+					/* Set frames_to_send & nb_bufs so
+					 * that only the frames built so far
+					 * are transmitted.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			} else {
+				struct qman_fq *txq = q;
+				struct dpaa_if *dpaa_intf = txq->dpaa_intf;
+
+				PMD_TX_LOG(DEBUG, "Non-BMAN offloaded buffer. "
+					"Allocating an offloaded buffer");
+				mbuf = dpaa_get_dmable_mbuf(mbuf, dpaa_intf);
+				if (!mbuf) {
+					PMD_DRV_LOG(DEBUG, "no dpaa buffers.");
+					/* Set frames_to_send & nb_bufs so
+					 * that only the frames built so far
+					 * are transmitted.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+
+				DPAA_MBUF_TO_CONTIG_FD(mbuf, &fd_arr[loop],
+						dpaa_intf->bp_info->bpid);
+			}
+		}
+
+send_pkts:
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qman_enqueue_multi(q, &fd_arr[loop],
+					frames_to_send - loop);
+		}
+		nb_bufs -= frames_to_send;
+	}
+
+	PMD_TX_LOG(DEBUG, "Transmitted %d buffers on queue: %p", i, q);
+
+	return i;
+}
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+		uint16_t nb_bufs __rte_unused)
+{
+	PMD_TX_LOG(DEBUG, "Drop all packets");
+
+	/* Drop all incoming packets. No need to free packets here
+	 * because the rte_eth framework frees them through the tx_buffer
+	 * callback when this function returns a count less than nb_bufs.
+	 */
+	return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
new file mode 100644
index 0000000..09f1aa4
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -0,0 +1,61 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_RXTX_H__
+#define __DPAA_RXTX_H__
+
+/* internal offset from where the IC is copied into the packet buffer */
+#define DEFAULT_ICIOF          32
+/* IC transfer size */
+#define DEFAULT_ICSZ	48
+
+/* IC offsets from buffer header address */
+#define DEFAULT_RX_ICEOF	16
+
+/** Maximum number of frames to be dequeued in a single Rx call */
+#define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
+/* FD structure masks and offset */
+#define DPAA_FD_FORMAT_MASK 0xE0000000
+#define DPAA_FD_OFFSET_MASK 0x1FF00000
+#define DPAA_FD_LENGTH_MASK 0xFFFFF
+#define DPAA_FD_FORMAT_SHIFT 29
+#define DPAA_FD_OFFSET_SHIFT 20
+
+uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+			      uint16_t nb_bufs __rte_unused);
+#endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 80e5530..6939bc5 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -181,6 +181,7 @@ endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_mempool_dpaa
 endif
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread
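
A minimal usage sketch for the new queue ops (port 0, `ret` and a
BMan-backed mempool `mp` are assumed; nb_desc and socket_id are
accepted but unused by this PMD):

	struct rte_eth_conf port_conf;

	memset(&port_conf, 0, sizeof(port_conf));
	ret = rte_eth_dev_configure(0, 1, 1, &port_conf);
	if (ret == 0)
		ret = rte_eth_rx_queue_setup(0, 0, 128, rte_socket_id(),
					     NULL, mp);
	if (ret == 0)
		ret = rte_eth_tx_queue_setup(0, 0, 512, rte_socket_id(),
					     NULL);
	if (ret == 0)
		ret = rte_eth_dev_start(0);

The number of Rx frame queues per interface defaults to
DPAA_DEFAULT_NUM_PCD_QUEUES and can be overridden through the
DPAA_NUM_RX_QUEUES environment variable.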

* [PATCH 25/38] net/dpaa: add support for MTU update
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (23 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-28 15:45   ` Ferruh Yigit
  2017-06-16  5:40 ` [PATCH 26/38] net/dpaa: add support for jumbo frames Shreyansh Jain
                   ` (13 subsequent siblings)
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 29ba47e..0b992fd 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -5,5 +5,6 @@
 ;
 [Features]
 Queue start/stop     = Y
+MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5a8d8af..6d33ff8 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -76,6 +76,26 @@
 static int is_global_init;
 
 static int
+dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (mtu < ETHER_MIN_MTU)
+		return -EINVAL;
+
+	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
+
+	if (mtu > ETHER_MAX_LEN)
+		return -1
+	dev->data->dev_conf.rxmode.jumbo_frame = 0;
+
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
+	return 0;
+}
+
+static int
 dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
@@ -198,6 +218,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+	.mtu_set		  = dpaa_mtu_set,
 };
 
 /* Initialise an Rx FQ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 26/38] net/dpaa: add support for jumbo frames
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (24 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 25/38] net/dpaa: add support for MTU update Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 27/38] net/dpaa: add support for link status update Shreyansh Jain
                   ` (12 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 15 ++++++++++++---
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 0b992fd..0e7493e 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -5,6 +5,7 @@
 ;
 [Features]
 Queue start/stop     = Y
+Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 6d33ff8..bb47a08 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -88,18 +88,27 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
 
 	if (mtu > ETHER_MAX_LEN)
-		return -1
-	dev->data->dev_conf.rxmode.jumbo_frame = 0;
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
 	return 0;
 }
 
 static int
-dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
+dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 {
 	PMD_INIT_FUNC_TRACE();
 
+	if (dev->data->dev_conf.rxmode.jumbo_frame == 1) {
+		if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
+		    DPAA_MAX_RX_PKT_LEN)
+			return dpaa_mtu_set(dev,
+				dev->data->dev_conf.rxmode.max_rx_pkt_len);
+		else
+			return -1;
+	}
 	return 0;
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread
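
A minimal sketch of enabling jumbo frames through the configure path
added above (port 0 and `ret` assumed; the length must not exceed
DPAA_MAX_RX_PKT_LEN, i.e. 10240 bytes):

	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.jumbo_frame = 1;
	conf.rxmode.max_rx_pkt_len = 9000;
	ret = rte_eth_dev_configure(0, 1, 1, &conf);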

* [PATCH 27/38] net/dpaa: add support for link status update
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (25 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 26/38] net/dpaa: add support for jumbo frames Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-28 15:46   ` Ferruh Yigit
  2017-06-16  5:40 ` [PATCH 28/38] net/dpaa: add support for device info Shreyansh Jain
                   ` (11 subsequent siblings)
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  2 ++
 drivers/net/dpaa/dpaa_ethdev.c    | 42 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 0e7493e..cfc76f7 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,8 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = P
+Link status          = Y
 Queue start/stop     = Y
 Jumbo frame          = Y
 MTU update           = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bb47a08..f3de967 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -143,6 +143,28 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int dpaa_eth_link_update(struct rte_eth_dev *dev,
+				int wait_to_complete __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_link *link = &dev->data->dev_link;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dpaa_intf->fif->mac_type == fman_mac_1g)
+		link->link_speed = 1000;
+	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
+		link->link_speed = 10000;
+	else
+		PMD_DRV_LOG(ERR, "invalid link_speed: %s, %d",
+			    dpaa_intf->name, dpaa_intf->fif->mac_type);
+
+	link->link_status = dpaa_intf->valid;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = ETH_LINK_AUTONEG;
+	return 0;
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -217,6 +239,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 	PMD_INIT_FUNC_TRACE();
 }
 
+static int dpaa_link_down(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+	return 0;
+}
+
+static int dpaa_link_up(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_start(dev);
+	return 0;
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -227,7 +265,11 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+
+	.link_update		  = dpaa_eth_link_update,
 	.mtu_set		  = dpaa_mtu_set,
+	.dev_set_link_down	  = dpaa_link_down,
+	.dev_set_link_up	  = dpaa_link_up,
 };
 
 /* Initialise an Rx FQ */
-- 
2.7.4
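
As a usage sketch (the port number is assumed), the generic link query below ends up in the new dpaa_eth_link_update(), which derives the speed from the FMan MAC type without waiting:

#include <stdio.h>
#include <rte_ethdev.h>

static void print_link(uint8_t port_id)
{
	struct rte_eth_link link;

	/* non-blocking query, served by dpaa_eth_link_update() */
	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u: %s, %u Mbps, %s-duplex\n", port_id,
	       link.link_status ? "up" : "down", link.link_speed,
	       link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full" : "half");
}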

* [PATCH 28/38] net/dpaa: add support for device info
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (26 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 27/38] net/dpaa: add support for link status update Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:40 ` [PATCH 29/38] net/dpaa: add support for promiscuous toggle Shreyansh Jain
                   ` (10 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f3de967..0cceffa 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -143,6 +143,23 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
+			      struct rte_eth_dev_info *dev_info)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->max_rx_queues = dpaa_intf->nb_rx_queues;
+	dev_info->max_tx_queues = dpaa_intf->nb_tx_queues;
+	dev_info->min_rx_bufsize = DPAA_MIN_RX_BUF_SIZE;
+	dev_info->max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
+	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
+	dev_info->max_hash_mac_addrs = 0;
+	dev_info->max_vfs = 0;
+	dev_info->max_vmdq_pools = ETH_16_POOLS;
+}
+
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 				int wait_to_complete __rte_unused)
 {
@@ -260,6 +277,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+	.dev_infos_get		  = dpaa_eth_dev_info,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
-- 
2.7.4
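
A short sketch of how an application would consume these limits before queue setup; the queue counts are the caller's own choices, not values from the patch:

#include <errno.h>
#include <rte_ethdev.h>

static int check_queue_limits(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info info;

	/* filled in by dpaa_eth_dev_info() */
	rte_eth_dev_info_get(port_id, &info);
	if (nb_rxq > info.max_rx_queues || nb_txq > info.max_tx_queues)
		return -EINVAL;
	return 0;
}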

* [PATCH 29/38] net/dpaa: add support for promiscuous toggle
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (27 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 28/38] net/dpaa: add support for device info Shreyansh Jain
@ 2017-06-16  5:40 ` Shreyansh Jain
  2017-06-16  5:41 ` [PATCH 30/38] net/dpaa: add support for multicast toggle Shreyansh Jain
                   ` (9 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:40 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index cfc76f7..a6984a4 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -9,5 +9,6 @@ Link status          = Y
 Queue start/stop     = Y
 Jumbo frame          = Y
 MTU update           = Y
+Promiscuous mode     = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 0cceffa..b3e6437 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -182,6 +182,25 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+
+static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_enable(dpaa_intf->fif);
+}
+
+static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_disable(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -285,6 +304,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
+	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.7.4

* [PATCH 30/38] net/dpaa: add support for multicast toggle
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (28 preceding siblings ...)
  2017-06-16  5:40 ` [PATCH 29/38] net/dpaa: add support for promiscuous toggle Shreyansh Jain
@ 2017-06-16  5:41 ` Shreyansh Jain
  2017-06-28 15:47   ` Ferruh Yigit
  2017-06-16  5:41 ` [PATCH 31/38] net/dpaa: add support for basic stats Shreyansh Jain
                   ` (8 subsequent siblings)
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:41 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  2 ++
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index a6984a4..80dd3ca 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -10,5 +10,7 @@ Queue start/stop     = Y
 Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index b3e6437..b0c60bb 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -201,6 +201,25 @@ static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
 	fman_if_promiscuous_disable(dpaa_intf->fif);
 }
 
+static void dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_set_mcast_filter_table(dpaa_intf->fif);
+}
+
+static void dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_reset_mcast_filter_table(dpaa_intf->fif);
+
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -306,6 +325,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.link_update		  = dpaa_eth_link_update,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
+	.allmulticast_enable	  = dpaa_eth_multicast_enable,
+	.allmulticast_disable	  = dpaa_eth_multicast_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.7.4
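
Together with the previous patch, this gives the usual Rx filtering toggles; a minimal sketch of driving them from an application (port number assumed):

#include <rte_ethdev.h>

/* These land in fman_if_promiscuous_enable/disable() and
 * fman_if_set/reset_mcast_filter_table() via the new dev_ops.
 */
static void set_rx_filters(uint8_t port_id, int promisc, int allmulti)
{
	if (promisc)
		rte_eth_promiscuous_enable(port_id);
	else
		rte_eth_promiscuous_disable(port_id);
	if (allmulti)
		rte_eth_allmulticast_enable(port_id);
	else
		rte_eth_allmulticast_disable(port_id);
}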

* [PATCH 31/38] net/dpaa: add support for basic stats
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (29 preceding siblings ...)
  2017-06-16  5:41 ` [PATCH 30/38] net/dpaa: add support for multicast toggle Shreyansh Jain
@ 2017-06-16  5:41 ` Shreyansh Jain
  2017-06-16  5:41 ` [PATCH 32/38] net/dpaa: add support for MAC address update Shreyansh Jain
                   ` (7 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:41 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 80dd3ca..54eb85c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -12,5 +12,6 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index b0c60bb..649b67f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -182,6 +182,24 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void dpaa_eth_stats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_stats *stats)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_get(dpaa_intf->fif, stats);
+}
+
+static void dpaa_eth_stats_reset(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_reset(dpaa_intf->fif);
+}
 
 static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
@@ -323,6 +341,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.stats_get		  = dpaa_eth_stats_get,
+	.stats_reset		  = dpaa_eth_stats_reset,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.allmulticast_enable	  = dpaa_eth_multicast_enable,
-- 
2.7.4
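
A hedged usage sketch; the counters themselves come straight from the FMan MAC via fman_if_stats_get():

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void dump_and_clear_stats(uint8_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("ipackets=%" PRIu64 " opackets=%" PRIu64
		       " ierrors=%" PRIu64 "\n",
		       stats.ipackets, stats.opackets, stats.ierrors);
	rte_eth_stats_reset(port_id);	/* fman_if_stats_reset() */
}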

* [PATCH 32/38] net/dpaa: add support for MAC address update
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (30 preceding siblings ...)
  2017-06-16  5:41 ` [PATCH 31/38] net/dpaa: add support for basic stats Shreyansh Jain
@ 2017-06-16  5:41 ` Shreyansh Jain
  2017-06-16  5:41 ` [PATCH 33/38] net/dpaa: add support for flow control Shreyansh Jain
                   ` (6 subsequent siblings)
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:41 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c | 55 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 649b67f..0eb5b71 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -328,6 +328,57 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *addr,
+			     uint32_t index,
+			     __rte_unused uint32_t pool)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fm_mac_add_exact_match_mac_addr(dpaa_intf->fif,
+					      addr->addr_bytes, index);
+
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Adding the MAC ADDR failed:"
+			" err = %d", ret);
+	return ret;
+}
+
+static void
+dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
+			  uint32_t index)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fm_mac_rem_exact_match_mac_addr(dpaa_intf->fif, index);
+
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Removing the MAC ADDR failed:"
+			" err = %d", ret);
+}
+
+static void
+dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
+		       struct ether_addr *addr)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fm_mac_add_exact_match_mac_addr(dpaa_intf->fif,
+					      addr->addr_bytes, 0);
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Setting the MAC ADDR failed %d", ret);
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -350,6 +401,10 @@ static struct eth_dev_ops dpaa_devops = {
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
+	.mac_addr_add		  = dpaa_dev_add_mac_addr,
+	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
+	.mac_addr_set		  = dpaa_dev_set_mac_addr,
+
 };
 
 /* Initialise an Rx FQ */
-- 
2.7.4
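
For illustration, installing an extra unicast filter through the generic API; the locally administered address below is a made-up example value:

#include <rte_ethdev.h>
#include <rte_ether.h>

static int add_unicast_filter(uint8_t port_id)
{
	struct ether_addr mac = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
	};

	/* ethdev picks the index and calls dpaa_dev_add_mac_addr() */
	return rte_eth_dev_mac_addr_add(port_id, &mac, 0);
}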

* [PATCH 33/38] net/dpaa: add support for flow control
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (31 preceding siblings ...)
  2017-06-16  5:41 ` [PATCH 32/38] net/dpaa: add support for MAC address update Shreyansh Jain
@ 2017-06-16  5:41 ` Shreyansh Jain
  2017-06-28 15:47   ` Ferruh Yigit
  2017-06-16  5:41 ` [PATCH 34/38] net/dpaa: add support for hashed RSS Shreyansh Jain
                   ` (5 subsequent siblings)
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:41 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 112 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 54eb85c..ea4c2fe 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -12,6 +12,7 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Flow control         = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 0eb5b71..3cfbae0 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -329,6 +329,85 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 }
 
 static int
+dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			PMD_DRV_LOG(ERR, "unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	net_fc = dpaa_intf->fc_conf;
+
+	if (fc_conf->high_water < fc_conf->low_water) {
+		PMD_DRV_LOG(ERR, "Incorrect Flow Control Configuration");
+		return -EINVAL;
+	}
+
+	if (fc_conf->mode == RTE_FC_NONE) {
+		return 0;
+	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_FC_FULL) {
+		fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water,
+					 fc_conf->low_water,
+				dpaa_intf->bp_info->bpid);
+		if (fc_conf->pause_time)
+			fman_if_set_fc_quanta(dpaa_intf->fif,
+					      fc_conf->pause_time);
+	}
+
+	/* Save the information in dpaa device */
+	net_fc->pause_time = fc_conf->pause_time;
+	net_fc->high_water = fc_conf->high_water;
+	net_fc->low_water = fc_conf->low_water;
+	net_fc->send_xon = fc_conf->send_xon;
+	net_fc->mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
+	net_fc->mode = fc_conf->mode;
+	net_fc->autoneg = fc_conf->autoneg;
+
+	return 0;
+}
+
+static int
+dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc = dpaa_intf->fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (net_fc) {
+		fc_conf->pause_time = net_fc->pause_time;
+		fc_conf->high_water = net_fc->high_water;
+		fc_conf->low_water = net_fc->low_water;
+		fc_conf->send_xon = net_fc->send_xon;
+		fc_conf->mac_ctrl_frame_fwd = net_fc->mac_ctrl_frame_fwd;
+		fc_conf->mode = net_fc->mode;
+		fc_conf->autoneg = net_fc->autoneg;
+		return 0;
+	}
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
+static int
 dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *addr,
 			     uint32_t index,
@@ -391,6 +470,9 @@ static struct eth_dev_ops dpaa_devops = {
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
+	.flow_ctrl_get		  = dpaa_flow_ctrl_get,
+	.flow_ctrl_set		  = dpaa_flow_ctrl_set,
+
 	.link_update		  = dpaa_eth_link_update,
 	.stats_get		  = dpaa_eth_stats_get,
 	.stats_reset		  = dpaa_eth_stats_reset,
@@ -407,6 +489,33 @@ static struct eth_dev_ops dpaa_devops = {
 
 };
 
+static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
+{
+	struct rte_eth_fc_conf *fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			PMD_DRV_LOG(ERR, "unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	fc_conf = dpaa_intf->fc_conf;
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
 /* Initialise an Rx FQ */
 static int dpaa_rx_queue_init(struct qman_fq *fq,
 			      uint32_t fqid)
@@ -553,6 +662,9 @@ static int dpaa_eth_dev_init(struct rte_eth_dev *eth_dev)
 
 	PMD_DRV_LOG(DEBUG, "all fqs created");
 
+	/* Get the initial configuration for flow control */
+	dpaa_fc_set_default(dpaa_intf);
+
 	/* reset bpool list, initialize bpool dynamically */
 	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
 		list_del(&bp->node);
-- 
2.7.4
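
A minimal sketch of programming Tx pause from an application; the threshold and quanta values are arbitrary examples, and the real limits depend on the BMan pool depth:

#include <rte_ethdev.h>

static int set_tx_pause(uint8_t port_id)
{
	struct rte_eth_fc_conf fc = {
		.mode = RTE_FC_TX_PAUSE,
		.high_water = 1024,	/* must be >= low_water */
		.low_water = 512,
		.pause_time = 0x680,	/* pause quanta */
	};

	/* dpaa_flow_ctrl_set() maps this onto fman_if_set_fc_threshold() */
	return rte_eth_dev_flow_ctrl_set(port_id, &fc);
}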

* [PATCH 34/38] net/dpaa: add support for hashed RSS
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (32 preceding siblings ...)
  2017-06-16  5:41 ` [PATCH 33/38] net/dpaa: add support for flow control Shreyansh Jain
@ 2017-06-16  5:41 ` Shreyansh Jain
  2017-06-28 15:48   ` Ferruh Yigit
  2017-06-16  5:41 ` [PATCH 35/38] net/dpaa: add support for packet type parsing Shreyansh Jain
                   ` (4 subsequent siblings)
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:41 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    |  1 +
 drivers/net/dpaa/dpaa_ethdev.h    | 10 ++++++++++
 3 files changed, 12 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index ea4c2fe..adb8458 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -12,6 +12,7 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+RSS hash             = Y
 Flow control         = Y
 Basic stats          = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3cfbae0..fa664d8 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -158,6 +158,7 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index da7f3be..a9d1c2c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -91,6 +91,16 @@
 #define DPAA_DEBUG_FQ_RX_ERROR   0
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
+#define DPAA_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP)
+
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
 		PKT_TX_TCP_CKSUM |               \
-- 
2.7.4
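
A hedged configuration sketch, using only hash types from the DPAA_RSS_OFFLOAD_ALL set added above:

#include <rte_ethdev.h>

static const struct rte_eth_conf rss_port_conf = {
	.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
	.rx_adv_conf = {
		.rss_conf = {
			/* a subset of what flow_type_rss_offloads advertises */
			.rss_hf = ETH_RSS_NONFRAG_IPV4_TCP |
				  ETH_RSS_NONFRAG_IPV4_UDP |
				  ETH_RSS_NONFRAG_IPV6_TCP |
				  ETH_RSS_NONFRAG_IPV6_UDP,
		},
	},
};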

* [PATCH 35/38] net/dpaa: add support for packet type parsing
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (33 preceding siblings ...)
  2017-06-16  5:41 ` [PATCH 34/38] net/dpaa: add support for hashed RSS Shreyansh Jain
@ 2017-06-16  5:41 ` Shreyansh Jain
  2017-06-28 15:50   ` Ferruh Yigit
  2017-06-16  5:41 ` [PATCH 36/38] net/dpaa: add support for checksum offload Shreyansh Jain
                   ` (3 subsequent siblings)
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:41 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_ethdev.c    |  26 ++++++
 drivers/net/dpaa/dpaa_rxtx.c      | 111 ++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 174 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 312 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index adb8458..2e19664 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -14,6 +14,7 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 RSS hash             = Y
 Flow control         = Y
+Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index fa664d8..4d2bae0 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -112,6 +112,27 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static const uint32_t *
+dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		/* TODO: add more types */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_SCTP,
+		/* list must be RTE_PTYPE_UNKNOWN-terminated for ethdev */
+		RTE_PTYPE_UNKNOWN
+	};
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->rx_pkt_burst == dpaa_eth_queue_rx)
+		return ptypes;
+	return NULL;
+}
 
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
@@ -159,6 +180,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_vfs = 0;
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
+	dev_info->rx_offload_capa =
+		(DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM  |
+		DEV_RX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
@@ -465,6 +490,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
 	.dev_infos_get		  = dpaa_eth_dev_info,
+	.dev_supported_ptypes_get = dpaa_supported_ptypes_get,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index d2ef513..e2db3cc 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,116 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
+				     uint64_t prs __rte_unused)
+{
+	PMD_RX_LOG(DEBUG, " Slow parsing");
+	/*TBD:XXX: to be implemented*/
+}
+
+static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
+					uint64_t fd_virt_addr)
+{
+	struct annotations_t *annot = GET_ANNOTATIONS(fd_virt_addr);
+	uint64_t prs = *((uint64_t *)(&annot->parse)) & DPAA_PARSE_MASK;
+
+	PMD_RX_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
+
+	switch (prs) {
+	case DPAA_PKT_TYPE_NONE:
+		m->packet_type = 0;
+		break;
+	case DPAA_PKT_TYPE_ETHER:
+		m->packet_type = RTE_PTYPE_L2_ETHER;
+		break;
+	case DPAA_PKT_TYPE_IPV4:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4;
+		break;
+	case DPAA_PKT_TYPE_IPV6:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6;
+		break;
+	case DPAA_PKT_TYPE_IPV4_FRAG:
+	case DPAA_PKT_TYPE_IPV4_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV6_FRAG:
+	case DPAA_PKT_TYPE_IPV6_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV4_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP;
+		break;
+	/* More switch cases can be added */
+	default:
+		dpaa_slow_parsing(m, prs);
+	}
+
+	m->tx_offload = annot->parse.ip_off[0];
+	m->tx_offload |= (annot->parse.l4_off - annot->parse.ip_off[0])
+					<< DPAA_PKT_L3_LEN_SHIFT;
+
+	/* Set the hash values */
+	m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
+	m->ol_flags = PKT_RX_RSS_HASH;
+
+	/* Check if Vlan is present */
+	if (prs & DPAA_PARSE_VLAN_MASK)
+		m->ol_flags |= PKT_RX_VLAN_PKT;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -117,6 +227,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 	mbuf->ol_flags = 0;
 	mbuf->next = NULL;
 	rte_mbuf_refcnt_set(mbuf, 1);
+	dpaa_eth_packet_info(mbuf, (uint64_t)mbuf->buf_addr);
 
 	return mbuf;
 }
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 09f1aa4..f688934 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -44,6 +44,7 @@
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/** <Maximum number of frames to be dequeued in a single rx call*/
+
 /* FD structure masks and offset */
 #define DPAA_FD_FORMAT_MASK 0xE0000000
 #define DPAA_FD_OFFSET_MASK 0x1FF00000
@@ -51,6 +52,179 @@
 #define DPAA_FD_FORMAT_SHIFT 29
 #define DPAA_FD_OFFSET_SHIFT 20
 
+/* Parsing mask (Little Endian) - 0x00E044ED00800000
+ *	Classification Plan ID 0x00
+ *	L4R 0xE0 -
+ *		0x20 - TCP
+ *		0x40 - UDP
+ *		0x80 - SCTP
+ *	L3R 0xEDC4 (in Big Endian) -
+ *		0x8000 - IPv4
+ *		0x4000 - IPv6
+ *		0x8140 - IPv4 Ext + Frag
+ *		0x8040 - IPv4 Frag
+ *		0x8100 - IPv4 Ext
+ *		0x4140 - IPv6 Ext + Frag
+ *		0x4040 - IPv6 Frag
+ *		0x4100 - IPv6 Ext
+ *	L2R 0x8000 (in Big Endian) -
+ *		0x8000 - Ethernet type
+ *	ShimR & Logical Port ID 0x0000
+ */
+#define DPAA_PARSE_MASK			0x00E044ED00800000
+#define DPAA_PARSE_VLAN_MASK		0x0000000000700000
+
+/* Parsed values (Little Endian) */
+#define DPAA_PKT_TYPE_NONE		0x0000000000000000
+#define DPAA_PKT_TYPE_ETHER		0x0000000000800000
+#define DPAA_PKT_TYPE_IPV4	(0x0000008000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV6	(0x0000004000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_GRE	(0x0000002000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV4_FRAG	(0x0000400000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_FRAG	(0x0000400000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_EXT	(0x0000000100000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_EXT	(0x0000000100000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_TCP	(0x0020000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_TCP	(0x0020000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_UDP	(0x0040000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_UDP	(0x0040000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_SCTP	(0x0080000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_SCTP	(0x0080000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_FRAG_TCP (0x0020000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_TCP (0x0020000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_UDP (0x0040000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_UDP (0x0040000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_SCTP (0x0080000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_SCTP (0x0080000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_EXT_UDP (0x0040000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_UDP (0x0040000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_IPV4_EXT_TCP (0x0020000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_TCP (0x0020000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_TUNNEL_4_4 (0x0000000800000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6 (0x0000000400000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6 (0x0000000400000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_4 (0x0000000800000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_UDP (0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_UDP (0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_UDP (0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_UDP (0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_TCP (0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_TCP (0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_TCP (0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_TCP (0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_L3_LEN_SHIFT	7
+
+/**
+ * FMan parse result array
+ */
+struct dpaa_eth_parse_results_t {
+	 uint8_t     lpid;		 /**< Logical port id */
+	 uint8_t     shimr;		 /**< Shim header result  */
+	 union {
+		uint16_t              l2r;	/**< Layer 2 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      ethernet:1;
+			uint16_t      vlan:1;
+			uint16_t      llc_snap:1;
+			uint16_t      mpls:1;
+			uint16_t      ppoe_ppp:1;
+			uint16_t      unused_1:3;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      eth_frame_type:2;
+			uint16_t      l2r_err:5;
+			/*00-unicast, 01-multicast, 11-broadcast*/
+#else
+			uint16_t      l2r_err:5;
+			uint16_t      eth_frame_type:2;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      unused_1:3;
+			uint16_t      ppoe_ppp:1;
+			uint16_t      mpls:1;
+			uint16_t      llc_snap:1;
+			uint16_t      vlan:1;
+			uint16_t      ethernet:1;
+#endif
+		}__attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint16_t              l3r;	/**< Layer 3 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      first_ipv4:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      gre:1;
+			uint16_t      min_enc:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      last_ipv6:1;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      last_ip_err_code:3;
+#else
+			uint16_t      last_ip_err_code:3;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      last_ipv6:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      min_enc:1;
+			uint16_t      gre:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      first_ipv4:1;
+#endif
+#define first_ip_option        first_ip_err_code & 0x01
+#define first_unknown_ip_proto first_ip_err_code & 0x02
+#define first_fragmented       first_ip_err_code & 0x04
+#define first_ip_type          first_ip_err_code & 0x18
+
+		}__attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint8_t               l4r;	/**< Layer 4 result */
+		struct{
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint8_t	       l4_type:3;
+			uint8_t	       l4_info_err:1;
+			uint8_t	       l4_result:4; /*if type IPSec: 1 ESP, 2 AH*/
+#else
+			uint8_t        l4_result:4; /*if type IPSec: 1 ESP, 2 AH*/
+			uint8_t        l4_info_err:1;
+			uint8_t        l4_type:3;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 uint8_t     cplan;		 /**< Classification plan id */
+	 uint16_t    nxthdr;		 /**< Next Header  */
+	 uint16_t    cksum;		 /**< Checksum */
+	 uint32_t    lcv;		 /**< LCV */
+	 uint8_t     shim_off[3];	 /**< Shim offset */
+	 uint8_t     eth_off;		 /**< ETH offset */
+	 uint8_t     llc_snap_off;	 /**< LLC_SNAP offset */
+	 uint8_t     vlan_off[2];	 /**< VLAN offset */
+	 uint8_t     etype_off;		 /**< ETYPE offset */
+	 uint8_t     pppoe_off;		 /**< PPP offset */
+	 uint8_t     mpls_off[2];	 /**< MPLS offset */
+	 uint8_t     ip_off[2];		 /**< IP offset */
+	 uint8_t     gre_off;		 /**< GRE offset */
+	 uint8_t     l4_off;		 /**< Layer 4 offset */
+	 uint8_t     nxthdr_off;	 /**< Parser end point */
+} __attribute__ ((__packed__));
+
+/* The structure is the Prepended Data to the Frame which is used by FMAN */
+struct annotations_t {
+	uint8_t reserved[DEFAULT_RX_ICEOF];
+	struct dpaa_eth_parse_results_t parse;	/**< Pointer to Parsed result*/
+	uint64_t reserved1;
+	uint64_t hash;			/**< Hash Result */
+};
+
+#define GET_ANNOTATIONS(_buf) \
+	(struct annotations_t *)(_buf)
+
+#define GET_RX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
+
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
-- 
2.7.4
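
For illustration, the ptype values filled in by dpaa_eth_packet_info() can be consumed generically; a sketch of an application-side check:

#include <rte_mbuf.h>

/* true for frames the FMan classified as Ether/IPv4/UDP */
static int is_ipv4_udp(const struct rte_mbuf *m)
{
	return (m->packet_type & (RTE_PTYPE_L3_MASK | RTE_PTYPE_L4_MASK)) ==
	       (RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP);
}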

* [PATCH 36/38] net/dpaa: add support for checksum offload
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (34 preceding siblings ...)
  2017-06-16  5:41 ` [PATCH 35/38] net/dpaa: add support for packet type parsing Shreyansh Jain
@ 2017-06-16  5:41 ` Shreyansh Jain
  2017-06-28 15:50   ` Ferruh Yigit
  2017-06-16  5:41 ` [PATCH 37/38] net/dpaa: add support for Scattered Rx Shreyansh Jain
                   ` (2 subsequent siblings)
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:41 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  4 ++
 drivers/net/dpaa/dpaa_rxtx.c      | 88 +++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 19 +++++++++
 4 files changed, 113 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 2e19664..c8e3561 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -14,6 +14,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 RSS hash             = Y
 Flow control         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4d2bae0..da14a1c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -184,6 +184,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 		(DEV_RX_OFFLOAD_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_UDP_CKSUM  |
 		DEV_RX_OFFLOAD_TCP_CKSUM);
+	dev_info->tx_offload_capa =
+		(DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index e2db3cc..eef0d49 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -195,6 +195,82 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
 		m->ol_flags |= PKT_RX_VLAN_PKT;
 }
 
+static inline void dpaa_checksum(struct rte_mbuf *mbuf)
+{
+	struct ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf, struct ether_hdr *);
+	char *l3_hdr = (char *)eth_hdr + mbuf->l2_len;
+	struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+	struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	PMD_TX_LOG(DEBUG, "Calculating checksum for mbuf: %p", mbuf);
+
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	    ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	    RTE_PTYPE_L3_IPV4_EXT)) {
+		ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+		ipv4_hdr->hdr_checksum = 0;
+		ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
+	} else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6_EXT))
+		ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP) {
+		struct tcp_hdr *tcp_hdr = (struct tcp_hdr *)(l3_hdr +
+					  mbuf->l3_len);
+		tcp_hdr->cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			tcp_hdr->cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+							       tcp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			tcp_hdr->cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+							       tcp_hdr);
+	} else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) ==
+		   RTE_PTYPE_L4_UDP) {
+		struct udp_hdr *udp_hdr = (struct udp_hdr *)(l3_hdr +
+							     mbuf->l3_len);
+		udp_hdr->dgram_cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			udp_hdr->dgram_cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+								     udp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			udp_hdr->dgram_cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+								     udp_hdr);
+	}
+}
+
+static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
+					 struct qm_fd *fd, char *prs_buf)
+{
+	struct dpaa_eth_parse_results_t *prs;
+
+	PMD_TX_LOG(DEBUG, " Offloading checksum for mbuf: %p", mbuf);
+
+	prs = GET_TX_PRS(prs_buf);
+	prs->l3r = 0;
+	prs->l4r = 0;
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	   RTE_PTYPE_L3_IPV4_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV4;
+	else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		 ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		RTE_PTYPE_L3_IPV6_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV6;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_TCP;
+	else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_UDP;
+
+	prs->ip_off[0] = mbuf->l2_len;
+	prs->l4_off = mbuf->l3_len + mbuf->l2_len;
+	/* Enable L3 (and L4, if TCP or UDP) HW checksum*/
+	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -363,6 +439,18 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 						}
 						rte_pktmbuf_free(mbuf);
 					}
+					if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+						if (mbuf->data_off < DEFAULT_TX_ICEOF +
+							sizeof(struct dpaa_eth_parse_results_t)) {
+							PMD_DRV_LOG(DEBUG, "Checksum offload err: "
+								"not enough headroom for "
+								"checksum offload; "
+								"falling back to software checksum.");
+							dpaa_checksum(mbuf);
+						} else
+							dpaa_checksum_offload(mbuf, &fd_arr[loop],
+								mbuf->buf_addr);
+					}
 				} else {
 					PMD_DRV_LOG(DEBUG, "Number of Segments not supported");
 					/* Set frames_to_send & nb_bufs so that
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index f688934..b1c292b 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -41,6 +41,22 @@
 
 /* IC offsets from buffer header address */
 #define DEFAULT_RX_ICEOF	16
+#define DEFAULT_TX_ICEOF	16
+
+/*
+ * Values for the L3R field of the FM Parse Results
+ */
+/* L3 Type field: First IP Present IPv4 */
+#define DPAA_L3_PARSE_RESULT_IPV4 0x80
+/* L3 Type field: First IP Present IPv6 */
+#define DPAA_L3_PARSE_RESULT_IPV6	0x40
+/* Values for the L4R field of the FM Parse Results
+ * See $8.8.4.7.20 - L4 HXS - L4 Results from DPAA-Rev2 Reference Manual.
+ */
+/* L4 Type field: UDP */
+#define DPAA_L4_PARSE_RESULT_UDP	0x40
+/* L4 Type field: TCP */
+#define DPAA_L4_PARSE_RESULT_TCP	0x20
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/** <Maximum number of frames to be dequeued in a single rx call*/
@@ -225,6 +241,9 @@ struct annotations_t {
 #define GET_RX_PRS(_buf) \
 	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
 
+#define GET_TX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_TX_ICEOF)
+
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
-- 
2.7.4
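
As a usage sketch: the Tx path above keys off DPAA_TX_CKSUM_OFFLOAD_MASK and reads l2_len/l3_len, so a sender has to prepare its mbufs roughly like this (a plain IPv4/UDP frame is assumed):

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

static void request_tx_cksum(struct rte_mbuf *m)
{
	/* dpaa_checksum_offload() derives the FMan parse offsets from these */
	m->l2_len = sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv4_hdr);
	m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM;
}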

* [PATCH 37/38] net/dpaa: add support for Scattered Rx
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (35 preceding siblings ...)
  2017-06-16  5:41 ` [PATCH 36/38] net/dpaa: add support for checksum offload Shreyansh Jain
@ 2017-06-16  5:41 ` Shreyansh Jain
  2017-06-16  5:41 ` [PATCH 38/38] net/dpaa: add packet dump for debugging Shreyansh Jain
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  38 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:41 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_rxtx.c      | 160 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      |   2 +
 3 files changed, 163 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index c8e3561..d86e495 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -9,6 +9,7 @@ Link status          = Y
 Queue start/stop     = Y
 Jumbo frame          = Y
 MTU update           = Y
+Scattered Rx         = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index eef0d49..9af3732 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -271,18 +271,82 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
 	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
 }
 
+struct rte_mbuf *dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
+{
+	struct pool_info_entry *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+	struct qm_sg_entry *sgt, *sg_temp;
+	void *vaddr, *sg_vaddr;
+	int i = 0;
+	uint16_t fd_offset = fd->offset;
+
+	PMD_RX_LOG(DEBUG, "Received an SG frame");
+
+	vaddr = rte_dpaa_mem_ptov(qm_fd_addr(fd));
+	if (!vaddr) {
+		PMD_DRV_LOG(ERR, "unable to convert physical address");
+		return NULL;
+	}
+	sgt = vaddr + fd_offset;
+	sg_temp = &sgt[i++];
+	hw_sg_to_cpu(sg_temp);
+	temp = (struct rte_mbuf *)((char *)vaddr - bp_info->meta_data_size);
+	sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+
+	first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						bp_info->meta_data_size);
+	first_seg->data_off = sg_temp->offset;
+	first_seg->data_len = sg_temp->length;
+	first_seg->pkt_len = sg_temp->length;
+	rte_mbuf_refcnt_set(first_seg, 1);
+
+	first_seg->port = ifid;
+	first_seg->nb_segs = 1;
+	first_seg->ol_flags = 0;
+	prev_seg = first_seg;
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		hw_sg_to_cpu(sg_temp);
+		sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+		cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						      bp_info->meta_data_size);
+		cur_seg->data_off = sg_temp->offset;
+		cur_seg->data_len = sg_temp->length;
+		first_seg->pkt_len += sg_temp->length;
+		first_seg->nb_segs += 1;
+		rte_mbuf_refcnt_set(cur_seg, 1);
+		prev_seg->next = cur_seg;
+		if (sg_temp->final) {
+			cur_seg->next = NULL;
+			break;
+		} else {
+			prev_seg = cur_seg;
+		}
+	}
+
+	dpaa_eth_packet_info(first_seg, (uint64_t)vaddr);
+	rte_pktmbuf_free_seg(temp);
+
+	return first_seg;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
 	struct pool_info_entry *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
 	struct rte_mbuf *mbuf;
 	void *ptr;
+	uint8_t format =
+		(fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
 	uint16_t offset =
 		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
 	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
 
 	PMD_RX_LOG(DEBUG, " FD--->MBUF");
 
+	if (unlikely(format == qm_fd_sg))
+		return dpaa_eth_sg_to_mbuf(fd, ifid);
+
 	/* Ignoring case when format != qm_fd_contig */
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible in case
@@ -385,6 +449,94 @@ static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
 	return dpaa_mbuf;
 }
 
+int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+		struct qm_fd *fd,
+		uint32_t bpid)
+{
+	struct rte_mbuf *cur_seg = mbuf, *prev_seg = NULL;
+	struct pool_info_entry *bp_info = DPAA_BPID_TO_POOL_INFO(bpid);
+	struct rte_mbuf *temp, *mi;
+	struct qm_sg_entry *sg_temp, *sgt;
+	int i = 0;
+
+	PMD_TX_LOG(DEBUG, "Creating SG FD to transmit");
+
+	temp = rte_pktmbuf_alloc(bp_info->mp);
+	if (!temp) {
+		PMD_DRV_LOG(ERR, "Failure in allocation mbuf");
+		return -1;
+	}
+	if (temp->buf_len < ((mbuf->nb_segs * sizeof(struct qm_sg_entry))
+				+ temp->data_off)) {
+		PMD_DRV_LOG(ERR, "Insufficient space in mbuf for SG entries");
+		rte_pktmbuf_free(temp);
+		return -1;
+	}
+
+	fd->cmd = 0;
+	fd->opaque_addr = 0;
+
+	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+		if (temp->data_off < DEFAULT_TX_ICEOF
+			+ sizeof(struct dpaa_eth_parse_results_t))
+			temp->data_off = DEFAULT_TX_ICEOF
+				+ sizeof(struct dpaa_eth_parse_results_t);
+		dcbz_64(temp->buf_addr);
+		dpaa_checksum_offload(mbuf, fd, temp->buf_addr);
+	}
+
+	sgt = temp->buf_addr + temp->data_off;
+	fd->format = QM_FD_SG;
+	fd->addr = temp->buf_physaddr;
+	fd->offset = temp->data_off;
+	fd->bpid = bpid;
+	fd->length20 = mbuf->pkt_len;
+
+
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		sg_temp->opaque = 0;
+		sg_temp->val = 0;
+		sg_temp->addr = cur_seg->buf_physaddr;
+		sg_temp->offset = cur_seg->data_off;
+		sg_temp->length = cur_seg->data_len;
+		if (RTE_MBUF_DIRECT(cur_seg)) {
+			if (rte_mbuf_refcnt_read(cur_seg) > 1) {
+				/*If refcnt > 1, invalid bpid is set to ensure
+				 * buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+				rte_mbuf_refcnt_update(cur_seg, -1);
+			} else
+				sg_temp->bpid =
+					DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
+			cur_seg = cur_seg->next;
+		} else {
+			/* Get owner MBUF from indirect buffer */
+			mi = rte_mbuf_from_indirect(cur_seg);
+			if (rte_mbuf_refcnt_read(mi) > 1) {
+				/*If refcnt > 1, invalid bpid is set to ensure
+				 * owner buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+			} else {
+				sg_temp->bpid = DPAA_MEMPOOL_TO_BPID(mi->pool);
+				rte_mbuf_refcnt_update(mi, 1);
+			}
+			prev_seg = cur_seg;
+			cur_seg = cur_seg->next;
+			prev_seg->next = NULL;
+			rte_pktmbuf_free(prev_seg);
+		}
+		if (cur_seg == NULL) {
+			sg_temp->final = 1;
+			cpu_to_hw_sg(sg_temp);
+			break;
+		}
+		cpu_to_hw_sg(sg_temp);
+	}
+	return 0;
+}
+
 uint16_t
 dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 {
@@ -451,6 +603,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 							dpaa_checksum_offload(mbuf, &fd_arr[loop],
 								mbuf->buf_addr);
 					}
+				} else if (mbuf->nb_segs > 1 && mbuf->nb_segs <= DPAA_SGT_MAX_ENTRIES) {
+					if (dpaa_eth_mbuf_to_sg_fd(mbuf,
+						&fd_arr[loop], bp_info->bpid)) {
+						PMD_DRV_LOG(DEBUG, "Unable to create Scatter Gather FD");
+						frames_to_send = loop;
+						nb_bufs = loop;
+						goto send_pkts;
+					}
 				} else {
 					PMD_DRV_LOG(DEBUG, "Number of Segments not supported");
 					/* Set frames_to_send & nb_bufs so that
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index b1c292b..4d89f32 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -58,6 +58,8 @@
 /* L4 Type field: TCP */
 #define DPAA_L4_PARSE_RESULT_TCP	0x20
 
+#define DPAA_SGT_MAX_ENTRIES 16 /* maximum number of entries in SG Table */
+
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/** <Maximum number of frames to be dequeued in a single rx call*/
 
-- 
2.7.4
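
For reference, a sketch of how a consumer would walk the chain built by dpaa_eth_sg_to_mbuf(); the total must match the pkt_len accumulated in the first segment:

#include <rte_mbuf.h>

static uint32_t chain_bytes(const struct rte_mbuf *m)
{
	uint32_t total = 0;

	while (m != NULL) {
		total += rte_pktmbuf_data_len(m);	/* per-segment data_len */
		m = m->next;
	}
	return total;
}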

* [PATCH 38/38] net/dpaa: add packet dump for debugging
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (36 preceding siblings ...)
  2017-06-16  5:41 ` [PATCH 37/38] net/dpaa: add support for Scattered Rx Shreyansh Jain
@ 2017-06-16  5:41 ` Shreyansh Jain
  2017-06-28 15:51   ` Ferruh Yigit
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  38 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  5:41 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc |  2 ++
 drivers/net/dpaa/dpaa_ethdev.c           | 42 ++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.c             | 27 +++++++++++++++++++-
 3 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 4530e18..d74af89 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -51,6 +51,8 @@ CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY=n
+CONFIG_RTE_LIBRTE_DPAA_CHECKING=n
 
 # NXP DPAA Mempool
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index da14a1c..e34f891 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -624,6 +624,39 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
 	return ret;
 }
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
+static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "reserve debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return -EINVAL;
+	}
+	/* "map" this Rx FQ to one of the interfaces Tx FQID */
+	PMD_DRV_LOG(DEBUG, "creating debug fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "create debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+	opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		PMD_DRV_LOG(ERR, "init debug fqid %d failed with ret: %d",
+			    fqid, ret);
+	return ret;
+}
+#endif
+
 /* Initialise a network interface */
 static int dpaa_eth_dev_init(struct rte_eth_dev *eth_dev)
 {
@@ -691,6 +724,15 @@ static int dpaa_eth_dev_init(struct rte_eth_dev *eth_dev)
 	}
 	dpaa_intf->nb_tx_queues = num_cores;
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+#endif
+
 	PMD_DRV_LOG(DEBUG, "all fqs created");
 
 	/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 9af3732..ee82766 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,31 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+#if (defined RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY)
+void dpaa_display_frame(const struct qm_fd *fd)
+{
+	int ii;
+	char *ptr;
+
+	printf("%s::bpid %x addr %08x%08x, format %d off %d, len %d stat %x\n",
+	       __func__, fd->bpid, fd->addr_hi, fd->addr_lo, fd->format,
+		fd->offset, fd->length20, fd->status);
+
+	ptr = (char *)rte_dpaa_mem_ptov(fd->addr);
+	ptr += fd->offset;
+	printf("%02x ", *ptr);
+	for (ii = 1; ii < fd->length20; ii++) {
+		printf("%02x ", *ptr);
+		if ((ii % 16) == 0)
+			printf("\n");
+		ptr++;
+	}
+	printf("\n");
+}
+#else
+#define dpaa_display_frame(a)
+#endif
+
 static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
 				     uint64_t prs __rte_unused)
 {
@@ -348,6 +373,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 		return dpaa_eth_sg_to_mbuf(fd, ifid);
 
 	/* Ignoring case when format != qm_fd_contig */
+	dpaa_display_frame(fd);
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible in case
 	 * of a corrupted packet
@@ -491,7 +517,6 @@ int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 	fd->bpid = bpid;
 	fd->length20 = mbuf->pkt_len;
 
-
 	while (i < DPAA_SGT_MAX_ENTRIES) {
 		sg_temp = &sgt[i++];
 		sg_temp->opaque = 0;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-16  5:40 ` [PATCH 01/38] eal: add support for 24 40 and 48 bit operations Shreyansh Jain
@ 2017-06-16  8:57   ` Bruce Richardson
  2017-06-16  9:21     ` Shreyansh Jain
  2017-10-02 10:16   ` Avi Kivity
  1 sibling, 1 reply; 367+ messages in thread
From: Bruce Richardson @ 2017-06-16  8:57 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain wrote:
> From: Hemant Agrawal <hemant.agrawal@nxp.com>
> 
> Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit width
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  .../common/include/generic/rte_byteorder.h         | 78 ++++++++++++++++++++++
>  1 file changed, 78 insertions(+)
> 
Are these really common enough for inclusion in a generic EAL file?
Would they be better being driver-specific, so that we don't end up with
lots of extra byte-swap routines for each possible size used by a
driver?

/Bruce

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-16  8:57   ` Bruce Richardson
@ 2017-06-16  9:21     ` Shreyansh Jain
  2017-06-16  9:42       ` Thomas Monjalon
  2017-06-16 10:34       ` Adrien Mazarguil
  0 siblings, 2 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-16  9:21 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, ferruh.yigit, Hemant Agrawal

Hi Bruce,

> -----Original Message-----
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> Sent: Friday, June 16, 2017 2:27 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>
> Cc: dev@dpdk.org; ferruh.yigit@intel.com; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH 01/38] eal: add support for 24 40 and 48 bit
> operations
> 
> On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain wrote:
> > From: Hemant Agrawal <hemant.agrawal@nxp.com>
> >
> > Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit width
> >
> > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > ---
> >  .../common/include/generic/rte_byteorder.h         | 78
> ++++++++++++++++++++++
> >  1 file changed, 78 insertions(+)
> >
> Are these really common enough for inclusion in an generic EAL file?
> Would they be better being driver specific, so that we don't end up with
> lots of extra byte-swap routines for each possible size used by a
> driver.
 
The reasoning was to keep all the bit/byte swaps in a single place, in case
they are useful for others.

From the DPAA perspective, these macros can live anywhere. If someone else
also has a use for them (now or in the near future), we can consider keeping
them in EAL.
Otherwise, if I don't get many responses in a few days, I will move them to
the DPAA driver in the next version of this series.

-
Shreyansh

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-16  9:21     ` Shreyansh Jain
@ 2017-06-16  9:42       ` Thomas Monjalon
  2017-06-16 10:34       ` Adrien Mazarguil
  1 sibling, 0 replies; 367+ messages in thread
From: Thomas Monjalon @ 2017-06-16  9:42 UTC (permalink / raw)
  To: Shreyansh Jain, Bruce Richardson; +Cc: dev, ferruh.yigit, Hemant Agrawal

16/06/2017 11:21, Shreyansh Jain:
> Hi Bruce,
> 
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain wrote:
> > > From: Hemant Agrawal <hemant.agrawal@nxp.com>
> > >
> > > Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit width
> > >
> > > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > > ---
> > >  .../common/include/generic/rte_byteorder.h         | 78
> > ++++++++++++++++++++++
> > >  1 file changed, 78 insertions(+)
> > >
> > Are these really common enough for inclusion in an generic EAL file?
> > Would they be better being driver specific, so that we don't end up with
> > lots of extra byte-swap routines for each possible size used by a
> > driver.
>  
> Reasoning was to keep all bit/byte swap at a single place and if it is
> useful for others.
> 
> From DPAA perspective, these macro can be anywhere. In case someone else too
> has use of this (now or in near-future), probably then we can consider this
> in EAL.
> Else, if I don't get much responses in a few days, I will shift them to
> DPAA driver in next version of this series.

I prefer keeping common functions in a common place.
So I like this patch.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-16  9:21     ` Shreyansh Jain
  2017-06-16  9:42       ` Thomas Monjalon
@ 2017-06-16 10:34       ` Adrien Mazarguil
  2017-06-19 11:00         ` Shreyansh Jain
  1 sibling, 1 reply; 367+ messages in thread
From: Adrien Mazarguil @ 2017-06-16 10:34 UTC (permalink / raw)
  To: Shreyansh Jain
  Cc: Bruce Richardson, dev, ferruh.yigit, Hemant Agrawal, Thomas Monjalon

Hi Shreyansh,

On Fri, Jun 16, 2017 at 09:21:35AM +0000, Shreyansh Jain wrote:
> Hi Bruce,
> 
> > -----Original Message-----
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > Sent: Friday, June 16, 2017 2:27 PM
> > To: Shreyansh Jain <shreyansh.jain@nxp.com>
> > Cc: dev@dpdk.org; ferruh.yigit@intel.com; Hemant Agrawal
> > <hemant.agrawal@nxp.com>
> > Subject: Re: [dpdk-dev] [PATCH 01/38] eal: add support for 24 40 and 48 bit
> > operations
> > 
> > On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain wrote:
> > > From: Hemant Agrawal <hemant.agrawal@nxp.com>
> > >
> > > Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit width
> > >
> > > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > > ---
> > >  .../common/include/generic/rte_byteorder.h         | 78
> > ++++++++++++++++++++++
> > >  1 file changed, 78 insertions(+)
> > >
> > Are these really common enough for inclusion in an generic EAL file?
> > Would they be better being driver specific, so that we don't end up with
> > lots of extra byte-swap routines for each possible size used by a
> > driver.
>  
> Reasoning was to keep all bit/byte swap at a single place and if it is
> useful for others.
> 
> From DPAA perspective, these macro can be anywhere. In case someone else too
> has use of this (now or in near-future), probably then we can consider this
> in EAL.
> Else, if I don't get much responses in a few days, I will shift them to
> DPAA driver in next version of this series.

While I'm not against exposing exotic byte swapping functions, they are not
completely safe and I'm not sure they should be part of public header files
on that basis.

Problem is their storage size is larger than the number of bytes they deal
with, which raises the question: are filler bytes prepended or appended to
the converted value? How about input values in non-native order? Answering
that is not so easy as it depends on the use case. We actually had a similar
issue when defining VXLAN's VNI field for rte_flow, which is 24-bit in
network order.

Take rte_constant_bswap48() for instance, assuming input value is
little-endian, output is supposed to be big-endian. While the shifts are
correct, filler bytes are not in the right place for a big-endian system,
and the resulting value stored on uint64_t cannot be used as-is. Again, that
depends on the use case, it could be correct if the resulting value was to
be used as is on a little-endian system.
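
For illustration, a minimal sketch of such a 48-bit constant swap, in the
style of the existing RTE_STATIC_BSWAP16/32/64 helpers (the exact macro in
the posted patch may differ):

 #define RTE_STATIC_BSWAP48(v) \
 	((((uint64_t)(v) & UINT64_C(0x0000000000ff)) << 40) | \
 	 (((uint64_t)(v) & UINT64_C(0x00000000ff00)) << 24) | \
 	 (((uint64_t)(v) & UINT64_C(0x000000ff0000)) <<  8) | \
 	 (((uint64_t)(v) & UINT64_C(0x0000ff000000)) >>  8) | \
 	 (((uint64_t)(v) & UINT64_C(0x00ff00000000)) >> 24) | \
 	 (((uint64_t)(v) & UINT64_C(0xff0000000000)) >> 40))

 /* RTE_STATIC_BSWAP48(0x112233445566) == 0x665544332211: the swapped
  * value lands in the low 6 bytes of the uint64_t, so a big-endian
  * store puts the two filler bytes first in memory. */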

I think the only safe way to deal with that is by defining specific types of
the proper size, e.g.:

 typedef uint8_t uint48_t[6];

These are cumbersome and cannot be used like normal integers though. With
such types, byte-swapping functions become meaningless.
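
As a rough sketch, writing such a size-exact type out in big-endian order
would then become an explicit per-byte operation (illustrative only):

 static inline void
 uint48_store_be(uint48_t dst, uint64_t host_val)
 {
 	int i;

 	/* emit the low 6 bytes of host_val, most significant first */
 	for (i = 0; i < 6; i++)
 		dst[i] = (uint8_t)(host_val >> (8 * (5 - i)));
 }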

Since these are supposed to be rather simple functions, I'm not sure
handling/documenting all this complexity in rte_byteorder.h makes sense.

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-16 10:34       ` Adrien Mazarguil
@ 2017-06-19 11:00         ` Shreyansh Jain
  2017-06-19 13:52           ` Wiles, Keith
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-19 11:00 UTC (permalink / raw)
  To: Adrien Mazarguil
  Cc: Bruce Richardson, dev, ferruh.yigit, Hemant Agrawal, Thomas Monjalon

Hello Adrien,

On Friday 16 June 2017 04:04 PM, Adrien Mazarguil wrote:
> Hi Shreyansh,
> 
> On Fri, Jun 16, 2017 at 09:21:35AM +0000, Shreyansh Jain wrote:
>> Hi Bruce,
>>
>>> -----Original Message-----
>>> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
>>> Sent: Friday, June 16, 2017 2:27 PM
>>> To: Shreyansh Jain <shreyansh.jain@nxp.com>
>>> Cc: dev@dpdk.org; ferruh.yigit@intel.com; Hemant Agrawal
>>> <hemant.agrawal@nxp.com>
>>> Subject: Re: [dpdk-dev] [PATCH 01/38] eal: add support for 24 40 and 48 bit
>>> operations
>>>
>>> On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain wrote:
>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>
>>>> Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit width
>>>>
>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>> ---
>>>>   .../common/include/generic/rte_byteorder.h         | 78
>>> ++++++++++++++++++++++
>>>>   1 file changed, 78 insertions(+)
>>>>
>>> Are these really common enough for inclusion in an generic EAL file?
>>> Would they be better being driver specific, so that we don't end up with
>>> lots of extra byte-swap routines for each possible size used by a
>>> driver.
>>   
>> Reasoning was to keep all bit/byte swap at a single place and if it is
>> useful for others.
>>
>>  From DPAA perspective, these macro can be anywhere. In case someone else too
>> has use of this (now or in near-future), probably then we can consider this
>> in EAL.
>> Else, if I don't get much responses in a few days, I will shift them to
>> DPAA driver in next version of this series.
> 
> While I'm not against exposing exotic byte swapping functions, they are not
> completely safe and I'm not sure they should be part of public header files
> on that basis.
> 
> Problem is their storage size is larger than the number of bytes they deal
> with, which raises the question: are filler bytes prepended or appended to
> the converted value? How about input values in non-native order? Answering
> that is not so easy as it depends on the use case. We actually had a similar
> issue when defining VXLAN's VNI field for rte_flow, which is 24-bit in
> network order.
> 
> Take rte_constant_bswap48() for instance, assuming input value is
> little-endian, output is supposed to be big-endian. While the shifts are
> correct, filler bytes are not in the right place for a big-endian system,
> and the resulting value stored on uint64_t cannot be used as-is. Again, that
> depends on the use case, it could be correct if the resulting value was to
> be used as is on a little-endian system.

I understand what you have stated - the application or any user needs to
be aware of the context in which these are used and the side effects of
such conversions.

> 
> I think the only safe way to deal with that is by defining specific types of
> the proper size, e.g.:
> 
>   typedef uint8_t uint48_t[6];
> 
> These are cumbersome and cannot be used like normal integers though. With
> such types, byte-swapping functions become meaningless.
> 
> Since these are supposed to be rather simple functions, I'm not sure
> handling/documenting all this complexity in rte_byteorder.h makes sense.
> 

I have no issues moving these into DPAA-specific code. Hemant added them
to the generic header just in case they would be of use to others.

-
Shreyansh

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-19 11:00         ` Shreyansh Jain
@ 2017-06-19 13:52           ` Wiles, Keith
  2017-06-20 10:43             ` Hemant Agrawal
  0 siblings, 1 reply; 367+ messages in thread
From: Wiles, Keith @ 2017-06-19 13:52 UTC (permalink / raw)
  To: Shreyansh Jain
  Cc: Adrien Mazarguil, Richardson, Bruce, dev, Yigit, Ferruh,
	Hemant Agrawal, Thomas Monjalon


> On Jun 19, 2017, at 6:00 AM, Shreyansh Jain <shreyansh.jain@nxp.com> wrote:
> 
> Hello Adrien,
> 
> On Friday 16 June 2017 04:04 PM, Adrien Mazarguil wrote:
>> Hi Shreyansh,
>> On Fri, Jun 16, 2017 at 09:21:35AM +0000, Shreyansh Jain wrote:
>>> Hi Bruce,
>>> 
>>>> -----Original Message-----
>>>> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
>>>> Sent: Friday, June 16, 2017 2:27 PM
>>>> To: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>> Cc: dev@dpdk.org; ferruh.yigit@intel.com; Hemant Agrawal
>>>> <hemant.agrawal@nxp.com>
>>>> Subject: Re: [dpdk-dev] [PATCH 01/38] eal: add support for 24 40 and 48 bit
>>>> operations
>>>> 
>>>> On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain wrote:
>>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>> 
>>>>> Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit width
>>>>> 
>>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>> ---
>>>>>  .../common/include/generic/rte_byteorder.h         | 78
>>>> ++++++++++++++++++++++
>>>>>  1 file changed, 78 insertions(+)
>>>>> 
>>>> Are these really common enough for inclusion in an generic EAL file?
>>>> Would they be better being driver specific, so that we don't end up with
>>>> lots of extra byte-swap routines for each possible size used by a
>>>> driver.
>>>  Reasoning was to keep all bit/byte swap at a single place and if it is
>>> useful for others.
>>> 
>>> From DPAA perspective, these macro can be anywhere. In case someone else too
>>> has use of this (now or in near-future), probably then we can consider this
>>> in EAL.
>>> Else, if I don't get much responses in a few days, I will shift them to
>>> DPAA driver in next version of this series.
>> While I'm not against exposing exotic byte swapping functions, they are not
>> completely safe and I'm not sure they should be part of public header files
>> on that basis.
>> Problem is their storage size is larger than the number of bytes they deal
>> with, which raises the question: are filler bytes prepended or appended to
>> the converted value? How about input values in non-native order? Answering
>> that is not so easy as it depends on the use case. We actually had a similar
>> issue when defining VXLAN's VNI field for rte_flow, which is 24-bit in
>> network order.
>> Take rte_constant_bswap48() for instance, assuming input value is
>> little-endian, output is supposed to be big-endian. While the shifts are
>> correct, filler bytes are not in the right place for a big-endian system,
>> and the resulting value stored on uint64_t cannot be used as-is. Again, that
>> depends on the use case, it could be correct if the resulting value was to
>> be used as is on a little-endian system.
> 
> I understand what you have stated - the application or any user needs to be context aware about what they are using and the side-effect of such conversions.
> 
>> I think the only safe way to deal with that is by defining specific types of
>> the proper size, e.g.:
>>  typedef uint8_t uint48_t[6];
>> These are cumbersome and cannot be used like normal integers though. With
>> such types, byte-swapping functions become meaningless.
>> Since these are supposed to be rather simple functions, I'm not sure
>> handling/documenting all this complexity in rte_byteorder.h makes sense.
> 
> I have no issues moving these into DPAA specific code. Hemant added them in generic just in case they would be of use to others.
> 
> -
> Shreyansh

These are all static inline functions, so there is no real code increase unless they are used, and having them in one spot is best IMO.


Regards,
Keith

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-19 13:52           ` Wiles, Keith
@ 2017-06-20 10:43             ` Hemant Agrawal
  2017-06-20 14:34               ` Wiles, Keith
  0 siblings, 1 reply; 367+ messages in thread
From: Hemant Agrawal @ 2017-06-20 10:43 UTC (permalink / raw)
  To: Wiles, Keith, Shreyansh Jain
  Cc: Adrien Mazarguil, Richardson, Bruce, dev, Yigit, Ferruh, Thomas Monjalon

On 6/19/2017 7:22 PM, Wiles, Keith wrote:
>
>> On Jun 19, 2017, at 6:00 AM, Shreyansh Jain <shreyansh.jain@nxp.com> wrote:
>>
>> Hello Adrien,
>>
>> On Friday 16 June 2017 04:04 PM, Adrien Mazarguil wrote:
>>> Hi Shreyansh,
>>> On Fri, Jun 16, 2017 at 09:21:35AM +0000, Shreyansh Jain wrote:
>>>> Hi Bruce,
>>>>
>>>>> -----Original Message-----
>>>>> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
>>>>> Sent: Friday, June 16, 2017 2:27 PM
>>>>> To: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>>> Cc: dev@dpdk.org; ferruh.yigit@intel.com; Hemant Agrawal
>>>>> <hemant.agrawal@nxp.com>
>>>>> Subject: Re: [dpdk-dev] [PATCH 01/38] eal: add support for 24 40 and 48 bit
>>>>> operations
>>>>>
>>>>> On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain wrote:
>>>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>>
>>>>>> Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit width
>>>>>>
>>>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>> ---
>>>>>>  .../common/include/generic/rte_byteorder.h         | 78
>>>>> ++++++++++++++++++++++
>>>>>>  1 file changed, 78 insertions(+)
>>>>>>
>>>>> Are these really common enough for inclusion in an generic EAL file?
>>>>> Would they be better being driver specific, so that we don't end up with
>>>>> lots of extra byte-swap routines for each possible size used by a
>>>>> driver.
>>>>  Reasoning was to keep all bit/byte swap at a single place and if it is
>>>> useful for others.
>>>>
>>>> From DPAA perspective, these macro can be anywhere. In case someone else too
>>>> has use of this (now or in near-future), probably then we can consider this
>>>> in EAL.
>>>> Else, if I don't get much responses in a few days, I will shift them to
>>>> DPAA driver in next version of this series.
>>> While I'm not against exposing exotic byte swapping functions, they are not
>>> completely safe and I'm not sure they should be part of public header files
>>> on that basis.
>>> Problem is their storage size is larger than the number of bytes they deal
>>> with, which raises the question: are filler bytes prepended or appended to
>>> the converted value? How about input values in non-native order? Answering
>>> that is not so easy as it depends on the use case. We actually had a similar
>>> issue when defining VXLAN's VNI field for rte_flow, which is 24-bit in
>>> network order.
>>> Take rte_constant_bswap48() for instance, assuming input value is
>>> little-endian, output is supposed to be big-endian. While the shifts are
>>> correct, filler bytes are not in the right place for a big-endian system,
>>> and the resulting value stored on uint64_t cannot be used as-is. Again, that
>>> depends on the use case, it could be correct if the resulting value was to
>>> be used as is on a little-endian system.
>>
>> I understand what you have stated - the application or any user needs to be context aware about what they are using and the side-effect of such conversions.
>>
>>> I think the only safe way to deal with that is by defining specific types of
>>> the proper size, e.g.:
>>>  typedef uint8_t uint48_t[6];
>>> These are cumbersome and cannot be used like normal integers though. With
>>> such types, byte-swapping functions become meaningless.
>>> Since these are supposed to be rather simple functions, I'm not sure
>>> handling/documenting all this complexity in rte_byteorder.h makes sense.
>>
>> I have no issues moving these into DPAA specific code. Hemant added them in generic just in case they would be of use to others.
>>
>> -
>> Shreyansh
>
> These are all static inline functions, so no real code increase unless used and having them in one spot is the best place IMO.
>
>
> Regards,
> Keith

Yes! These are simple static inline functions with no cost unless used.
Many hardware accelerators use 40-bit and 48-bit data, so we thought they
could be useful to others as well.

Currently we are seeing mixed opinions.

In the next revision, we will move them into our driver code. If someone
needs them in the future, we can always bring them to EAL.

Regards,
Hemant

>
>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-20 10:43             ` Hemant Agrawal
@ 2017-06-20 14:34               ` Wiles, Keith
  2017-06-21  8:17                 ` Hemant Agrawal
  0 siblings, 1 reply; 367+ messages in thread
From: Wiles, Keith @ 2017-06-20 14:34 UTC (permalink / raw)
  To: Hemant Agrawal
  Cc: Shreyansh Jain, Adrien Mazarguil, Richardson, Bruce, dev, Yigit,
	Ferruh, Thomas Monjalon


> On Jun 20, 2017, at 5:43 AM, Hemant Agrawal <hemant.agrawal@nxp.com> wrote:
> 
> On 6/19/2017 7:22 PM, Wiles, Keith wrote:
>> 
>>> On Jun 19, 2017, at 6:00 AM, Shreyansh Jain <shreyansh.jain@nxp.com> wrote:
>>> 
>>> Hello Adrien,
>>> 
>>> On Friday 16 June 2017 04:04 PM, Adrien Mazarguil wrote:
>>>> Hi Shreyansh,
>>>> On Fri, Jun 16, 2017 at 09:21:35AM +0000, Shreyansh Jain wrote:
>>>>> Hi Bruce,
>>>>> 
>>>>>> -----Original Message-----
>>>>>> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
>>>>>> Sent: Friday, June 16, 2017 2:27 PM
>>>>>> To: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>>>> Cc: dev@dpdk.org; ferruh.yigit@intel.com; Hemant Agrawal
>>>>>> <hemant.agrawal@nxp.com>
>>>>>> Subject: Re: [dpdk-dev] [PATCH 01/38] eal: add support for 24 40 and 48 bit
>>>>>> operations
>>>>>> 
>>>>>> On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain wrote:
>>>>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>>> 
>>>>>>> Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit width
>>>>>>> 
>>>>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>>> ---
>>>>>>> .../common/include/generic/rte_byteorder.h         | 78
>>>>>> ++++++++++++++++++++++
>>>>>>> 1 file changed, 78 insertions(+)
>>>>>>> 
>>>>>> Are these really common enough for inclusion in an generic EAL file?
>>>>>> Would they be better being driver specific, so that we don't end up with
>>>>>> lots of extra byte-swap routines for each possible size used by a
>>>>>> driver.
>>>>> Reasoning was to keep all bit/byte swap at a single place and if it is
>>>>> useful for others.
>>>>> 
>>>>> From DPAA perspective, these macro can be anywhere. In case someone else too
>>>>> has use of this (now or in near-future), probably then we can consider this
>>>>> in EAL.
>>>>> Else, if I don't get much responses in a few days, I will shift them to
>>>>> DPAA driver in next version of this series.
>>>> While I'm not against exposing exotic byte swapping functions, they are not
>>>> completely safe and I'm not sure they should be part of public header files
>>>> on that basis.
>>>> Problem is their storage size is larger than the number of bytes they deal
>>>> with, which raises the question: are filler bytes prepended or appended to
>>>> the converted value? How about input values in non-native order? Answering
>>>> that is not so easy as it depends on the use case. We actually had a similar
>>>> issue when defining VXLAN's VNI field for rte_flow, which is 24-bit in
>>>> network order.
>>>> Take rte_constant_bswap48() for instance, assuming input value is
>>>> little-endian, output is supposed to be big-endian. While the shifts are
>>>> correct, filler bytes are not in the right place for a big-endian system,
>>>> and the resulting value stored on uint64_t cannot be used as-is. Again, that
>>>> depends on the use case, it could be correct if the resulting value was to
>>>> be used as is on a little-endian system.
>>> 
>>> I understand what you have stated - the application or any user needs to be context aware about what they are using and the side-effect of such conversions.
>>> 
>>>> I think the only safe way to deal with that is by defining specific types of
>>>> the proper size, e.g.:
>>>> typedef uint8_t uint48_t[6];
>>>> These are cumbersome and cannot be used like normal integers though. With
>>>> such types, byte-swapping functions become meaningless.
>>>> Since these are supposed to be rather simple functions, I'm not sure
>>>> handling/documenting all this complexity in rte_byteorder.h makes sense.
>>> 
>>> I have no issues moving these into DPAA specific code. Hemant added them in generic just in case they would be of use to others.
>>> 
>>> -
>>> Shreyansh
>> 
>> These are all static inline functions, so no real code increase unless used and having them in one spot is the best place IMO.
>> 
>> 
>> Regards,
>> Keith
> 
> Yes! these are simple static inline functions with no core unless used.
> Many hardware accelerators usages 40 bit & 48 bits data. we thought, it can be usable by others as well.
> 
> currently we are seeing a mixed opinion.
> 
> In next revision, We will move them within our driver code. If someone need them in future, we can always bring them to eal.

Is there really a big objection to allowing this patch to be accepted?

> 
> Regards,
> Hemant
> 
>> 
>> 
> 
> 

Regards,
Keith

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-20 14:34               ` Wiles, Keith
@ 2017-06-21  8:17                 ` Hemant Agrawal
  2017-06-21  8:32                   ` Bruce Richardson
  2017-06-21  9:02                   ` Adrien Mazarguil
  0 siblings, 2 replies; 367+ messages in thread
From: Hemant Agrawal @ 2017-06-21  8:17 UTC (permalink / raw)
  To: Wiles, Keith
  Cc: Shreyansh Jain, Adrien Mazarguil, Richardson, Bruce, dev, Yigit,
	Ferruh, Thomas Monjalon

On 6/20/2017 8:04 PM, Wiles, Keith wrote:
>
>> On Jun 20, 2017, at 5:43 AM, Hemant Agrawal <hemant.agrawal@nxp.com> wrote:
>>
>> On 6/19/2017 7:22 PM, Wiles, Keith wrote:
>>>
>>>> On Jun 19, 2017, at 6:00 AM, Shreyansh Jain <shreyansh.jain@nxp.com> wrote:
>>>>
>>>> Hello Adrien,
>>>>
>>>> On Friday 16 June 2017 04:04 PM, Adrien Mazarguil wrote:
>>>>> Hi Shreyansh,
>>>>> On Fri, Jun 16, 2017 at 09:21:35AM +0000, Shreyansh Jain wrote:
>>>>>> Hi Bruce,
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
>>>>>>> Sent: Friday, June 16, 2017 2:27 PM
>>>>>>> To: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>>>>> Cc: dev@dpdk.org; ferruh.yigit@intel.com; Hemant Agrawal
>>>>>>> <hemant.agrawal@nxp.com>
>>>>>>> Subject: Re: [dpdk-dev] [PATCH 01/38] eal: add support for 24 40 and 48 bit
>>>>>>> operations
>>>>>>>
>>>>>>> On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain wrote:
>>>>>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>>>>
>>>>>>>> Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit width
>>>>>>>>
>>>>>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>>>> ---
>>>>>>>> .../common/include/generic/rte_byteorder.h         | 78
>>>>>>> ++++++++++++++++++++++
>>>>>>>> 1 file changed, 78 insertions(+)
>>>>>>>>
>>>>>>> Are these really common enough for inclusion in an generic EAL file?
>>>>>>> Would they be better being driver specific, so that we don't end up with
>>>>>>> lots of extra byte-swap routines for each possible size used by a
>>>>>>> driver.
>>>>>> Reasoning was to keep all bit/byte swap at a single place and if it is
>>>>>> useful for others.
>>>>>>
>>>>>> From DPAA perspective, these macro can be anywhere. In case someone else too
>>>>>> has use of this (now or in near-future), probably then we can consider this
>>>>>> in EAL.
>>>>>> Else, if I don't get much responses in a few days, I will shift them to
>>>>>> DPAA driver in next version of this series.
>>>>> While I'm not against exposing exotic byte swapping functions, they are not
>>>>> completely safe and I'm not sure they should be part of public header files
>>>>> on that basis.
>>>>> Problem is their storage size is larger than the number of bytes they deal
>>>>> with, which raises the question: are filler bytes prepended or appended to
>>>>> the converted value? How about input values in non-native order? Answering
>>>>> that is not so easy as it depends on the use case. We actually had a similar
>>>>> issue when defining VXLAN's VNI field for rte_flow, which is 24-bit in
>>>>> network order.
>>>>> Take rte_constant_bswap48() for instance, assuming input value is
>>>>> little-endian, output is supposed to be big-endian. While the shifts are
>>>>> correct, filler bytes are not in the right place for a big-endian system,
>>>>> and the resulting value stored on uint64_t cannot be used as-is. Again, that
>>>>> depends on the use case, it could be correct if the resulting value was to
>>>>> be used as is on a little-endian system.
>>>>
>>>> I understand what you have stated - the application or any user needs to be context aware about what they are using and the side-effect of such conversions.
>>>>
>>>>> I think the only safe way to deal with that is by defining specific types of
>>>>> the proper size, e.g.:
>>>>> typedef uint8_t uint48_t[6];
>>>>> These are cumbersome and cannot be used like normal integers though. With
>>>>> such types, byte-swapping functions become meaningless.
>>>>> Since these are supposed to be rather simple functions, I'm not sure
>>>>> handling/documenting all this complexity in rte_byteorder.h makes sense.
>>>>
>>>> I have no issues moving these into DPAA specific code. Hemant added them in generic just in case they would be of use to others.
>>>>
>>>> -
>>>> Shreyansh
>>>
>>> These are all static inline functions, so no real code increase unless used and having them in one spot is the best place IMO.
>>>
>>>
>>> Regards,
>>> Keith
>>
>> Yes! these are simple static inline functions with no core unless used.
>> Many hardware accelerators usages 40 bit & 48 bits data. we thought, it can be usable by others as well.
>>
>> currently we are seeing a mixed opinion.
>>
>> In next revision, We will move them within our driver code. If someone need them in future, we can always bring them to eal.
>
> Is there really a big objection to allowing this patch to be accepted?

Bruce, Adrien
	Any opinion?

Regards,
Hemant
>
>>
>> Regards,
>> Hemant
>>
>>>
>>>
>>
>>
>
> Regards,
> Keith
>
>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-21  8:17                 ` Hemant Agrawal
@ 2017-06-21  8:32                   ` Bruce Richardson
  2017-06-21  9:02                   ` Adrien Mazarguil
  1 sibling, 0 replies; 367+ messages in thread
From: Bruce Richardson @ 2017-06-21  8:32 UTC (permalink / raw)
  To: Hemant Agrawal
  Cc: Wiles, Keith, Shreyansh Jain, Adrien Mazarguil, dev, Yigit,
	Ferruh, Thomas Monjalon

On Wed, Jun 21, 2017 at 01:47:52PM +0530, Hemant Agrawal wrote:
> On 6/20/2017 8:04 PM, Wiles, Keith wrote:
> > 
> > > On Jun 20, 2017, at 5:43 AM, Hemant Agrawal
> > > <hemant.agrawal@nxp.com> wrote:
> > > 
> > > On 6/19/2017 7:22 PM, Wiles, Keith wrote:
> > > > 
> > > > > On Jun 19, 2017, at 6:00 AM, Shreyansh Jain
> > > > > <shreyansh.jain@nxp.com> wrote:
> > > > > 
> > > > > Hello Adrien,
> > > > > 
> > > > > On Friday 16 June 2017 04:04 PM, Adrien Mazarguil wrote:
> > > > > > Hi Shreyansh, On Fri, Jun 16, 2017 at 09:21:35AM +0000,
> > > > > > Shreyansh Jain wrote:
> > > > > > > Hi Bruce,
> > > > > > > 
> > > > > > > > -----Original Message----- From: Bruce Richardson
> > > > > > > > [mailto:bruce.richardson@intel.com] Sent: Friday, June
> > > > > > > > 16, 2017 2:27 PM To: Shreyansh Jain
> > > > > > > > <shreyansh.jain@nxp.com> Cc: dev@dpdk.org;
> > > > > > > > ferruh.yigit@intel.com; Hemant Agrawal
> > > > > > > > <hemant.agrawal@nxp.com> Subject: Re: [dpdk-dev] [PATCH
> > > > > > > > 01/38] eal: add support for 24 40 and 48 bit operations
> > > > > > > > 
> > > > > > > > On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain
> > > > > > > > wrote:
> > > > > > > > > From: Hemant Agrawal <hemant.agrawal@nxp.com>
> > > > > > > > > 
> > > > > > > > > Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit
> > > > > > > > > width
> > > > > > > > > 
> > > > > > > > > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > > > > > > > > --- .../common/include/generic/rte_byteorder.h
> > > > > > > > > | 78
> > > > > > > > ++++++++++++++++++++++
> > > > > > > > > 1 file changed, 78 insertions(+)
> > > > > > > > > 
> > > > > > > > Are these really common enough for inclusion in an
> > > > > > > > generic EAL file?  Would they be better being driver
> > > > > > > > specific, so that we don't end up with lots of extra
> > > > > > > > byte-swap routines for each possible size used by a
> > > > > > > > driver.
> > > > > > > Reasoning was to keep all bit/byte swap at a single place
> > > > > > > and if it is useful for others.
> > > > > > > 
> > > > > > > From DPAA perspective, these macro can be anywhere. In
> > > > > > > case someone else too has use of this (now or in
> > > > > > > near-future), probably then we can consider this in EAL.
> > > > > > > Else, if I don't get much responses in a few days, I will
> > > > > > > shift them to DPAA driver in next version of this series.
> > > > > > While I'm not against exposing exotic byte swapping
> > > > > > functions, they are not completely safe and I'm not sure
> > > > > > they should be part of public header files on that basis.
> > > > > > Problem is their storage size is larger than the number of
> > > > > > bytes they deal with, which raises the question: are filler
> > > > > > bytes prepended or appended to the converted value? How
> > > > > > about input values in non-native order? Answering that is
> > > > > > not so easy as it depends on the use case. We actually had a
> > > > > > similar issue when defining VXLAN's VNI field for rte_flow,
> > > > > > which is 24-bit in network order.  Take
> > > > > > rte_constant_bswap48() for instance, assuming input value is
> > > > > > little-endian, output is supposed to be big-endian. While
> > > > > > the shifts are correct, filler bytes are not in the right
> > > > > > place for a big-endian system, and the resulting value
> > > > > > stored on uint64_t cannot be used as-is. Again, that depends
> > > > > > on the use case, it could be correct if the resulting value
> > > > > > was to be used as is on a little-endian system.
> > > > > 
> > > > > I understand what you have stated - the application or any
> > > > > user needs to be context aware about what they are using and
> > > > > the side-effect of such conversions.
> > > > > 
> > > > > > I think the only safe way to deal with that is by defining
> > > > > > specific types of the proper size, e.g.: typedef uint8_t
> > > > > > uint48_t[6]; These are cumbersome and cannot be used like
> > > > > > normal integers though. With such types, byte-swapping
> > > > > > functions become meaningless.  Since these are supposed to
> > > > > > be rather simple functions, I'm not sure
> > > > > > handling/documenting all this complexity in rte_byteorder.h
> > > > > > makes sense.
> > > > > 
> > > > > I have no issues moving these into DPAA specific code. Hemant
> > > > > added them in generic just in case they would be of use to
> > > > > others.
> > > > > 
> > > > > - Shreyansh
> > > > 
> > > > These are all static inline functions, so no real code increase
> > > > unless used and having them in one spot is the best place IMO.
> > > > 
> > > > 
> > > > Regards, Keith
> > > 
> > > Yes! these are simple static inline functions with no core unless
> > > used.  Many hardware accelerators usages 40 bit & 48 bits data. we
> > > thought, it can be usable by others as well.
> > > 
> > > currently we are seeing a mixed opinion.
> > > 
> > > In next revision, We will move them within our driver code. If
> > > someone need them in future, we can always bring them to eal.
> > 
> > Is there really a big objection to allowing this patch to be
> > accepted?
> 
> Bruce, Adrien Any opinion?
> 
I don't have strong feelings either way. However, if there is only one
user of these functions right now, I'd normally wait till there is a
second user before moving them to a common area. If others feel
differently, I'm ok with having them as common right now.

/Bruce

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-21  8:17                 ` Hemant Agrawal
  2017-06-21  8:32                   ` Bruce Richardson
@ 2017-06-21  9:02                   ` Adrien Mazarguil
  1 sibling, 0 replies; 367+ messages in thread
From: Adrien Mazarguil @ 2017-06-21  9:02 UTC (permalink / raw)
  To: Hemant Agrawal
  Cc: Wiles, Keith, Shreyansh Jain, Richardson, Bruce, dev, Yigit,
	Ferruh, Thomas Monjalon

On Wed, Jun 21, 2017 at 01:47:52PM +0530, Hemant Agrawal wrote:
> On 6/20/2017 8:04 PM, Wiles, Keith wrote:
> >
> >>On Jun 20, 2017, at 5:43 AM, Hemant Agrawal <hemant.agrawal@nxp.com> wrote:
> >>
> >>On 6/19/2017 7:22 PM, Wiles, Keith wrote:
> >>>
> >>>>On Jun 19, 2017, at 6:00 AM, Shreyansh Jain <shreyansh.jain@nxp.com> wrote:
> >>>>
> >>>>Hello Adrien,
> >>>>
> >>>>On Friday 16 June 2017 04:04 PM, Adrien Mazarguil wrote:
> >>>>>Hi Shreyansh,
> >>>>>On Fri, Jun 16, 2017 at 09:21:35AM +0000, Shreyansh Jain wrote:
> >>>>>>Hi Bruce,
> >>>>>>
> >>>>>>>-----Original Message-----
> >>>>>>>From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> >>>>>>>Sent: Friday, June 16, 2017 2:27 PM
> >>>>>>>To: Shreyansh Jain <shreyansh.jain@nxp.com>
> >>>>>>>Cc: dev@dpdk.org; ferruh.yigit@intel.com; Hemant Agrawal
> >>>>>>><hemant.agrawal@nxp.com>
> >>>>>>>Subject: Re: [dpdk-dev] [PATCH 01/38] eal: add support for 24 40 and 48 bit
> >>>>>>>operations
> >>>>>>>
> >>>>>>>On Fri, Jun 16, 2017 at 11:10:31AM +0530, Shreyansh Jain wrote:
> >>>>>>>>From: Hemant Agrawal <hemant.agrawal@nxp.com>
> >>>>>>>>
> >>>>>>>>Bit Swap and LE<=>BE conversions for 24, 40 and 48 bit width
> >>>>>>>>
> >>>>>>>>Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> >>>>>>>>---
> >>>>>>>>.../common/include/generic/rte_byteorder.h         | 78
> >>>>>>>++++++++++++++++++++++
> >>>>>>>>1 file changed, 78 insertions(+)
> >>>>>>>>
> >>>>>>>Are these really common enough for inclusion in an generic EAL file?
> >>>>>>>Would they be better being driver specific, so that we don't end up with
> >>>>>>>lots of extra byte-swap routines for each possible size used by a
> >>>>>>>driver.
> >>>>>>Reasoning was to keep all bit/byte swap at a single place and if it is
> >>>>>>useful for others.
> >>>>>>
> >>>>>>From DPAA perspective, these macro can be anywhere. In case someone else too
> >>>>>>has use of this (now or in near-future), probably then we can consider this
> >>>>>>in EAL.
> >>>>>>Else, if I don't get much responses in a few days, I will shift them to
> >>>>>>DPAA driver in next version of this series.
> >>>>>While I'm not against exposing exotic byte swapping functions, they are not
> >>>>>completely safe and I'm not sure they should be part of public header files
> >>>>>on that basis.
> >>>>>Problem is their storage size is larger than the number of bytes they deal
> >>>>>with, which raises the question: are filler bytes prepended or appended to
> >>>>>the converted value? How about input values in non-native order? Answering
> >>>>>that is not so easy as it depends on the use case. We actually had a similar
> >>>>>issue when defining VXLAN's VNI field for rte_flow, which is 24-bit in
> >>>>>network order.
> >>>>>Take rte_constant_bswap48() for instance, assuming input value is
> >>>>>little-endian, output is supposed to be big-endian. While the shifts are
> >>>>>correct, filler bytes are not in the right place for a big-endian system,
> >>>>>and the resulting value stored on uint64_t cannot be used as-is. Again, that
> >>>>>depends on the use case, it could be correct if the resulting value was to
> >>>>>be used as is on a little-endian system.
> >>>>
> >>>>I understand what you have stated - the application or any user needs to be context aware about what they are using and the side-effect of such conversions.
> >>>>
> >>>>>I think the only safe way to deal with that is by defining specific types of
> >>>>>the proper size, e.g.:
> >>>>>typedef uint8_t uint48_t[6];
> >>>>>These are cumbersome and cannot be used like normal integers though. With
> >>>>>such types, byte-swapping functions become meaningless.
> >>>>>Since these are supposed to be rather simple functions, I'm not sure
> >>>>>handling/documenting all this complexity in rte_byteorder.h makes sense.
> >>>>
> >>>>I have no issues moving these into DPAA specific code. Hemant added them in generic just in case they would be of use to others.
> >>>>
> >>>>-
> >>>>Shreyansh
> >>>
> >>>These are all static inline functions, so no real code increase unless used and having them in one spot is the best place IMO.
> >>>
> >>>
> >>>Regards,
> >>>Keith
> >>
> >>Yes! these are simple static inline functions with no core unless used.
> >>Many hardware accelerators usages 40 bit & 48 bits data. we thought, it can be usable by others as well.
> >>
> >>currently we are seeing a mixed opinion.
> >>
> >>In next revision, We will move them within our driver code. If someone need them in future, we can always bring them to eal.
> >
> >Is there really a big objection to allowing this patch to be accepted?
> 
> Bruce, Adrien
> 	Any opinion?

Well, I'm not against adding them, however as described in my previous
reply, the fact input/output values are not necessarily aligned correctly
due to their type width must be documented then to avoid issues later (a
couple of ascii art diagrams should clear that up).

To summarize, this is the same issue as storing the result of a 32-bit
conversion macro as a 64-bit value:

 uint64_t be64 = htonl(0x12345678);

On little endian systems, be64 looks like:

 12 34 56 78 00 00 00 00

While on big endian systems:

 00 00 00 00 12 34 56 78

Therefore if this value was written as is to a 64-bit field of a packet sent
over the network, it would correctly work only if the host CPU was big
endian. It's not really "be64" but "be32 stored inside uint64_t and padded
in host CPU order", which could be error prone if left undocumented.

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 22/38] net/dpaa: add NXP DPAA PMD driver skeleton
  2017-06-16  5:40 ` [PATCH 22/38] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
@ 2017-06-28 15:41   ` Ferruh Yigit
  2017-06-29 14:29     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:41 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
> A skeleton which would be called after bus device scan. It currently
> fails to identify the device>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> +
> +/* Initialise a network interface */
> +static int dpaa_eth_dev_init(struct rte_eth_dev *eth_dev __rte_unused)

__rte_unused can be removed

<...>

> +
> +static int
> +rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
> +			   struct rte_dpaa_device *dpaa_dev)
> +{
> +	int diag;
> +	int ret;
> +	struct rte_eth_dev *eth_dev;
> +	char ethdev_name[RTE_ETH_NAME_MAX_LEN];
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	if (!is_global_init) {
> +		/* One time load of Qman/Bman drivers */
> +		ret = qman_global_init();
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "QMAN initialization failed: %d",
> +				    ret);
> +			return ret;
> +		}
> +		ret = bman_global_init();
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "BMAN initialization failed: %d",
> +				    ret);
> +			return ret;
> +		}
> +
> +		is_global_init = 1;
> +	}
> +
> +	sprintf(ethdev_name, "%s", dpaa_dev->name);

snprintf can be preferred
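
e.g. (sketch):

 snprintf(ethdev_name, sizeof(ethdev_name), "%s", dpaa_dev->name);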

> +
> +	ret = rte_dpaa_portal_init((void *)1);
> +	if (ret) {
> +		PMD_DRV_LOG(ERR, "Unable to initialize portal");
> +		return ret;
> +	}
> +
> +	eth_dev = rte_eth_dev_allocate(ethdev_name);

If this is done without an RTE_PROC_PRIMARY check, this will cause the
secondary process to memset all device data.

I am adding this because of the check below; if multi-process support is
intended, this should also be protected.
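
A sketch of the usual pattern (assuming the internal
rte_eth_dev_attach_secondary() helper is available in this tree):

 if (rte_eal_process_type() == RTE_PROC_PRIMARY)
 	eth_dev = rte_eth_dev_allocate(ethdev_name);
 else
 	eth_dev = rte_eth_dev_attach_secondary(ethdev_name);
 if (eth_dev == NULL)
 	return -ENOMEM;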

> +	if (eth_dev == NULL)
> +		return -ENOMEM;
> +
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +		eth_dev->data->dev_private = rte_zmalloc(
> +						"ethdev private structure",
> +						sizeof(struct dpaa_if),
> +						RTE_CACHE_LINE_SIZE);
> +		if (!eth_dev->data->dev_private) {
> +			PMD_INIT_LOG(CRIT, "Cannot allocate memzone for"
> +				     " private port data\n");
> +			rte_eth_dev_release_port(eth_dev);
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	eth_dev->device = &dpaa_dev->device;
> +	dpaa_dev->eth_dev = eth_dev;

I thought "struct rte_dpaa_device" is bus device, like "struct
rte_pci_device", if so why it has link to the eth_dev?

> +	eth_dev->data->rx_mbuf_alloc_failed = 0;

not required; data is already memset via rte_eth_dev_allocate()

> +
> +	/* Invoke PMD device initialization function */
> +	diag = dpaa_eth_dev_init(eth_dev);
> +	if (diag) {
> +		PMD_DRV_LOG(ERR, "Eth dev initialization failed: %d", ret);
> +		return diag;
> +	}
> +
> +	PMD_DRV_LOG(DEBUG, "Eth dev initialized: %d\n", diag);
> +
> +	return 0;
> +}
> +
> +static int
> +rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
> +{
> +	struct rte_eth_dev *eth_dev;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	eth_dev = dpaa_dev->eth_dev;

can be:
eth_dev = rte_eth_dev_allocated(dpaa_dev->device.name);

> +
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> +		rte_free(eth_dev->data->dev_private);
> +

no PMD uninit()?

> +	rte_eth_dev_release_port(eth_dev);
> +
> +	return 0;
> +}

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup
  2017-06-16  5:40 ` [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
@ 2017-06-28 15:45   ` Ferruh Yigit
  2017-06-29 14:55     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:45 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> ---
>  doc/guides/nics/features/dpaa.ini |   1 +
>  drivers/net/dpaa/Makefile         |   4 +
>  drivers/net/dpaa/dpaa_ethdev.c    | 279 ++++++++++++++++++++++++++++++++-
>  drivers/net/dpaa/dpaa_ethdev.h    |   6 +
>  drivers/net/dpaa/dpaa_rxtx.c      | 313 ++++++++++++++++++++++++++++++++++++++
>  drivers/net/dpaa/dpaa_rxtx.h      |  61 ++++++++

This patch adds initial Rx/Tx support, as well as the Rx/Tx queue setup
mentioned in the patch subject.

I would be for splitting the patch, but even if it is not split, I would
suggest updating the patch subject and commit log to cover the patch content.

<...>
> --- a/doc/guides/nics/features/dpaa.ini
> +++ b/doc/guides/nics/features/dpaa.ini
> @@ -4,5 +4,6 @@
>  ; Refer to default.ini for the full list of available PMD features.
>  ;
>  [Features]
> +Queue start/stop     = Y

This requires the following dev_ops to be implemented:
rx_queue_start, rx_queue_stop, tx_queue_start, tx_queue_stop

>  ARMv8                = Y
>  Usage doc            = Y

<...>

> +
> +	/* Initialize Rx FQ's */
> +	if (getenv("DPAA_NUM_RX_QUEUES"))

I think this was discussed before: should a PMD get config options from an
environment variable? Although this works, I am for a more explicit
method, like dev_args.
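
A rough sketch of the dev_args alternative using the existing rte_kvargs
API (the key name and wiring here are illustrative):

 static const char *valid_keys[] = { "num_rx_queues", NULL };

 static int
 dpaa_get_uint(const char *key __rte_unused, const char *value, void *arg)
 {
 	*(unsigned int *)arg = (unsigned int)strtoul(value, NULL, 0);
 	return 0;
 }

 /* at probe time, with e.g. devargs "...,num_rx_queues=4" */
 struct rte_kvargs *kvlist = rte_kvargs_parse(devargs->args, valid_keys);
 if (kvlist != NULL) {
 	rte_kvargs_process(kvlist, "num_rx_queues", dpaa_get_uint,
 			   &num_rx_fqs);
 	rte_kvargs_free(kvlist);
 }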

<...>
> +
> +	dpaa_intf->rx_queues = rte_zmalloc(NULL,
> +		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);

A NULL check perhaps?

And if multi-process support is desired, this should be done only for
the primary process.

<...>
> +	/* Allocate memory for storing MAC addresses */
> +	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
> +		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
> +	if (eth_dev->data->mac_addrs == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to "
> +						"store MAC addresses",
> +				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);

Anything to clean up before exit?

> +		return -ENOMEM;
> +	}

<...>
> +uint16_t dpaa_eth_queue_rx(void *q,
> +			   struct rte_mbuf **bufs,
> +			   uint16_t nb_bufs)
> +{
> +	struct qman_fq *fq = q;
> +	struct qm_dqrr_entry *dq;
> +	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
> +	int ret;
> +
> +	ret = rte_dpaa_portal_init((void *)0);
> +	if (ret) {
> +		PMD_DRV_LOG(ERR, "Failure in affining portal");
> +		return 0;
> +	}

This is the rx_pkt_burst function, right? Is it OK to call
rte_dpaa_portal_init() in the Rx data path?
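
One way to keep the call out of the per-burst fast path would be a
per-lcore guard, e.g. (sketch, names illustrative):

 static __thread int dpaa_portal_affined;

 if (unlikely(!dpaa_portal_affined)) {
 	if (rte_dpaa_portal_init((void *)0))
 		return 0;
 	dpaa_portal_affined = 1;
 }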

<...>
> +	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
> +	if (!buf)
> +		goto out;

goto is not required here.

> +
> +out:
> +	return (void *)buf;
> +}
> +

<...>
> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
> +			      struct rte_mbuf **bufs __rte_unused,
> +		uint16_t nb_bufs __rte_unused)
> +{
> +	PMD_TX_LOG(DEBUG, "Drop all packets");

Should the mbufs be freed here?

> +
> +	/* Drop all incoming packets. No need to free packets here
> +	 * because the rte_eth f/w frees up the packets through tx_buffer
> +	 * callback in case this functions returns count less than nb_bufs
> +	 */
> +	return 0;
> +}

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 25/38] net/dpaa: add support for MTU update
  2017-06-16  5:40 ` [PATCH 25/38] net/dpaa: add support for MTU update Shreyansh Jain
@ 2017-06-28 15:45   ` Ferruh Yigit
  2017-06-29 14:56     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:45 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

>  static int
> +dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> +{
> +	struct dpaa_if *dpaa_intf = dev->data->dev_private;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	if (mtu < ETHER_MIN_MTU)
> +		return -EINVAL;
> +
> +	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
> +
> +	if (mtu > ETHER_MAX_LEN)
> +		return -1;

Is it OK to have this check after fman_if_set_maxfrm()?
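
i.e. something along the lines of (sketch):

 if (mtu < ETHER_MIN_MTU || mtu > ETHER_MAX_LEN)
 	return -EINVAL;

 fman_if_set_maxfrm(dpaa_intf->fif, mtu);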

> +	dev->data->dev_conf.rxmode.jumbo_frame = 0;
> +
> +	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;

I think this only makes sense when jumbo_frame is 1, although it doesn't
hurt to set it...

> +	return 0;
> +}
<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 27/38] net/dpaa: add support for link status update
  2017-06-16  5:40 ` [PATCH 27/38] net/dpaa: add support for link status update Shreyansh Jain
@ 2017-06-28 15:46   ` Ferruh Yigit
  2017-06-29 14:57     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:46 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> --- a/doc/guides/nics/features/dpaa.ini
> +++ b/doc/guides/nics/features/dpaa.ini
> @@ -4,6 +4,8 @@
>  ; Refer to default.ini for the full list of available PMD features.
>  ;
>  [Features]
> +Speed capabilities   = P

The "Speed capabilities" feature is not about "link->link_speed"; it means
providing "dev_info->speed_capa" (in dpaa_eth_dev_info()).

> +Link status          = Y
>  Queue start/stop     = Y
>  Jumbo frame          = Y
>  MTU update           = Y
<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 30/38] net/dpaa: add support for multicast toggle
  2017-06-16  5:41 ` [PATCH 30/38] net/dpaa: add support for multicast toggle Shreyansh Jain
@ 2017-06-28 15:47   ` Ferruh Yigit
  2017-06-29 14:58     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:47 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
> index a6984a4..80dd3ca 100644
> --- a/doc/guides/nics/features/dpaa.ini
> +++ b/doc/guides/nics/features/dpaa.ini
> @@ -10,5 +10,7 @@ Queue start/stop     = Y
>  Jumbo frame          = Y
>  MTU update           = Y
>  Promiscuous mode     = Y
> +Allmulticast mode    = Y
> +Unicast MAC filter   = Y

"Unicast MAC filter" means implementing "mac_addr_set, mac_addr_add,
mac_addr_remove" dev_ops

>  ARMv8                = Y
>  Usage doc            = Y

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 33/38] net/dpaa: add support for flow control
  2017-06-16  5:41 ` [PATCH 33/38] net/dpaa: add support for flow control Shreyansh Jain
@ 2017-06-28 15:47   ` Ferruh Yigit
  2017-06-30  9:37     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:47 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

>  static int
> +dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
> +		   struct rte_eth_fc_conf *fc_conf)
> +{
> +	struct dpaa_if *dpaa_intf = dev->data->dev_private;
> +	struct rte_eth_fc_conf *net_fc;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	if (!(dpaa_intf->fc_conf)) {
> +		dpaa_intf->fc_conf = rte_zmalloc(NULL,
> +			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);

Should this be freed in rte_dpaa_remove()?

> +		if (!dpaa_intf->fc_conf) {
> +			PMD_DRV_LOG(ERR, "unable to save flow control info");
> +			return -ENOMEM;
> +		}
> +	}
> +	net_fc = dpaa_intf->fc_conf;
> +
<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 34/38] net/dpaa: add support for hashed RSS
  2017-06-16  5:41 ` [PATCH 34/38] net/dpaa: add support for hashed RSS Shreyansh Jain
@ 2017-06-28 15:48   ` Ferruh Yigit
  2017-06-30 10:31     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:48 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

Just to confirm:

Is no HW configuration required to enable RSS?
Does the HW update mbuf->hash.rss automatically, without driver involvement?

<...>

>  Promiscuous mode     = Y
>  Allmulticast mode    = Y
>  Unicast MAC filter   = Y
> +RSS hash             = Y
>  Flow control         = Y
>  Basic stats          = Y
>  ARMv8                = Y

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 35/38] net/dpaa: add support for packet type parsing
  2017-06-16  5:41 ` [PATCH 35/38] net/dpaa: add support for packet type parsing Shreyansh Jain
@ 2017-06-28 15:50   ` Ferruh Yigit
  2017-06-30 11:40     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:50 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> +static const uint32_t *
> +dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
> +{
> +	static const uint32_t ptypes[] = {
> +		/*todo -= add more types */
> +		RTE_PTYPE_L2_ETHER,
> +		RTE_PTYPE_L3_IPV4,
> +		RTE_PTYPE_L3_IPV4_EXT,
> +		RTE_PTYPE_L3_IPV6,
> +		RTE_PTYPE_L3_IPV6_EXT,
> +		RTE_PTYPE_L4_TCP,
> +		RTE_PTYPE_L4_UDP,
> +		RTE_PTYPE_L4_SCTP
> +	};
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	if (dev->rx_pkt_burst == dpaa_eth_queue_rx)

Isn't this the only Rx function that exists? Is this check required?

> +		return ptypes;
> +	return NULL;
> +}
>  
>  static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
>  {
> @@ -159,6 +180,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
>  	dev_info->max_vfs = 0;
>  	dev_info->max_vmdq_pools = ETH_16_POOLS;
>  	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
> +	dev_info->rx_offload_capa =
> +		(DEV_RX_OFFLOAD_IPV4_CKSUM |
> +		DEV_RX_OFFLOAD_UDP_CKSUM  |
> +		DEV_RX_OFFLOAD_TCP_CKSUM);

I guess this patch also enables L3/L4 Rx checksum offload; can you please
update the commit log?

And should ol_flags be set with one of PKT_RX_IP_CKSUM_BAD,
PKT_RX_IP_CKSUM_GOOD or PKT_RX_IP_CKSUM_NONE? Also with the L4 versions of these?

<...>

> +
> +	m->tx_offload = annot->parse.ip_off[0];
> +	m->tx_offload |= (annot->parse.l4_off - annot->parse.ip_off[0])
> +					<< DPAA_PKT_L3_LEN_SHIFT;

This is a received mbuf, right? Is it required to set the tx_offload field?

> +
> +	/* Set the hash values */
> +	m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
> +	m->ol_flags = PKT_RX_RSS_HASH;
> +
> +	/* Check if Vlan is present */
> +	if (prs & DPAA_PARSE_VLAN_MASK)
> +		m->ol_flags |= PKT_RX_VLAN_PKT;

I guess PKT_RX_VLAN_STRIPPED is the preferred flag now.

<...>

* Re: [PATCH 36/38] net/dpaa: add support for checksum offload
  2017-06-16  5:41 ` [PATCH 36/38] net/dpaa: add support for checksum offload Shreyansh Jain
@ 2017-06-28 15:50   ` Ferruh Yigit
  2017-07-04 14:48     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:50 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> @@ -363,6 +439,18 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>  						}
>  						rte_pktmbuf_free(mbuf);
>  					}
> +					if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
> +						if (mbuf->data_off < DEFAULT_TX_ICEOF +
> +							sizeof(struct dpaa_eth_parse_results_t)) {
> +							PMD_DRV_LOG(DEBUG, "Checksum offload Err: "
> +								"Not enough Headroom "
> +								"space for correct Checksum offload."
> +								"So Calculating checksum in Software.");
> +							dpaa_checksum(mbuf);
> +						} else
> +							dpaa_checksum_offload(mbuf, &fd_arr[loop],
> +								mbuf->buf_addr);
> +					}

There is a tx_pkt_prepare() callback.
Does it make sense to move these calculations to that function?

>  				} else {
>  					PMD_DRV_LOG(DEBUG, "Number of Segments not supported");
>  					/* Set frames_to_send & nb_bufs so that

<...>

* Re: [PATCH 38/38] net/dpaa: add packet dump for debugging
  2017-06-16  5:41 ` [PATCH 38/38] net/dpaa: add packet dump for debugging Shreyansh Jain
@ 2017-06-28 15:51   ` Ferruh Yigit
  2017-06-30 11:47     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:51 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

Is there any driver documentation? I haven't seen any in the net/dpaa patches.

<...>
> +CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY=n
> +CONFIG_RTE_LIBRTE_DPAA_CHECKING=n

This config option is not used at all, can be removed.

<...>
> +#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
> +	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
> +		DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);

Out of curiosity, what exactly is done here? Is this a special queue, and what
does it do? It would be useful to document this more in the commit log.

> +	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
> +	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
> +		DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
> +	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
> +#endif
> +
<...>

* Re: [PATCH 18/38] doc: add NXP DPAA PMD documentation
  2017-06-16  5:40 ` [PATCH 18/38] doc: add NXP DPAA PMD documentation Shreyansh Jain
@ 2017-06-28 15:51   ` Ferruh Yigit
  2017-06-29 14:17     ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-28 15:51 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> ---
>  MAINTAINERS                       |   2 +
>  doc/guides/nics/dpaa.rst          | 360 ++++++++++++++++++++++++++++++++++++++

As a reminder, you may need to send a web page patch to add dpaa as
supported nic:
http://dpdk.org/doc/nics

>  doc/guides/nics/features/dpaa.ini |   8 +
>  doc/guides/nics/index.rst         |   1 +

* Re: [PATCH 18/38] doc: add NXP DPAA PMD documentation
  2017-06-28 15:51   ` Ferruh Yigit
@ 2017-06-29 14:17     ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-29 14:17 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Wednesday 28 June 2017 09:21 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>> ---
>>   MAINTAINERS                       |   2 +
>>   doc/guides/nics/dpaa.rst          | 360 ++++++++++++++++++++++++++++++++++++++
> 
> As a reminder, you may need to send a web page patch to add dpaa as
> supported nic:
> http://dpdk.org/doc/nics

Yes, thanks for reminding me.
Once the patches move ahead (next etc.), I will send that as well.

> 
>>   doc/guides/nics/features/dpaa.ini |   8 +
>>   doc/guides/nics/index.rst         |   1 +
> 
> 

* Re: [PATCH 22/38] net/dpaa: add NXP DPAA PMD driver skeleton
  2017-06-28 15:41   ` Ferruh Yigit
@ 2017-06-29 14:29     ` Shreyansh Jain
  2017-07-02  6:47       ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-29 14:29 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

Hello Ferruh,

I was almost wondering if this series had been forgotten. Thanks for
the comprehensive review.

My comments inline (and in some other mails):

On Wednesday 28 June 2017 09:11 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>> A skeleton which would be called after bus device scan. It currently
>> fails to identify the device.
>>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>> +
>> +/* Initialise a network interface */
>> +static int dpaa_eth_dev_init(struct rte_eth_dev *eth_dev __rte_unused)
> 
> __rte_unused can be removed

I will correct this.


> 
> <...>
> 
>> +
>> +static int
>> +rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
>> +			   struct rte_dpaa_device *dpaa_dev)
>> +{
>> +	int diag;
>> +	int ret;
>> +	struct rte_eth_dev *eth_dev;
>> +	char ethdev_name[RTE_ETH_NAME_MAX_LEN];
>> +
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	if (!is_global_init) {
>> +		/* One time load of Qman/Bman drivers */
>> +		ret = qman_global_init();
>> +		if (ret) {
>> +			PMD_DRV_LOG(ERR, "QMAN initialization failed: %d",
>> +				    ret);
>> +			return ret;
>> +		}
>> +		ret = bman_global_init();
>> +		if (ret) {
>> +			PMD_DRV_LOG(ERR, "BMAN initialization failed: %d",
>> +				    ret);
>> +			return ret;
>> +		}
>> +
>> +		is_global_init = 1;
>> +	}
>> +
>> +	sprintf(ethdev_name, "%s", dpaa_dev->name);
> 
> snprintf can be preferred

Ok. Will fix this.
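
That is, something like (using the ethdev_name buffer from above):

	snprintf(ethdev_name, sizeof(ethdev_name), "%s", dpaa_dev->name);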

> 
>> +
>> +	ret = rte_dpaa_portal_init((void *)1);
>> +	if (ret) {
>> +		PMD_DRV_LOG(ERR, "Unable to initialize portal");
>> +		return ret;
>> +	}
>> +
>> +	eth_dev = rte_eth_dev_allocate(ethdev_name);
> 
> If this is done without an RTE_PROC_PRIMARY check, this will cause the
> secondary process to memset all device data.

Agree. I will correct this.

> 
> I am adding this because of the check below: if multi-process support is
> intended, this should also be protected.
> 
>> +	if (eth_dev == NULL)
>> +		return -ENOMEM;
>> +
>> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>> +		eth_dev->data->dev_private = rte_zmalloc(
>> +						"ethdev private structure",
>> +						sizeof(struct dpaa_if),
>> +						RTE_CACHE_LINE_SIZE);
>> +		if (!eth_dev->data->dev_private) {
>> +			PMD_INIT_LOG(CRIT, "Cannot allocate memzone for"
>> +				     " private port data\n");
>> +			rte_eth_dev_release_port(eth_dev);
>> +			return -ENOMEM;
>> +		}
>> +	}
>> +
>> +	eth_dev->device = &dpaa_dev->device;
>> +	dpaa_dev->eth_dev = eth_dev;
> 
> I thought "struct rte_dpaa_device" is bus device, like "struct
> rte_pci_device", if so why it has link to the eth_dev?

Yes, rte_dpaa_device ~ rte_pci_device.
This is used to extract the eth_dev back while de-initializing
the device.

  driver->remove = rte_dpaa_remove(rte_dpaa_device)
  // fetch rte_eth_dev from rte_dpaa_device
   `-> .eth_dev_stop(eth_dev)

So, essentially reusing the internal eth_ops for cleaning up the
device.

> 
>> +	eth_dev->data->rx_mbuf_alloc_failed = 0;
> 
> not required; data is already memset via rte_eth_dev_allocate()

Ok. I will remove this.

> 
>> +
>> +	/* Invoke PMD device initialization function */
>> +	diag = dpaa_eth_dev_init(eth_dev);
>> +	if (diag) {
>> +		PMD_DRV_LOG(ERR, "Eth dev initialization failed: %d", ret);
>> +		return diag;
>> +	}
>> +
>> +	PMD_DRV_LOG(DEBUG, "Eth dev initialized: %d\n", diag);
>> +
>> +	return 0;
>> +}
>> +
>> +static int
>> +rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
>> +{
>> +	struct rte_eth_dev *eth_dev;
>> +
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	eth_dev = dpaa_dev->eth_dev;
> 
> can be:
> eth_dev = rte_eth_dev_allocated(dpaa_dev->device.name);

Hmm, OK, now I understand why you are inquiring about
eth_dev being assigned in rte_dpaa_device. I will take another
look at this and fix it if required.

> 
>> +
>> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>> +		rte_free(eth_dev->data->dev_private);
>> +
> 
> no pmd uninit() ?

I will fix this. There is an internal commit for this that we made very
recently (a miss in the previous series).

> 
>> +	rte_eth_dev_release_port(eth_dev);
>> +
>> +	return 0;
>> +}
> 
> <...>
> 
> 

* Re: [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup
  2017-06-28 15:45   ` Ferruh Yigit
@ 2017-06-29 14:55     ` Shreyansh Jain
  2017-06-29 15:41       ` Ferruh Yigit
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-29 14:55 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Wednesday 28 June 2017 09:15 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>> ---
>>   doc/guides/nics/features/dpaa.ini |   1 +
>>   drivers/net/dpaa/Makefile         |   4 +
>>   drivers/net/dpaa/dpaa_ethdev.c    | 279 ++++++++++++++++++++++++++++++++-
>>   drivers/net/dpaa/dpaa_ethdev.h    |   6 +
>>   drivers/net/dpaa/dpaa_rxtx.c      | 313 ++++++++++++++++++++++++++++++++++++++
>>   drivers/net/dpaa/dpaa_rxtx.h      |  61 ++++++++
> 
> This patch adds initial Rx/Tx support, as well as the Rx/Tx queue setup
> mentioned in the patch subject.
> 
> I would be for splitting the patch, but even if it is not split, I would
> suggest updating the patch subject and commit log to cover the patch content.

Ok. I will fix this (splitting if possible, else updating the commit message).

> 
> <...>
>> --- a/doc/guides/nics/features/dpaa.ini
>> +++ b/doc/guides/nics/features/dpaa.ini
>> @@ -4,5 +4,6 @@
>>   ; Refer to default.ini for the full list of available PMD features.
>>   ;
>>   [Features]
>> +Queue start/stop     = Y
> 
> This requires the following dev_ops to be implemented:
> rx_queue_start, rx_queue_stop, tx_queue_start, tx_queue_stop

Ok. My understanding here was wrong - I incorrectly matched this
to queue setup/teardown. I will remove this feature listing (and
a couple more, as per your review comments on other patches).

> 
>>   ARMv8                = Y
>>   Usage doc            = Y
> 
> <...>
> 
>> +
>> +	/* Initialize Rx FQ's */
>> +	if (getenv("DPAA_NUM_RX_QUEUES"))
> 
> I think this was discussed before: should a PMD get config options from
> an environment variable? Although this works, I am for a more explicit
> method, like dev_args.

Well, I do remember that discussion and still continued with it because
1) I am not done with the dev_args changes and 2) I think this is less
intrusive, as it is specific to DPAA without needing to expand it
towards dev_args (and impacting the application's argument list).
Do you think this is a no-go? If so, I will fix this.

> 
> <...>
>> +
>> +	dpaa_intf->rx_queues = rte_zmalloc(NULL,
>> +		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
> 
> A NULL check, perhaps?
> 
> And if multi-process support is desired, this should be done only for the
> primary process.

I will fix both the above.
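
Roughly, the fix will be (sketch of both points together):

	/* allocate only in the primary process, and check the result */
	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
		dpaa_intf->rx_queues = rte_zmalloc(NULL,
			sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
		if (!dpaa_intf->rx_queues)
			return -ENOMEM;
	}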

> 
> <...>
>> +	/* Allocate memory for storing MAC addresses */
>> +	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
>> +		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
>> +	if (eth_dev->data->mac_addrs == NULL) {
>> +		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to "
>> +						"store MAC addresses",
>> +				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
> 
> Anything to clean up before exit?

A bad miss on my side: *_queues should be released - I will fix this.
(I will run a static analyzer and fix any other similar issues before
the next version.)

> 
>> +		return -ENOMEM;
>> +	}
> 
> <...>
>> +uint16_t dpaa_eth_queue_rx(void *q,
>> +			   struct rte_mbuf **bufs,
>> +			   uint16_t nb_bufs)
>> +{
>> +	struct qman_fq *fq = q;
>> +	struct qm_dqrr_entry *dq;
>> +	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
>> +	int ret;
>> +
>> +	ret = rte_dpaa_portal_init((void *)0);
>> +	if (ret) {
>> +		PMD_DRV_LOG(ERR, "Failure in affining portal");
>> +		return 0;
>> +	}
> 
> This is the rx_pkt_burst function, right? Is it OK to call
> rte_dpaa_portal_init() in the Rx data path?

Yes, actually, a portal needs to be initialized, if not already, for all
I/O operations to succeed.
rte_dpaa_portal_init() filters out repeated calls, so the actual
initialization is performed only once.

rte_dpaa_portal_init
  `-> _dpaa_portal_init() if not already initialized

> 
> <...>
>> +	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
>> +	if (!buf)
>> +		goto out;
> 
> goto is not required here.

:) yes, I will remove this stupid miss.

> 
>> +
>> +out:
>> +	return (void *)buf;
>> +}
>> +
> 
> <...>
>> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
>> +			      struct rte_mbuf **bufs __rte_unused,
>> +		uint16_t nb_bufs __rte_unused)
>> +{
>> +	PMD_TX_LOG(DEBUG, "Drop all packets");
> 
> Should the mbufs be freed here?
> 
>> +
>> +	/* Drop all incoming packets. No need to free packets here
>> +	 * because the rte_eth f/w frees up the packets through tx_buffer
>> +	 * callback in case this functions returns count less than nb_bufs
>> +	 */

Ah, actually I was banking on the logic that, in case a driver doesn't
release the memory, the API caller (on getting less than nb_bufs back)
would do so. This is the case for a stopped interface.

But I agree, this is a dirty fix. I will change it.
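
The change would be roughly (sketch):

	uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
				      struct rte_mbuf **bufs,
				      uint16_t nb_bufs)
	{
		uint16_t i;

		PMD_TX_LOG(DEBUG, "Drop all packets");

		/* free the mbufs here instead of relying on the caller */
		for (i = 0; i < nb_bufs; i++)
			rte_pktmbuf_free(bufs[i]);

		return nb_bufs;
	}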

>> +	return 0;
>> +}
> 
> <...>
> 
> 

* Re: [PATCH 25/38] net/dpaa: add support for MTU update
  2017-06-28 15:45   ` Ferruh Yigit
@ 2017-06-29 14:56     ` Shreyansh Jain
  2017-06-29 15:43       ` Ferruh Yigit
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-29 14:56 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Wednesday 28 June 2017 09:15 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>>   static int
>> +dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>> +{
>> +	struct dpaa_if *dpaa_intf = dev->data->dev_private;
>> +
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	if (mtu < ETHER_MIN_MTU)
>> +		return -EINVAL;
>> +
>> +	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
>> +
>> +	if (mtu > ETHER_MAX_LEN)
>> +		return -1;
> 
> Is it OK to have this check after fman_if_set_maxfrm()?

Indeed - bad code. I will fix this.
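
Sketch of the reordering (jumbo frame handling aside):

	static int
	dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
	{
		struct dpaa_if *dpaa_intf = dev->data->dev_private;

		PMD_INIT_FUNC_TRACE();

		/* validate the range before programming the hardware */
		if (mtu < ETHER_MIN_MTU || mtu > ETHER_MAX_LEN)
			return -EINVAL;

		fman_if_set_maxfrm(dpaa_intf->fif, mtu);

		dev->data->dev_conf.rxmode.jumbo_frame = 0;
		dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;

		return 0;
	}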

> 
>> +	dev->data->dev_conf.rxmode.jumbo_frame = 0;
>> +
>> +	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
> 
> I think this only makes sense when jumbo_frame is 1, although it doesn't hurt
> to set it...

Yes, that is true. But I thought it would be better for debugging
purposes. Does it hurt to keep it?

> 
>> +	return 0;
>> +}
> <...>
> 
> 

* Re: [PATCH 27/38] net/dpaa: add support for link status update
  2017-06-28 15:46   ` Ferruh Yigit
@ 2017-06-29 14:57     ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-29 14:57 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Wednesday 28 June 2017 09:16 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>> --- a/doc/guides/nics/features/dpaa.ini
>> +++ b/doc/guides/nics/features/dpaa.ini
>> @@ -4,6 +4,8 @@
>>   ; Refer to default.ini for the full list of available PMD features.
>>   ;
>>   [Features]
>> +Speed capabilities   = P
> 
> The "Speed capabilities" feature is not "link->link_speed"; this feature means
> providing "dev_info->speed_capa" (in dpaa_eth_dev_info()).

Ok. I will fix this and queue start/stop.

> 
>> +Link status          = Y
>>   Queue start/stop     = Y
>>   Jumbo frame          = Y
>>   MTU update           = Y
> <...>
> 
> 

* Re: [PATCH 30/38] net/dpaa: add support for multicast toggle
  2017-06-28 15:47   ` Ferruh Yigit
@ 2017-06-29 14:58     ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-29 14:58 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: hemant.agrawal

On Wednesday 28 June 2017 09:17 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>> diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
>> index a6984a4..80dd3ca 100644
>> --- a/doc/guides/nics/features/dpaa.ini
>> +++ b/doc/guides/nics/features/dpaa.ini
>> @@ -10,5 +10,7 @@ Queue start/stop     = Y
>>   Jumbo frame          = Y
>>   MTU update           = Y
>>   Promiscuous mode     = Y
>> +Allmulticast mode    = Y
>> +Unicast MAC filter   = Y
> 
> "Unicast MAC filter" means implementing "mac_addr_set, mac_addr_add,
> mac_addr_remove" dev_ops

I will fix this as well - I am not sure why I set it this way (too little
or too much coffee). Sorry.

> 
>>   ARMv8                = Y
>>   Usage doc            = Y
> 
> <...>
> 

* Re: [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup
  2017-06-29 14:55     ` Shreyansh Jain
@ 2017-06-29 15:41       ` Ferruh Yigit
  2017-06-30 11:48         ` Shreyansh Jain
  2017-07-04 14:50         ` Shreyansh Jain
  0 siblings, 2 replies; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-29 15:41 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, hemant.agrawal

On 6/29/2017 3:55 PM, Shreyansh Jain wrote:
> On Wednesday 28 June 2017 09:15 PM, Ferruh Yigit wrote:
>> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>> ---

<...>

>>
>>> +
>>> +	/* Initialize Rx FQ's */
>>> +	if (getenv("DPAA_NUM_RX_QUEUES"))
>>
>> I think this was discussed before: should a PMD get config options from
>> an environment variable? Although this works, I am for a more explicit
>> method, like dev_args.
> 
> Well, I do remember that discussion and still continued with it because
> 1) I am not done with the dev_args changes and 2) I think this is less
> intrusive, as it is specific to DPAA without needing to expand it
> towards dev_args (and impacting the application's argument list).
> Do you think this is a no-go? If so, I will fix this.

Providing an argument looks clearer to me; it is more visible, and, for
example, if multiple processes are run, environment variables can be
confusing.

But this is not a no-go; I would like to hear other comments. I also
noticed that the mlx and ark drivers are using this approach.

However this is implemented, it should be clearly documented;
right now this is a hidden config.

<...>
>>> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
>>> +			      struct rte_mbuf **bufs __rte_unused,
>>> +		uint16_t nb_bufs __rte_unused)
>>> +{
>>> +	PMD_TX_LOG(DEBUG, "Drop all packets");
>>
>> Should the mbufs be freed here?
>>
>>> +
>>> +	/* Drop all incoming packets. No need to free packets here
>>> +	 * because the rte_eth f/w frees up the packets through tx_buffer
>>> +	 * callback in case this functions returns count less than nb_bufs
>>> +	 */
> 
> Ah, actually I was banking on the logic that, in case a driver doesn't
> release the memory, the API caller (on getting less than nb_bufs back)
> would do so. This is the case for a stopped interface.
> 
> But I agree, this is a dirty fix. I will change it.

Indeed, I missed your logic here. This looks like a valid option too;
it's your call.

> 
>>> +	return 0;
>>> +}
>>
>> <...>
>>
>>
> 

* Re: [PATCH 25/38] net/dpaa: add support for MTU update
  2017-06-29 14:56     ` Shreyansh Jain
@ 2017-06-29 15:43       ` Ferruh Yigit
  0 siblings, 0 replies; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-29 15:43 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, hemant.agrawal

On 6/29/2017 3:56 PM, Shreyansh Jain wrote:
> On Wednesday 28 June 2017 09:15 PM, Ferruh Yigit wrote:
>> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

>>> +	dev->data->dev_conf.rxmode.jumbo_frame = 0;
>>> +
>>> +	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
>>
>> I think this only makes sense when jumbo_frame is 1, although it doesn't hurt
>> to set it...
> 
> Yes, that is true. But I thought it would be better for debugging
> purposes. Does it hurt to keep it?

It is OK, especially since you are updating the code in the jumbo frame
patch to use this variable.

> 
>>
>>> +	return 0;
>>> +}
>> <...>
>>
>>
> 

* Re: [PATCH 33/38] net/dpaa: add support for flow control
  2017-06-28 15:47   ` Ferruh Yigit
@ 2017-06-30  9:37     ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-30  9:37 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Wednesday 28 June 2017 09:17 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>>  static int
>> +dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
>> +		   struct rte_eth_fc_conf *fc_conf)
>> +{
>> +	struct dpaa_if *dpaa_intf = dev->data->dev_private;
>> +	struct rte_eth_fc_conf *net_fc;
>> +
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	if (!(dpaa_intf->fc_conf)) {
>> +		dpaa_intf->fc_conf = rte_zmalloc(NULL,
>> +			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
> 
> Should this be freed in rte_dpaa_remove()?

Will fix this in v2.
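
Roughly, in the remove path (sketch):

	/* in rte_dpaa_remove(), before releasing the port */
	struct dpaa_if *dpaa_intf = eth_dev->data->dev_private;

	rte_free(dpaa_intf->fc_conf);
	dpaa_intf->fc_conf = NULL;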

> 
>> +		if (!dpaa_intf->fc_conf) {
>> +			PMD_DRV_LOG(ERR, "unable to save flow control info");
>> +			return -ENOMEM;
>> +		}
>> +	}
>> +	net_fc = dpaa_intf->fc_conf;
>> +
> <...>
> 
> 

* Re: [PATCH 34/38] net/dpaa: add support for hashed RSS
  2017-06-28 15:48   ` Ferruh Yigit
@ 2017-06-30 10:31     ` Shreyansh Jain
  2017-06-30 11:39       ` Ferruh Yigit
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-30 10:31 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: hemant.agrawal

On Wednesday 28 June 2017 09:18 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> Just to confirm:
> 
> Is no HW configuration required to enable RSS?
> Does the HW update mbuf->rss automatically, without driver involvement?
> 
> <...>

On the DPAA platform, the configuration of queues and RSS on them is
done using an external tool, just before executing the DPDK application;
this is part of application startup.
Though, I did notice that I have not documented this explicitly in
dpaa.rst. I will correct the documentation.

> 
>>  Promiscuous mode     = Y
>>  Allmulticast mode    = Y
>>  Unicast MAC filter   = Y
>> +RSS hash             = Y
>>  Flow control         = Y
>>  Basic stats          = Y
>>  ARMv8                = Y
> 
> <...>
> 

* Re: [PATCH 34/38] net/dpaa: add support for hashed RSS
  2017-06-30 10:31     ` Shreyansh Jain
@ 2017-06-30 11:39       ` Ferruh Yigit
  2017-07-04 14:49         ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-06-30 11:39 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 6/30/2017 11:31 AM, Shreyansh Jain wrote:
> On Wednesday 28 June 2017 09:18 PM, Ferruh Yigit wrote:
>> On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>
>> Just to confirm:
>>
>> Is no HW configuration required to enable RSS?
>> Does the HW update mbuf->rss automatically, without driver involvement?
>>
>> <...>
> 
> On the DPAA platform, the configuration of queues and RSS on them is
> done using an external tool, just before executing the DPDK application;
> this is part of application startup.
> Though, I did notice that I have not documented this explicitly in
> dpaa.rst. I will correct the documentation.

For the second question: I have seen that the next patch updates
mbuf->rss; perhaps "RSS hash" support can be claimed with that patch.

> 
>>
>>>  Promiscuous mode     = Y
>>>  Allmulticast mode    = Y
>>>  Unicast MAC filter   = Y
>>> +RSS hash             = Y
>>>  Flow control         = Y
>>>  Basic stats          = Y
>>>  ARMv8                = Y
>>
>> <...>
>>
> 

* Re: [PATCH 35/38] net/dpaa: add support for packet type parsing
  2017-06-28 15:50   ` Ferruh Yigit
@ 2017-06-30 11:40     ` Shreyansh Jain
  2017-07-04 12:11       ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-30 11:40 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Wednesday 28 June 2017 09:20 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>> +static const uint32_t *
>> +dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
>> +{
>> +	static const uint32_t ptypes[] = {
>> +		/*todo -= add more types */
>> +		RTE_PTYPE_L2_ETHER,
>> +		RTE_PTYPE_L3_IPV4,
>> +		RTE_PTYPE_L3_IPV4_EXT,
>> +		RTE_PTYPE_L3_IPV6,
>> +		RTE_PTYPE_L3_IPV6_EXT,
>> +		RTE_PTYPE_L4_TCP,
>> +		RTE_PTYPE_L4_UDP,
>> +		RTE_PTYPE_L4_SCTP
>> +	};
>> +
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	if (dev->rx_pkt_burst == dpaa_eth_queue_rx)
> 
> Isn't this the only Rx function that exists? Is this check required?

Yes, for now we only have a single Rx function. But, just like other
drivers, we can add more in the near future based on some variation of
Rx. In fact, this is more to be in sync with how other drivers implement
this function (albeit we only have a single Rx variant).

> 
>> +		return ptypes;
>> +	return NULL;
>> +}
>>  
>>  static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
>>  {
>> @@ -159,6 +180,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
>>  	dev_info->max_vfs = 0;
>>  	dev_info->max_vmdq_pools = ETH_16_POOLS;
>>  	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
>> +	dev_info->rx_offload_capa =
>> +		(DEV_RX_OFFLOAD_IPV4_CKSUM |
>> +		DEV_RX_OFFLOAD_UDP_CKSUM  |
>> +		DEV_RX_OFFLOAD_TCP_CKSUM);
> 
> I guess this patch also enables L3/L4 Rx checksum offload; can you please
> update the commit log?

Ok. I will do that.

> 
> And should ol_flags be set with one of PKT_RX_IP_CKSUM_BAD,
> PKT_RX_IP_CKSUM_GOOD or PKT_RX_IP_CKSUM_NONE? Also with the L4 versions of these?

Yes. I will fix that.
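
Roughly along these lines (the parse-result bit names are illustrative,
not the actual annotation layout):

	/* translate the HW parse result into mbuf checksum flags */
	if (prs & DPAA_PARSE_L3_CSUM_ERR)	/* illustrative bit name */
		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
	else
		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;

	if (prs & DPAA_PARSE_L4_CSUM_ERR)	/* illustrative bit name */
		m->ol_flags |= PKT_RX_L4_CKSUM_BAD;
	else
		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;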

> 
> <...>
> 
>> +
>> +	m->tx_offload = annot->parse.ip_off[0];
>> +	m->tx_offload |= (annot->parse.l4_off - annot->parse.ip_off[0])
>> +					<< DPAA_PKT_L3_LEN_SHIFT;
> 
> This is a received mbuf, right? Is it required to set the tx_offload field?
> 
>> +
>> +	/* Set the hash values */
>> +	m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
>> +	m->ol_flags = PKT_RX_RSS_HASH;
>> +
>> +	/* Check if Vlan is present */
>> +	if (prs & DPAA_PARSE_VLAN_MASK)
>> +		m->ol_flags |= PKT_RX_VLAN_PKT;
> 
> I guess PKT_RX_VLAN_STRIPPED is the preferred flag now.
> 
> <...>
> 

I will re-check the above (and fix).

* Re: [PATCH 38/38] net/dpaa: add packet dump for debugging
  2017-06-28 15:51   ` Ferruh Yigit
@ 2017-06-30 11:47     ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-30 11:47 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Wednesday 28 June 2017 09:21 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> Is there any driver documentation? I haven't seen any in the net/dpaa patches.
> 
> <...>
>> +CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY=n
>> +CONFIG_RTE_LIBRTE_DPAA_CHECKING=n
> 
> This config option is not used at all, can be removed.

This is being used by the QMAN and BMAN drivers in the bus.

> 
> <...>
>> +#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
>> +	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
>> +		DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
> 
> Out of curiosity, what exactly is done here? Is this a special queue, and what
> does it do? It would be useful to document this more in the commit log.
> 
>> +	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
>> +	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
>> +		DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);

Besides the normal Rx and Tx queues for valid Rx/Tx, there are error
queues which receive packets that have errors (like checksum errors).
This set enables those queues for debugging purposes. In the normal
(non-debug) case, there is not much utility in checking these, especially
in our polling model.

>> +	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
>> +#endif
>> +
> <...>
> 
> 

* Re: [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup
  2017-06-29 15:41       ` Ferruh Yigit
@ 2017-06-30 11:48         ` Shreyansh Jain
  2017-07-04 14:50         ` Shreyansh Jain
  1 sibling, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-06-30 11:48 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Thursday 29 June 2017 09:11 PM, Ferruh Yigit wrote:
> On 6/29/2017 3:55 PM, Shreyansh Jain wrote:
>> On Wednesday 28 June 2017 09:15 PM, Ferruh Yigit wrote:
>>> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>> ---
> 
> <...>
> 
>>>
>>>> +
>>>> +	/* Initialize Rx FQ's */
>>>> +	if (getenv("DPAA_NUM_RX_QUEUES"))
>>>
>>> I think this was discussed before: should a PMD get config options from
>>> an environment variable? Although this works, I am for a more explicit
>>> method, like dev_args.
>>
>> Well, I do remember that discussion and still continued with it because
>> 1) I am not done with the dev_args changes and 2) I think this is less
>> intrusive, as it is specific to DPAA without needing to expand it
>> towards dev_args (and impacting the application's argument list).
>> Do you think this is a no-go? If so, I will fix this.
> 
> Providing an argument looks clearer to me; it is more visible, and, for
> example, if multiple processes are run, environment variables can be
> confusing.
> 
> But this is not a no-go; I would like to hear other comments. I also
> noticed that the mlx and ark drivers are using this approach.
> 
> However this is implemented, it should be clearly documented;
> right now this is a hidden config.

Agreed, I will fix the documentation and add information about this.
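
For example, along these lines in dpaa.rst (wording illustrative):

  ``DPAA_NUM_RX_QUEUES``
    Number of Rx frame queues to configure per DPAA interface, read
    from the environment at device probe time.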

> 
> <...>
>>>> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
>>>> +			      struct rte_mbuf **bufs __rte_unused,
>>>> +		uint16_t nb_bufs __rte_unused)
>>>> +{
>>>> +	PMD_TX_LOG(DEBUG, "Drop all packets");
>>>
>>> Should the mbufs be freed here?
>>>
>>>> +
>>>> +	/* Drop all incoming packets. No need to free packets here
>>>> +	 * because the rte_eth f/w frees up the packets through tx_buffer
>>>> +	 * callback in case this functions returns count less than nb_bufs
>>>> +	 */
>>
>> Ah, actually I was banking on the logic that, in case a driver doesn't
>> release the memory, the API caller (on getting less than nb_bufs back)
>> would do so. This is the case for a stopped interface.
>>
>> But I agree, this is a dirty fix. I will change it.
> 
> Indeed, I missed your logic here. This looks like a valid option too;
> it's your call.
> 
>>
>>>> +	return 0;
>>>> +}
>>>
>>> <...>
>>>
>>>
>>
> 
> 

* Re: [PATCH 22/38] net/dpaa: add NXP DPAA PMD driver skeleton
  2017-06-29 14:29     ` Shreyansh Jain
@ 2017-07-02  6:47       ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-02  6:47 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Thursday 29 June 2017 07:59 PM, Shreyansh Jain wrote:
> Hello Ferruh,
> 
[...]
> 
>>
>>> +
>>> +    if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>>> +        rte_free(eth_dev->data->dev_private);
>>> +
>>
>> no pmd uninit() ?

Just to clarify, were you asking about uninit of the driver?

There is no such call from within the normal flow - that is probably
left to the application to perform (rte_dpaa_driver_unregister).
I just cross-checked: this is not being done even for the PCI functions
(rte_pci_unregister) - probably because we never see a case (for the
sample applications) where a driver is unregistered.

Am I missing something here?

> 
> I will fix this. There is an internal commit for this that we made very
> recently (a miss in the previous series).

There was some device/queue cleanup missing, which I have added and will
push in the next version. I mixed that up with the 'PMD uninit' comment
when I was replying to your previous email.

> 
>>

-
Shreyansh

* Re: [PATCH 35/38] net/dpaa: add support for packet type parsing
  2017-06-30 11:40     ` Shreyansh Jain
@ 2017-07-04 12:11       ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 12:11 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Friday 30 June 2017 05:10 PM, Shreyansh Jain wrote:
> On Wednesday 28 June 2017 09:20 PM, Ferruh Yigit wrote:
>> On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>
[...]
>>
>>> +
>>> +	m->tx_offload = annot->parse.ip_off[0];
>>> +	m->tx_offload |= (annot->parse.l4_off - annot->parse.ip_off[0])
>>> +					<< DPAA_PKT_L3_LEN_SHIFT;
>>
>> This is a received mbuf, right? Is it required to set the tx_offload field?
>>
[...]

I had not replied to this in my previous response.
DPAA hardware fills parsed information into the annotation (annot) area.
When a packet is received, the annotation contains information such as
where the IP offset field is. Once we read the packet, the 'annot' area
is overwritten in subsequent cycles.

The received packet may be forwarded, in which case this information
(preserved in m->tx_offload) is useful for optimized performance.

Indeed, forwarding is only one of the cases, but at least some
optimization is achieved this way.
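
Viewed through the mbuf length bit-fields, the two stores above are
simply (sketch):

	/* equivalent view of m->tx_offload */
	m->l2_len = annot->parse.ip_off[0];	/* bytes up to the IP header */
	m->l3_len = annot->parse.l4_off
			- annot->parse.ip_off[0];	/* IP header length */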

-
Shreyansh

* [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD
  2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                   ` (37 preceding siblings ...)
  2017-06-16  5:41 ` [PATCH 38/38] net/dpaa: add packet dump for debugging Shreyansh Jain
@ 2017-07-04 14:43 ` Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
                     ` (41 more replies)
  38 siblings, 42 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Series based on net-next/master (d8ee8d7)

Changes Log:
============

v2:
 - Fixed various review comments from Ferruh; broadly:
  -) Logging has been changed to use rte_log_register
  -) Logs across Bus, Mempool and PMD updated
  -) Fixed incorrect feature claims in dpaa.ini
 - Removed the 24/40/48 bit swapping macros from EAL.
   These are now defined in the dpaa bus (compat.h)
 - Added missing memory cleanup operations
 - Updated the documentation with some missing information

Introduction
============

RFC was posted here -> [R3]
V1 was posted here -> [R4]

This patch series adds NXP's QorIQ-Layerscape DPAA Architecture based
bus driver, mempool driver and PMD. This version of driver supports NXP
LS1043A/LS1023A, LS1046A/LS1026A family of network SoCs. [R1]

DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
components designed for high-speed network packet processing. This
architecture provides the infrastructure to support simplified sharing of
networking interfaces and accelerators by multiple CPU cores, and the
accelerators themselves.

This patchset introduces the following:
1. DPAA Bus (drivers/bus/dpaa)
 The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
 or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
 The patches introduce necessary layers to expose the DPAA hardware
 blocks for interfacing with RTE framework.

2. DPAA Mempool (drivers/mempool/dpaa)
 BMan, or Buffer Manager, block of DPAA features a hardware offloaded
 mempool. These patches add support for a driver to manage the BMan
 block. This driver allows for mempool creation, deletion, buffer
 acquire and release, as per the RTE APIs.

3. DPAA PMD (drivers/net/dpaa)
 The Poll Mode Driver for DPAA NIC Interfaces.

Patch Layout
============

01: Add DPAA SoC build configuration
02~16: Add DPAA Bus support and features, incrementally
17: Add Documentation
18~21: Add DPAA Mempool support
22~40: Add PMD and its various features, incrementally

Dependency
==========

This patch is dependent on:

[D1] Patch: http://dpdk.org/dev/patchwork/patch/24478/
     This patch adds macro for Bus logging to RTE logging framework

References
==========

[R1] http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-layerscape-arm-processors:QORIQ-ARM
[R2] http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf
[R3] RFC: http://dpdk.org/ml/archives/dev/2017-May/066675.html
[R4] v1: http://dpdk.org/ml/archives/dev/2017-June/068020.html

Hemant Agrawal (2):
  bus/dpaa: add compatibility and helper macros
  net/dpaa: support for firmware version get API

Shreyansh Jain (38):
  config: add NXP DPAA SoC build configuration
  bus/dpaa: introduce NXP DPAA Bus driver skeleton
  bus/dpaa: add OF parser for device scanning
  bus/dpaa: introducing FMan configurations
  bus/dpaa: add FMan hardware operations
  bus/dpaa: enable DPAA IOCTL portal driver
  bus/dpaa: add layer for interrupt emulation using pthread
  bus/dpaa: add routines for managing a RB tree
  bus/dpaa: add QMAN interface driver
  bus/dpaa: add QMan driver core routines
  bus/dpaa: add BMAN driver core
  bus/dpaa: add support for FMAN frame queue lookup
  bus/dpaa: add BMan hardware interfaces
  bus/dpaa: add fman flow control threshold setting
  bus/dpaa: integrate DPAA Bus with hardware blocks
  doc: add NXP DPAA PMD documentation
  bus/dpaa: add DPAA mempool logging macros
  mempool/dpaa: add support for NXP DPAA Mempool
  drivers: enable compilation of DPAA Mempool driver
  maintainers: claim ownership of DPAA Mempool driver
  bus/dpaa: add DPAA PMD logging macros
  net/dpaa: add NXP DPAA PMD driver skeleton
  config: enable NXP DPAA PMD compilation
  net/dpaa: add support for Tx and Rx queue setup
  net/dpaa: add support for MTU update
  net/dpaa: add support for jumbo frames
  net/dpaa: add support for link status update
  net/dpaa: add support for device info and speed capability
  net/dpaa: add support for promiscuous toggle
  net/dpaa: add support for multicast toggle
  net/dpaa: add support for MAC address update
  net/dpaa: add support for basic stats
  net/dpaa: add support for flow control
  net/dpaa: add support for hashed RSS
  net/dpaa: add support for packet type parsing
  net/dpaa: add support for checksum offload
  net/dpaa: add support for Scattered Rx
  net/dpaa: add packet dump for debugging

 MAINTAINERS                                       |    9 +
 config/common_base                                |    5 +
 config/defconfig_arm64-dpaa-linuxapp-gcc          |   64 +
 doc/guides/nics/dpaa.rst                          |  367 +++
 doc/guides/nics/features/dpaa.ini                 |   23 +
 doc/guides/nics/index.rst                         |    1 +
 drivers/bus/Makefile                              |    3 +
 drivers/bus/dpaa/Makefile                         |   84 +
 drivers/bus/dpaa/base/fman/fman.c                 |  540 +++++
 drivers/bus/dpaa/base/fman/fman_hw.c              |  634 ++++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c         |  205 ++
 drivers/bus/dpaa/base/fman/of.c                   |  576 +++++
 drivers/bus/dpaa/base/qbman/bman.c                |  394 ++++
 drivers/bus/dpaa/base/qbman/bman.h                |  550 +++++
 drivers/bus/dpaa/base/qbman/bman_driver.c         |  323 +++
 drivers/bus/dpaa/base/qbman/bman_priv.h           |  125 ++
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c          |  104 +
 drivers/bus/dpaa/base/qbman/dpaa_sys.c            |  136 ++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h            |   65 +
 drivers/bus/dpaa/base/qbman/process.c             |  331 +++
 drivers/bus/dpaa/base/qbman/qman.c                | 2497 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h                |  888 ++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c         |  288 +++
 drivers/bus/dpaa/base/qbman/qman_priv.h           |  314 +++
 drivers/bus/dpaa/dpaa_bus.c                       |  436 ++++
 drivers/bus/dpaa/include/compat.h                 |  383 ++++
 drivers/bus/dpaa/include/dpaa_bits.h              |   65 +
 drivers/bus/dpaa/include/dpaa_list.h              |  101 +
 drivers/bus/dpaa/include/dpaa_rbtree.h            |  143 ++
 drivers/bus/dpaa/include/fman.h                   |  474 ++++
 drivers/bus/dpaa/include/fsl_bman.h               |  375 ++++
 drivers/bus/dpaa/include/fsl_fman.h               |  189 ++
 drivers/bus/dpaa/include/fsl_fman_crc64.h         |  263 +++
 drivers/bus/dpaa/include/fsl_qman.h               | 2038 +++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h                |  107 +
 drivers/bus/dpaa/include/netcfg.h                 |   96 +
 drivers/bus/dpaa/include/of.h                     |  191 ++
 drivers/bus/dpaa/include/process.h                |  107 +
 drivers/bus/dpaa/rte_bus_dpaa_version.map         |   46 +
 drivers/bus/dpaa/rte_dpaa_bus.h                   |  169 ++
 drivers/bus/dpaa/rte_dpaa_logs.h                  |  129 ++
 drivers/mempool/Makefile                          |    2 +
 drivers/mempool/dpaa/Makefile                     |   65 +
 drivers/mempool/dpaa/dpaa_mempool.c               |  264 +++
 drivers/mempool/dpaa/dpaa_mempool.h               |   78 +
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |    6 +
 drivers/net/Makefile                              |    2 +
 drivers/net/dpaa/Makefile                         |   68 +
 drivers/net/dpaa/dpaa_ethdev.c                    | 1017 +++++++++
 drivers/net/dpaa/dpaa_ethdev.h                    |  144 ++
 drivers/net/dpaa/dpaa_rxtx.c                      |  704 ++++++
 drivers/net/dpaa/dpaa_rxtx.h                      |  263 +++
 drivers/net/dpaa/rte_pmd_dpaa_version.map         |    4 +
 mk/machine/dpaa/rte.vars.mk                       |   61 +
 mk/rte.app.mk                                     |    6 +
 55 files changed, 16522 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h
 create mode 100644 drivers/bus/dpaa/include/of.h
 create mode 100644 drivers/bus/dpaa/include/process.h
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map
 create mode 100644 mk/machine/dpaa/rte.vars.mk

-- 
2.7.4

* [PATCH v2 01/40] config: add NXP DPAA SoC build configuration
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
@ 2017-07-04 14:43   ` Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
                     ` (40 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch adds the skeleton build configuration for the DPAA platform.

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc | 40 +++++++++++++++++++++
 mk/machine/dpaa/rte.vars.mk              | 61 ++++++++++++++++++++++++++++++++
 2 files changed, 101 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 mk/machine/dpaa/rte.vars.mk

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
new file mode 100644
index 0000000..89e32ef
--- /dev/null
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -0,0 +1,40 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 Freescale Semiconductor, Inc.
+#   Copyright 2017 NXP.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+#include "defconfig_arm64-armv8a-linuxapp-gcc"
+
+# NXP (Freescale) - SoC architecture with FMAN, QMAN & BMAN support
+CONFIG_RTE_MACHINE="dpaa"
+CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
+CONFIG_RTE_LIBRTE_VHOST_NUMA=n
+CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
diff --git a/mk/machine/dpaa/rte.vars.mk b/mk/machine/dpaa/rte.vars.mk
new file mode 100644
index 0000000..b24cedf
--- /dev/null
+++ b/mk/machine/dpaa/rte.vars.mk
@@ -0,0 +1,61 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright 2017 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+#   - can define ARCH variable (overridden by cmdline value)
+#   - can define CROSS variable (overridden by cmdline value)
+#   - define MACHINE_CFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+#   - can define CPU_CFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+MACHINE_CFLAGS += -march=armv8-a+crc
+
+ifdef CONFIG_RTE_ARCH_ARM_TUNE
+MACHINE_CFLAGS += -mtune=$(CONFIG_RTE_ARCH_ARM_TUNE:"%"=%)
+endif
-- 
2.7.4

* [PATCH v2 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
@ 2017-07-04 14:43   ` Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
                     ` (39 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 MAINTAINERS                               |   5 +
 config/common_base                        |   3 +
 config/defconfig_arm64-dpaa-linuxapp-gcc  |   6 +
 drivers/bus/Makefile                      |   3 +
 drivers/bus/dpaa/Makefile                 |  63 ++++++++++
 drivers/bus/dpaa/dpaa_bus.c               | 187 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |   7 ++
 drivers/bus/dpaa/rte_dpaa_bus.h           | 163 ++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_dpaa_logs.h          |  64 ++++++++++
 9 files changed, 501 insertions(+)
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 00351ff..620d57a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -388,6 +388,11 @@ F: drivers/net/nfp/
 F: doc/guides/nics/nfp.rst
 F: doc/guides/nics/features/nfp.ini
 
+NXP dpaa
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+M: Shreyansh Jain <shreyansh.jain@nxp.com>
+F: drivers/bus/dpaa/
+
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/config/common_base b/config/common_base
index 660588a..8ea4967 100644
--- a/config/common_base
+++ b/config/common_base
@@ -302,6 +302,9 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_MBOX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=n
+
 #
 # Compile NXP DPAA2 FSL-MC Bus
 #
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 89e32ef..cf603f3 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -38,3 +38,9 @@ CONFIG_RTE_MACHINE="dpaa"
 CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
 CONFIG_RTE_LIBRTE_VHOST_NUMA=n
 CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
+
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=y
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 1e5b281..2dad392 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -33,6 +33,9 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mbuf librte_mempool librte_ring librte_ether
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += dpaa
+DEPDIRS-dpaa = $(core-libs)
+
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 DEPDIRS-fslmc = $(core-libs)
 
diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
new file mode 100644
index 0000000..f44f3c4
--- /dev/null
+++ b/drivers/bus/dpaa/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 NXP.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_BUS_DPAA=$(RTE_SDK)/drivers/bus/dpaa
+
+#
+# library name
+#
+LIB = librte_bus_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+
+# versioning export map
+EXPORT_MAP := rte_bus_dpaa_version.map
+
+LIBABIVER := 1
+
+# all source files are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	dpaa_bus.c
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
new file mode 100644
index 0000000..c530c83
--- /dev/null
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -0,0 +1,187 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_bus.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+int dpaa_logtype_bus;
+
+struct rte_dpaa_bus rte_dpaa_bus;
+
+static inline void
+dpaa_add_to_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static inline void
+dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static int
+rte_dpaa_bus_scan(void)
+{
+	BUS_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/* register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
+{
+	RTE_VERIFY(driver);
+
+	BUS_INIT_FUNC_TRACE();
+
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = &rte_dpaa_bus;
+}
+
+/* un-register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
+{
+	struct rte_dpaa_bus *dpaa_bus;
+
+	BUS_INIT_FUNC_TRACE();
+
+	dpaa_bus = driver->dpaa_bus;
+
+	TAILQ_REMOVE(&dpaa_bus->driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = NULL;
+}
+
+static int
+rte_dpaa_device_match(struct rte_dpaa_driver *drv,
+		      struct rte_dpaa_device *dev)
+{
+	int ret = -1;
+
+	BUS_INIT_FUNC_TRACE();
+
+	if (!drv || !dev) {
+		DPAA_BUS_DEBUG("Invalid drv or dev received.");
+		return ret;
+	}
+
+	if (drv->drv_type == dev->id.device_type) {
+		DPAA_BUS_INFO("Device: %s matches for driver: %s",
+			    dev->name, drv->driver.name);
+		ret = 0; /* Found a match */
+	}
+
+	return ret;
+}
+
+static int
+rte_dpaa_bus_probe(void)
+{
+	int ret = -1;
+	struct rte_dpaa_device *dev;
+	struct rte_dpaa_driver *drv;
+
+	BUS_INIT_FUNC_TRACE();
+
+	/* For each registered driver, and device, call the driver->probe */
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		TAILQ_FOREACH(drv, &rte_dpaa_bus.driver_list, next) {
+			ret = rte_dpaa_device_match(drv, dev);
+			if (ret)
+				continue;
+
+			if (!drv->probe)
+				continue;
+
+			ret = drv->probe(drv, dev);
+			if (ret)
+				DPAA_BUS_ERR("Unable to probe device");
+			break;
+		}
+	}
+	return 0;
+}
+
+struct rte_dpaa_bus rte_dpaa_bus = {
+	.bus = {
+		.scan = rte_dpaa_bus_scan,
+		.probe = rte_dpaa_bus_probe,
+	},
+	.device_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.device_list),
+	.driver_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.driver_list),
+	.device_count = 0,
+};
+
+RTE_REGISTER_BUS(FSL_DPAA_BUS_NAME, rte_dpaa_bus.bus);
+
+RTE_INIT(dpaa_init_log);
+static void
+dpaa_init_log(void)
+{
+	dpaa_logtype_bus = rte_log_register("bus.dpaa");
+	if (dpaa_logtype_bus >= 0)
+		rte_log_set_level(dpaa_logtype_bus, RTE_LOG_NOTICE);
+}
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
new file mode 100644
index 0000000..8c1ea65
--- /dev/null
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -0,0 +1,7 @@
+DPDK_17.08 {
+	global:
+
+	rte_dpaa_driver_register;
+	rte_dpaa_driver_unregister;
+
+};
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
new file mode 100644
index 0000000..d1de6d3
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -0,0 +1,163 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __RTE_DPAA_BUS_H__
+#define __RTE_DPAA_BUS_H__
+
+#include <rte_bus.h>
+#include <rte_mempool.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
+
+#define DEV_TO_DPAA_DEVICE(ptr)	\
+		container_of(ptr, struct rte_dpaa_device, device)
+
+struct rte_dpaa_device;
+struct rte_dpaa_driver;
+
+/* DPAA Device and Driver lists for DPAA bus */
+TAILQ_HEAD(rte_dpaa_device_list, rte_dpaa_device);
+TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
+
+enum rte_dpaa_type {
+	FSL_DPAA_ETH = 1,
+	FSL_DPAA_CRYPTO,
+};
+
+struct rte_dpaa_bus {
+	struct rte_bus bus;
+	struct rte_dpaa_device_list device_list;
+	struct rte_dpaa_driver_list driver_list;
+	int device_count;
+};
+
+struct dpaa_device_id {
+	uint8_t fman_id; /**< Fman interface ID, for ETH type device */
+	uint8_t mac_id; /**< Fman MAC interface ID, for ETH type device */
+	uint16_t dev_id; /**< Device Identifier from DPDK */
+	enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
+};
+
+struct rte_dpaa_device {
+	TAILQ_ENTRY(rte_dpaa_device) next;
+	struct rte_device device;
+	struct rte_eth_dev *eth_dev;
+	struct rte_cryptodev *crypto_dev;
+	struct rte_dpaa_driver *driver;
+	struct dpaa_device_id id;
+	char name[RTE_ETH_NAME_MAX_LEN];
+};
+
+typedef int (*rte_dpaa_probe_t)(struct rte_dpaa_driver *dpaa_drv,
+				struct rte_dpaa_device *dpaa_dev);
+typedef int (*rte_dpaa_remove_t)(struct rte_dpaa_device *dpaa_dev);
+
+struct rte_dpaa_driver {
+	TAILQ_ENTRY(rte_dpaa_driver) next;
+	struct rte_driver driver;
+	struct rte_dpaa_bus *dpaa_bus;
+	enum rte_dpaa_type drv_type;
+	rte_dpaa_probe_t probe;
+	rte_dpaa_remove_t remove;
+};
+
+struct dpaa_portal {
+	uint32_t bman_idx; /**< BMAN Portal ID*/
+	uint32_t qman_idx; /**< QMAN Portal ID*/
+	uint64_t tid;/**< Parent Thread id for this portal */
+};
+
+/* TODO - this is costly; a faster conversion routine is needed */
+static inline void *rte_dpaa_mem_ptov(phys_addr_t paddr)
+{
+	const struct rte_memseg *memseg = rte_eal_get_physmem_layout();
+	int i;
+
+	for (i = 0; i < RTE_MAX_MEMSEG && memseg[i].addr != NULL; i++) {
+		if (paddr >= memseg[i].phys_addr && paddr <
+			memseg[i].phys_addr + memseg[i].len)
+			return (uint8_t *)(memseg[i].addr) +
+			       (paddr - memseg[i].phys_addr);
+	}
+
+	return NULL;
+}
+
+/**
+ * Register a DPAA driver.
+ *
+ * @param driver
+ *   A pointer to a rte_dpaa_driver structure describing the driver
+ *   to be registered.
+ */
+void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
+
+/**
+ * Unregister a DPAA driver.
+ *
+ * @param driver
+ *	A pointer to a rte_dpaa_driver structure describing the driver
+ *	to be unregistered.
+ */
+void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
+
+/**
+ * Initialize a DPAA portal
+ *
+ * @param arg
+ *	Per thread ID
+ *
+ * @return
+ *	0 in case of success, error otherwise
+ */
+int rte_dpaa_portal_init(void *arg);
+
+/**
+ * Cleanup a DPAA Portal
+ */
+void dpaa_portal_finish(void *arg);
+
+/** Helper for DPAA device registration from driver (eth, crypto) instance */
+#define RTE_PMD_REGISTER_DPAA(nm, dpaa_drv) \
+RTE_INIT(dpaainitfn_ ##nm); \
+static void dpaainitfn_ ##nm(void) \
+{\
+	(dpaa_drv).driver.name = RTE_STR(nm);\
+	rte_dpaa_driver_register(&dpaa_drv); \
+} \
+RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
+
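+/*
+ * Example usage (illustrative only; the driver object and its callbacks
+ * below are hypothetical, not part of this patch):
+ *
+ *	static struct rte_dpaa_driver rte_pmd_example = {
+ *		.drv_type = FSL_DPAA_ETH,
+ *		.probe = example_probe,
+ *		.remove = example_remove,
+ *	};
+ *	RTE_PMD_REGISTER_DPAA(net_dpaa_example, rte_pmd_example);
+ */
+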
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_DPAA_BUS_H__ */
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
new file mode 100644
index 0000000..54eda23
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -0,0 +1,64 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA_LOGS_H_
+#define _DPAA_LOGS_H_
+
+#include <rte_log.h>
+
+extern int dpaa_logtype_bus;
+
+#define DPAA_BUS_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define BUS_INIT_FUNC_TRACE() DPAA_BUS_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_BUS
+#define DPAA_BUS_WARN(cond, fmt, args...) \
+	do {\
+		if (cond) \
+			DPAA_BUS_LOG(DEBUG, "WARN: " fmt, ##args); \
+	} while (0)
+#else
+#define DPAA_BUS_WARN(cond, fmt, args...) do { } while (0)
+#endif
+
+#define DPAA_BUS_INFO(fmt, args...) \
+	DPAA_BUS_LOG(INFO, fmt, ## args)
+#define DPAA_BUS_DEBUG(fmt, args...) \
+	DPAA_BUS_LOG(DEBUG, fmt, ## args)
+#define DPAA_BUS_ERR(fmt, args...) \
+	DPAA_BUS_LOG(ERR, fmt, ## args)
+
+#endif /* _DPAA_LOGS_H_ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 03/40] bus/dpaa: add compatibility and helper macros
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
@ 2017-07-04 14:43   ` Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
                     ` (38 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Linked list, bit operations and compatibility macros.
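
As a rough usage sketch of the list helpers (illustrative only; the
'worker' structure is hypothetical):

	struct worker {
		struct list_head node;
		int cpu;
	};

	COMPAT_LIST_HEAD(workers);
	struct worker *w = malloc(sizeof(*w));
	struct worker *i, *tmp;

	if (w) {
		w->cpu = 0;
		list_add_tail(&w->node, &workers);
	}
	/* drain the list, freeing each entry safely during iteration */
	list_for_each_entry_safe(i, tmp, &workers, node) {
		list_del(&i->node);
		free(i);
	}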

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/dpaa/include/compat.h    | 383 +++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/dpaa_bits.h |  65 ++++++
 drivers/bus/dpaa/include/dpaa_list.h | 101 +++++++++
 3 files changed, 549 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h

diff --git a/drivers/bus/dpaa/include/compat.h b/drivers/bus/dpaa/include/compat.h
new file mode 100644
index 0000000..3d46232
--- /dev/null
+++ b/drivers/bus/dpaa/include/compat.h
@@ -0,0 +1,383 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __COMPAT_H
+#define __COMPAT_H
+
+#include <sched.h>
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+#include <stdint.h>
+#include <stdlib.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <pthread.h>
+#include <linux/types.h>
+#include <stdbool.h>
+#include <ctype.h>
+#include <malloc.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <limits.h>
+#include <assert.h>
+#include <dirent.h>
+#include <inttypes.h>
+#include <error.h>
+#include <rte_byteorder.h>
+#include <rte_atomic.h>
+#include <rte_spinlock.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+
+/* The following definitions are primarily to allow the single-source driver
+ * interfaces to be included by arbitrary program code; i.e., for interfaces that
+ * are also available in kernel-space, these definitions provide compatibility
+ * with certain attributes and types used in those interfaces.
+ */
+
+/* Required compiler attributes */
+#define __maybe_unused	__rte_unused
+#define __always_unused	__rte_unused
+#define __packed	__rte_packed
+#define noinline	__attribute__((noinline))
+
+#define L1_CACHE_BYTES 64
+#define ____cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
+#define __stringify_1(x) #x
+#define __stringify(x)	__stringify_1(x)
+
+#ifdef ARRAY_SIZE
+#undef ARRAY_SIZE
+#endif
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
+
+/* Debugging */
+#define prflush(fmt, args...) \
+	do { \
+		printf(fmt, ##args); \
+		fflush(stdout); \
+	} while (0)
+
+#define pr_crit(fmt, args...)	 prflush("CRIT:" fmt, ##args)
+#define pr_err(fmt, args...)	 prflush("ERR:" fmt, ##args)
+#define pr_warn(fmt, args...)	 prflush("WARN:" fmt, ##args)
+#define pr_info(fmt, args...)	 prflush(fmt, ##args)
+
+#define ASSERT(x) do {\
+	if (!(x)) \
+		rte_panic("DPAA: assertion failed: %s\n", __stringify(x)); \
+} while (0)
+#define BUG_ON(x) ASSERT(!(x))
+
+/* Required types */
+typedef uint8_t		u8;
+typedef uint16_t	u16;
+typedef uint32_t	u32;
+typedef uint64_t	u64;
+typedef uint64_t	dma_addr_t;
+typedef cpu_set_t	cpumask_t;
+typedef uint32_t	phandle;
+typedef uint32_t	gfp_t;
+typedef uint32_t	irqreturn_t;
+
+#define IRQ_HANDLED	0
+#define request_irq	qbman_request_irq
+#define free_irq	qbman_free_irq
+
+#define __iomem
+#define GFP_KERNEL	0
+#define __raw_readb(p)	(*(const volatile unsigned char *)(p))
+#define __raw_readl(p)	(*(const volatile unsigned int *)(p))
+#define __raw_writel(v, p) {*(volatile unsigned int *)(p) = (v); }
+
+/* SMP stuff */
+#define DEFINE_PER_CPU(t, x)	__thread t per_cpu__##x
+#define get_cpu_var(x)		per_cpu__##x
+/* to be used as an upper-limit only */
+#define NR_CPUS			64
+
+/* Waitqueue stuff */
+typedef struct { }		wait_queue_head_t;
+#define DECLARE_WAIT_QUEUE_HEAD(x) int dummy_##x __always_unused
+#define wake_up(x)		do { } while (0)
+
+/* I/O operations */
+static inline u32 in_be32(volatile void *__p)
+{
+	volatile u32 *p = __p;
+	return rte_be_to_cpu_32(*p);
+}
+
+static inline void out_be32(volatile void *__p, u32 val)
+{
+	volatile u32 *p = __p;
+	*p = rte_cpu_to_be_32(val);
+}
+
+#define dcbt_ro(p) __builtin_prefetch(p, 0)
+#define dcbt_rw(p) __builtin_prefetch(p, 1)
+
+#define dcbz(p) { asm volatile("dc zva, %0" : : "r" (p) : "memory"); }
+#define dcbz_64(p) dcbz(p)
+#define hwsync() rte_rmb()
+#define lwsync() rte_wmb()
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+#define dccivac(p) { asm volatile("dc civac, %0" : : "r"(p) : "memory"); }
+
+#define dcbit_ro(p) \
+	do { \
+		dccivac(p);						\
+		asm volatile("prfm pldl1keep, [%0, #64]" : : "r" (p));	\
+	} while (0)
+
+#define barrier() { asm volatile ("" : : : "memory"); }
+#define cpu_relax barrier
+
+static inline uint64_t mfatb(void)
+{
+	uint64_t ret, ret_new, timeout = 200;
+
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret));
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	while (ret != ret_new && timeout--) {
+		ret = ret_new;
+		asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	}
+	BUG_ON(!timeout && (ret != ret_new));
+	return ret * 64;
+}
+
+/* Spin for a few cycles without bothering the bus */
+static inline void cpu_spin(int cycles)
+{
+	uint64_t now = mfatb();
+
+	while (mfatb() < (now + cycles))
+		;
+}
+
+/* Qman/Bman API inlines and macros; */
+#ifdef lower_32_bits
+#undef lower_32_bits
+#endif
+#define lower_32_bits(x) ((u32)(x))
+
+#ifdef upper_32_bits
+#undef upper_32_bits
+#endif
+#define upper_32_bits(x) ((u32)(((x) >> 16) >> 16))
+
+/*
+ * Swap bytes of a 48-bit value.
+ */
+static inline uint64_t
+__bswap_48(uint64_t x)
+{
+	return  ((x & 0x0000000000ffULL) << 40) |
+		((x & 0x00000000ff00ULL) << 24) |
+		((x & 0x000000ff0000ULL) <<  8) |
+		((x & 0x0000ff000000ULL) >>  8) |
+		((x & 0x00ff00000000ULL) >> 24) |
+		((x & 0xff0000000000ULL) >> 40);
+}
+
+/*
+ * Swap bytes of a 40-bit value.
+ */
+static inline uint64_t
+__bswap_40(uint64_t x)
+{
+	return  ((x & 0x00000000ffULL) << 32) |
+		((x & 0x000000ff00ULL) << 16) |
+		((x & 0x0000ff0000ULL)) |
+		((x & 0x00ff000000ULL) >> 16) |
+		((x & 0xff00000000ULL) >> 32);
+}
+
+/*
+ * Swap bytes of a 24-bit value.
+ */
+static inline uint32_t
+__bswap_24(uint32_t x)
+{
+	return  ((x & 0x0000ffULL) << 16) |
+		((x & 0x00ff00ULL)) |
+		((x & 0xff0000ULL) >> 16);
+}
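+
+/*
+ * Example (illustrative): __bswap_24(0x123456) == 0x563412. These helpers
+ * back the 24/40/48-bit endianness macros defined below.
+ */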
+
+#define be64_to_cpu(x) rte_be_to_cpu_64(x)
+#define be32_to_cpu(x) rte_be_to_cpu_32(x)
+#define be16_to_cpu(x) rte_be_to_cpu_16(x)
+
+#define cpu_to_be64(x) rte_cpu_to_be_64(x)
+#define cpu_to_be32(x) rte_cpu_to_be_32(x)
+#define cpu_to_be16(x) rte_cpu_to_be_16(x)
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+
+#define cpu_to_be48(x) __bswap_48(x)
+#define be48_to_cpu(x) __bswap_48(x)
+
+#define cpu_to_be40(x) __bswap_40(x)
+#define be40_to_cpu(x) __bswap_40(x)
+
+#define cpu_to_be24(x) __bswap_24(x)
+#define be24_to_cpu(x) __bswap_24(x)
+
+#else /* RTE_BIG_ENDIAN */
+
+#define cpu_to_be48(x) (x)
+#define be48_to_cpu(x) (x)
+
+#define cpu_to_be40(x) (x)
+#define be40_to_cpu(x) (x)
+
+#define cpu_to_be24(x) (x)
+#define be24_to_cpu(x) (x)
+
+#endif /* RTE_BIG_ENDIAN */
+
+/* When copying aligned words or shorts, try to avoid memcpy() */
+/* memcpy() stuff - when you know alignments in advance */
+#define CONFIG_TRY_BETTER_MEMCPY
+
+#ifdef CONFIG_TRY_BETTER_MEMCPY
+static inline void copy_words(void *dest, const void *src, size_t sz)
+{
+	u32 *__dest = dest;
+	const u32 *__src = src;
+	size_t __sz = sz >> 2;
+
+	BUG_ON((unsigned long)dest & 0x3);
+	BUG_ON((unsigned long)src & 0x3);
+	BUG_ON(sz & 0x3);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_shorts(void *dest, const void *src, size_t sz)
+{
+	u16 *__dest = dest;
+	const u16 *__src = src;
+	size_t __sz = sz >> 1;
+
+	BUG_ON((unsigned long)dest & 0x1);
+	BUG_ON((unsigned long)src & 0x1);
+	BUG_ON(sz & 0x1);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_bytes(void *dest, const void *src, size_t sz)
+{
+	u8 *__dest = dest;
+	const u8 *__src = src;
+
+	while (sz--)
+		*(__dest++) = *(__src++);
+}
+#else
+#define copy_words memcpy
+#define copy_shorts memcpy
+#define copy_bytes memcpy
+#endif
+
+/* Allocator stuff */
+#define kmalloc(sz, t)	malloc(sz)
+#define vmalloc(sz)	malloc(sz)
+#define kfree(p)	{ if (p) free(p); }
+static inline void *kzalloc(size_t sz, gfp_t __foo __rte_unused)
+{
+	void *ptr = malloc(sz);
+
+	if (ptr)
+		memset(ptr, 0, sz);
+	return ptr;
+}
+
+static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
+{
+	void *p;
+
+	if (posix_memalign(&p, 4096, 4096))
+		return 0;
+	memset(p, 0, 4096);
+	return (unsigned long)p;
+}
+
+/* Spinlock stuff */
+#define spinlock_t		rte_spinlock_t
+#define __SPIN_LOCK_UNLOCKED(x)	RTE_SPINLOCK_INITIALIZER
+#define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+#define spin_lock_init(x)	rte_spinlock_init(x)
+#define spin_lock_destroy(x)
+#define spin_lock(x)		rte_spinlock_lock(x)
+#define spin_unlock(x)		rte_spinlock_unlock(x)
+#define spin_lock_irq(x)	spin_lock(x)
+#define spin_unlock_irq(x)	spin_unlock(x)
+#define spin_lock_irqsave(x, f) spin_lock_irq(x)
+#define spin_unlock_irqrestore(x, f) spin_unlock_irq(x)
+
+#define atomic_t                rte_atomic32_t
+#define atomic_read(v)          rte_atomic32_read(v)
+#define atomic_set(v, i)        rte_atomic32_set(v, i)
+
+#define atomic_inc(v)           rte_atomic32_add(v, 1)
+#define atomic_dec(v)           rte_atomic32_sub(v, 1)
+
+#define atomic_inc_and_test(v)  rte_atomic32_inc_and_test(v)
+#define atomic_dec_and_test(v)  rte_atomic32_dec_and_test(v)
+
+#define atomic_inc_return(v)    rte_atomic32_add_return(v, 1)
+#define atomic_dec_return(v)    rte_atomic32_sub_return(v, 1)
+#define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
+
+#include <dpaa_list.h>
+#include <dpaa_bits.h>
+
+#endif /* __COMPAT_H */
diff --git a/drivers/bus/dpaa/include/dpaa_bits.h b/drivers/bus/dpaa/include/dpaa_bits.h
new file mode 100644
index 0000000..e29019b
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_bits.h
@@ -0,0 +1,65 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_BITS_H
+#define __DPAA_BITS_H
+
+/* Bitfield stuff. */
+#define BITS_PER_ULONG	(sizeof(unsigned long) << 3)
+#define SHIFT_PER_ULONG	(((1 << 5) == BITS_PER_ULONG) ? 5 : 6)
+#define BITS_MASK(idx)	(1UL << ((idx) & (BITS_PER_ULONG - 1)))
+#define BITS_IDX(idx)	((idx) >> SHIFT_PER_ULONG)
+
+static inline void dpaa_set_bits(unsigned long mask,
+				 volatile unsigned long *p)
+{
+	*p |= mask;
+}
+
+static inline void dpaa_set_bit(int idx, volatile unsigned long *bits)
+{
+	dpaa_set_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+static inline void dpaa_clear_bits(unsigned long mask,
+				   volatile unsigned long *p)
+{
+	*p &= ~mask;
+}
+
+static inline void dpaa_clear_bit(int idx,
+				  volatile unsigned long *bits)
+{
+	dpaa_clear_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+#endif /* __DPAA_BITS_H */
diff --git a/drivers/bus/dpaa/include/dpaa_list.h b/drivers/bus/dpaa/include/dpaa_list.h
new file mode 100644
index 0000000..7ad0f14
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_list.h
@@ -0,0 +1,101 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_LIST_H
+#define __DPAA_LIST_H
+
+/****************/
+/* Linked-lists */
+/****************/
+
+struct list_head {
+	struct list_head *prev;
+	struct list_head *next;
+};
+
+#define COMPAT_LIST_HEAD(n) \
+struct list_head n = { \
+	.prev = &n, \
+	.next = &n \
+}
+
+#define INIT_LIST_HEAD(p) \
+do { \
+	struct list_head *__p298 = (p); \
+	__p298->next = __p298; \
+	__p298->prev = __p298->next; \
+} while (0)
+#define list_entry(node, type, member) \
+	(type *)((void *)node - offsetof(type, member))
+#define list_empty(p) \
+({ \
+	const struct list_head *__p298 = (p); \
+	((__p298->next == __p298) && (__p298->prev == __p298)); \
+})
+#define list_add(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->next = __l298->next; \
+	__p298->prev = __l298; \
+	__l298->next->prev = __p298; \
+	__l298->next = __p298; \
+} while (0)
+#define list_add_tail(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->prev = __l298->prev; \
+	__p298->next = __l298; \
+	__l298->prev->next = __p298; \
+	__l298->prev = __p298; \
+} while (0)
+#define list_for_each(i, l)				\
+	for (i = (l)->next; i != (l); i = i->next)
+#define list_for_each_safe(i, j, l)			\
+	for (i = (l)->next, j = i->next; i != (l);	\
+	     i = j, j = i->next)
+#define list_for_each_entry(i, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name); &i->name != (l); \
+		i = list_entry(i->name.next, typeof(*i), name))
+#define list_for_each_entry_safe(i, j, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name), \
+		j = list_entry(i->name.next, typeof(*j), name); \
+		&i->name != (l); \
+		i = j, j = list_entry(j->name.next, typeof(*j), name))
+#define list_del(i) \
+do { \
+	(i)->next->prev = (i)->prev; \
+	(i)->prev->next = (i)->next; \
+} while (0)
+
+#endif /* __DPAA_LIST_H */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 04/40] bus/dpaa: add OF parser for device scanning
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (2 preceding siblings ...)
  2017-07-04 14:43   ` [PATCH v2 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
@ 2017-07-04 14:43   ` Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
                     ` (37 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This layer is used by the bus driver's scan function. Devices are
parsed using the OF parser and added to the DPAA device list.
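
For illustration, a caller would drive this layer roughly as follows
(sketch only; "fsl,fman" is an example compatible string and 'fman_id'
is a hypothetical variable):

	const struct device_node *np;
	uint64_t fman_id = 0;

	if (of_init())
		return -ENODEV;

	for_each_compatible_node(np, NULL, "fsl,fman") {
		size_t len;
		const __be32 *idx = of_get_property(np, "cell-index", &len);

		if (idx && len == 4)
			fman_id = of_read_number(idx, 1);
	}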

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile       |   7 +
 drivers/bus/dpaa/base/fman/of.c | 576 ++++++++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/of.h   | 191 +++++++++++++
 3 files changed, 774 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/include/of.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index f44f3c4..cc685d1 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -45,7 +45,12 @@ CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 endif
 
+CFLAGS += -Wno-pointer-arith
+CFLAGS += -Wno-cast-qual
+CFLAGS += -D_GNU_SOURCE
+
 CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_BUS_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -59,5 +64,7 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/of.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/fman/of.c b/drivers/bus/dpaa/base/fman/of.c
new file mode 100644
index 0000000..6cc3987
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/of.c
@@ -0,0 +1,576 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <of.h>
+#include <rte_dpaa_logs.h>
+
+static int alive;
+static struct dt_dir root_dir;
+static const char *base_dir;
+static COMPAT_LIST_HEAD(linear);
+
+static int
+of_open_dir(const char *relative_path, struct dirent ***d)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = scandir(full_path, d, 0, versionsort);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open directory %s",
+			     full_path);
+	return ret;
+}
+
+static void
+of_close_dir(struct dirent **d, int num)
+{
+	while (num--)
+		free(d[num]);
+	free(d);
+}
+
+static int
+of_open_file(const char *relative_path)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = open(full_path, O_RDONLY);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open file %s",
+			     full_path);
+	return ret;
+}
+
+static void
+process_file(struct dirent *dent, struct dt_dir *parent)
+{
+	int fd;
+	struct dt_file *f = malloc(sizeof(*f));
+
+	if (!f) {
+		DPAA_BUS_LOG(DEBUG, "Unable to allocate memory for file node");
+		return;
+	}
+	f->node.is_file = 1;
+	snprintf(f->node.node.name, NAME_MAX, "%s", dent->d_name);
+	snprintf(f->node.node.full_name, PATH_MAX, "%s/%s",
+		 parent->node.node.full_name, dent->d_name);
+	f->parent = parent;
+	fd = of_open_file(f->node.node.full_name);
+	if (fd < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to open file node");
+		free(f);
+		return;
+	}
+	f->len = read(fd, f->buf, OF_FILE_BUF_MAX);
+	close(fd);
+	if (f->len < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to read file node");
+		free(f);
+		return;
+	}
+	list_add_tail(&f->node.list, &parent->files);
+}
+
+static const struct dt_dir *
+node2dir(const struct device_node *n)
+{
+	struct dt_node *dn = container_of((struct device_node *)n,
+					  struct dt_node, node);
+	const struct dt_dir *d = container_of(dn, struct dt_dir, node);
+
+	assert(!dn->is_file);
+	return d;
+}
+
+/* process_dir() calls iterate_dir(), but the latter will also call the former
+ * when recursing into sub-directories, so a predeclaration is needed.
+ */
+static int process_dir(const char *relative_path, struct dt_dir *dt);
+
+static int
+iterate_dir(struct dirent **d, int num, struct dt_dir *dt)
+{
+	int loop;
+	/* Iterate the directory contents */
+	for (loop = 0; loop < num; loop++) {
+		struct dt_dir *subdir;
+		int ret;
+		/* Ignore dot files of all types (especially "..") */
+		if (d[loop]->d_name[0] == '.')
+			continue;
+		switch (d[loop]->d_type) {
+		case DT_REG:
+			process_file(d[loop], dt);
+			break;
+		case DT_DIR:
+			subdir = malloc(sizeof(*subdir));
+			if (!subdir) {
+				perror("malloc");
+				return -ENOMEM;
+			}
+			snprintf(subdir->node.node.name, NAME_MAX, "%s",
+				 d[loop]->d_name);
+			snprintf(subdir->node.node.full_name, PATH_MAX,
+				 "%s/%s", dt->node.node.full_name,
+				 d[loop]->d_name);
+			subdir->parent = dt;
+			ret = process_dir(subdir->node.node.full_name, subdir);
+			if (ret)
+				return ret;
+			list_add_tail(&subdir->node.list, &dt->subdirs);
+			break;
+		default:
+			DPAA_BUS_LOG(DEBUG, "Ignoring invalid dt entry %s/%s",
+				     dt->node.node.full_name, d[loop]->d_name);
+		}
+	}
+	return 0;
+}
+
+static int
+process_dir(const char *relative_path, struct dt_dir *dt)
+{
+	struct dirent **d;
+	int ret, num;
+
+	dt->node.is_file = 0;
+	INIT_LIST_HEAD(&dt->subdirs);
+	INIT_LIST_HEAD(&dt->files);
+	ret = of_open_dir(relative_path, &d);
+	if (ret < 0)
+		return ret;
+	num = ret;
+	ret = iterate_dir(d, num, dt);
+	of_close_dir(d, num);
+	return (ret < 0) ? ret : 0;
+}
+
+static void
+linear_dir(struct dt_dir *d)
+{
+	struct dt_file *f;
+	struct dt_dir *dd;
+
+	d->compatible = NULL;
+	d->status = NULL;
+	d->lphandle = NULL;
+	d->a_cells = NULL;
+	d->s_cells = NULL;
+	d->reg = NULL;
+	list_for_each_entry(f, &d->files, node.list) {
+		if (!strcmp(f->node.node.name, "compatible")) {
+			if (d->compatible)
+				DPAA_BUS_LOG(DEBUG, "Duplicate compatible in"
+					     " %s", d->node.node.full_name);
+			d->compatible = f;
+		} else if (!strcmp(f->node.node.name, "status")) {
+			if (d->status)
+				DPAA_BUS_LOG(DEBUG, "Duplicate status in %s",
+					     d->node.node.full_name);
+			d->status = f;
+		} else if (!strcmp(f->node.node.name, "linux,phandle")) {
+			if (d->lphandle)
+				DPAA_BUS_LOG(DEBUG, "Duplicate lphandle in %s",
+					     d->node.node.full_name);
+			d->lphandle = f;
+		} else if (!strcmp(f->node.node.name, "#address-cells")) {
+			if (d->a_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate a_cells in %s",
+					     d->node.node.full_name);
+			d->a_cells = f;
+		} else if (!strcmp(f->node.node.name, "#size-cells")) {
+			if (d->s_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate s_cells in %s",
+					     d->node.node.full_name);
+			d->s_cells = f;
+		} else if (!strcmp(f->node.node.name, "reg")) {
+			if (d->reg)
+				DPAA_BUS_LOG(DEBUG, "Duplicate reg in %s",
+					     d->node.node.full_name);
+			d->reg = f;
+		}
+	}
+
+	list_for_each_entry(dd, &d->subdirs, node.list) {
+		list_add_tail(&dd->linear, &linear);
+		linear_dir(dd);
+	}
+}
+
+int
+of_init_path(const char *dt_path)
+{
+	int ret;
+
+	base_dir = dt_path;
+
+	/* This needs to be singleton initialization */
+	DPAA_BUS_WARN(alive, "Double-init of device-tree driver!");
+
+	/* Prepare root node (the remaining fields are set in process_dir()) */
+	root_dir.node.node.name[0] = '\0';
+	root_dir.node.node.full_name[0] = '\0';
+	INIT_LIST_HEAD(&root_dir.node.list);
+	root_dir.parent = NULL;
+
+	/* Kick things off... */
+	ret = process_dir("", &root_dir);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to parse device tree");
+		return ret;
+	}
+
+	/* Now make a flat, linear list of directories */
+	linear_dir(&root_dir);
+	alive = 1;
+	return 0;
+}
+
+static void
+destroy_dir(struct dt_dir *d)
+{
+	struct dt_file *f, *tmpf;
+	struct dt_dir *dd, *tmpd;
+
+	list_for_each_entry_safe(f, tmpf, &d->files, node.list) {
+		list_del(&f->node.list);
+		free(f);
+	}
+	list_for_each_entry_safe(dd, tmpd, &d->subdirs, node.list) {
+		destroy_dir(dd);
+		list_del(&dd->node.list);
+		free(dd);
+	}
+}
+
+void
+of_finish(void)
+{
+	DPAA_BUS_WARN(!alive, "Double-finish of device-tree driver!");
+
+	destroy_dir(&root_dir);
+	INIT_LIST_HEAD(&linear);
+	alive = 0;
+}
+
+static const struct dt_dir *
+next_linear(const struct dt_dir *f)
+{
+	if (f->linear.next == &linear)
+		return NULL;
+	return list_entry(f->linear.next, struct dt_dir, linear);
+}
+
+static int
+check_compatible(const struct dt_file *f, const char *compatible)
+{
+	const char *c = (char *)f->buf;
+	unsigned int len, remains = f->len;
+
+	while (remains) {
+		len = strlen(c);
+		if (!strcmp(c, compatible))
+			return 1;
+
+		if (remains < len + 1)
+			break;
+
+		c += (len + 1);
+		remains -= (len + 1);
+	}
+	return 0;
+}
+
+const struct device_node *
+of_find_compatible_node(const struct device_node *from,
+			const char *type __always_unused,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+
+	if (list_empty(&linear))
+		return NULL;
+	if (!from)
+		d = list_entry(linear.next, struct dt_dir, linear);
+	else
+		d = node2dir(from);
+	for (d = next_linear(d); d && (!d->compatible ||
+				       !check_compatible(d->compatible,
+				       compatible));
+			d = next_linear(d))
+		;
+	if (d)
+		return &d->node.node;
+	return NULL;
+}
+
+const void *
+of_get_property(const struct device_node *from, const char *name,
+		size_t *lenp)
+{
+	const struct dt_dir *d;
+	const struct dt_file *f;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+
+	d = node2dir(from);
+	list_for_each_entry(f, &d->files, node.list)
+		if (!strcmp(f->node.node.name, name)) {
+			if (lenp)
+				*lenp = f->len;
+			return f->buf;
+		}
+	return NULL;
+}
+
+bool
+of_device_is_available(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+	d = node2dir(dev_node);
+	if (!d->status)
+		return true;
+	if (!strcmp((char *)d->status->buf, "okay"))
+		return true;
+	if (!strcmp((char *)d->status->buf, "ok"))
+		return true;
+	return false;
+}
+
+const struct device_node *
+of_find_node_by_phandle(phandle ph)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+	list_for_each_entry(d, &linear, linear)
+		if (d->lphandle && (d->lphandle->len == 4) &&
+		    !memcmp(d->lphandle->buf, &ph, 4))
+			return &d->node.node;
+	return NULL;
+}
+
+const struct device_node *
+of_get_parent(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	d = node2dir(dev_node);
+	if (!d->parent)
+		return NULL;
+	return &d->parent->node.node;
+}
+
+const struct device_node *
+of_get_next_child(const struct device_node *dev_node,
+		  const struct device_node *prev)
+{
+	const struct dt_dir *p, *c;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	p = node2dir(dev_node);
+	if (prev) {
+		c = node2dir(prev);
+		DPAA_BUS_WARN((c->parent != p), "Parent/child mismatch");
+		if (c->parent != p)
+			return NULL;
+		if (c->node.list.next == &p->subdirs)
+			/* prev was the last child */
+			return NULL;
+		c = list_entry(c->node.list.next, struct dt_dir, node.list);
+		return &c->node.node;
+	}
+	/* Return first child */
+	if (list_empty(&p->subdirs))
+		return NULL;
+	c = list_entry(p->subdirs.next, struct dt_dir, node.list);
+	return &c->node.node;
+}
+
+uint32_t
+of_n_addr_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised");
+	if (!dev_node)
+		return OF_DEFAULT_NA;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->a_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->a_cells->buf[0];
+			assert(d->a_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NA;
+}
+
+uint32_t
+of_n_size_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		return OF_DEFAULT_NA;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->s_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->s_cells->buf[0];
+			assert(d->s_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NS;
+}
+
+const uint32_t *
+of_get_address(const struct device_node *dev_node, size_t idx,
+	       uint64_t *size, uint32_t *flags __rte_unused)
+{
+	const struct dt_dir *d;
+	const unsigned char *buf;
+	uint32_t na = of_n_addr_cells(dev_node);
+	uint32_t ns = of_n_size_cells(dev_node);
+
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (!d->reg)
+		return NULL;
+	assert(d->reg->len % ((na + ns) * 4) == 0);
+	assert(d->reg->len / ((na + ns) * 4) > (unsigned int) idx);
+	buf = (const unsigned char *)&d->reg->buf[0];
+	buf += (na + ns) * idx * 4;
+	if (size)
+		for (*size = 0; ns > 0; ns--, na++)
+			*size = (*size << 32) +
+				(((uint32_t)buf[4 * na] << 24) |
+				((uint32_t)buf[4 * na + 1] << 16) |
+				((uint32_t)buf[4 * na + 2] << 8) |
+				(uint32_t)buf[4 * na + 3]);
+	return (const uint32_t *)buf;
+}
+
+uint64_t
+of_translate_address(const struct device_node *dev_node,
+		     const uint32_t *addr)
+{
+	uint64_t phys_addr, tmp_addr;
+	const struct device_node *parent;
+	const uint32_t *ranges;
+	size_t rlen;
+	uint32_t na, pna;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+	assert(dev_node != NULL);
+
+	na = of_n_addr_cells(dev_node);
+	phys_addr = of_read_number(addr, na);
+
+	dev_node = of_get_parent(dev_node);
+	if (!dev_node)
+		return 0;
+	else if (node2dir(dev_node) == &root_dir)
+		return phys_addr;
+
+	do {
+		pna = of_n_addr_cells(dev_node);
+		parent = of_get_parent(dev_node);
+		if (!parent)
+			return 0;
+
+		ranges = of_get_property(dev_node, "ranges", &rlen);
+		/* "ranges" property is missing. Translation breaks */
+		if (!ranges)
+			return 0;
+		/* "ranges" property is empty. Do 1:1 translation */
+		else if (rlen == 0) {
+			/* Move up one level before continuing; a bare
+			 * 'continue' would re-evaluate the same node forever.
+			 */
+			na = pna;
+			dev_node = parent;
+			continue;
+		} else
+			tmp_addr = of_read_number(ranges + na, pna);
+
+		na = pna;
+		dev_node = parent;
+		phys_addr += tmp_addr;
+	} while (node2dir(parent) != &root_dir);
+
+	return phys_addr;
+}
+
+bool
+of_device_is_compatible(const struct device_node *dev_node,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_WARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (d->compatible && check_compatible(d->compatible, compatible))
+		return true;
+	return false;
+}
diff --git a/drivers/bus/dpaa/include/of.h b/drivers/bus/dpaa/include/of.h
new file mode 100644
index 0000000..e422a53
--- /dev/null
+++ b/drivers/bus/dpaa/include/of.h
@@ -0,0 +1,191 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor, Inc.
+ * Copyright 2017 NXP.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __OF_H
+#define	__OF_H
+
+#include <compat.h>
+
+#ifndef OF_INIT_DEFAULT_PATH
+#define OF_INIT_DEFAULT_PATH "/proc/device-tree"
+#endif
+
+#define OF_DEFAULT_NA 1
+#define OF_DEFAULT_NS 1
+
+#define OF_FILE_BUF_MAX 256
+
+/**
+ * Layout of Device Tree:
+ * dt_dir
+ *  |- dt_dir
+ *  |   |- dt_dir
+ *  |   |  |- dt_dir
+ *  |   |  |  |- dt_file
+ *  |   |  |  ``- dt_file
+ *  |   |  ``- dt_file
+ *  |   ``- dt_file
+ *  ``- dt_file
+ *
+ *  +------------------+
+ *  |dt_dir            |
+ *  |+----------------+|
+ *  ||dt_node         ||
+ *  ||+--------------+||
+ *  |||device_node   |||
+ *  ||+--------------+||
+ *  || list_dt_nodes  ||
+ *  |+----------------+|
+ *  | list of subdir   |
+ *  | list of files    |
+ *  +------------------+
+ */
+
+/**
+ * Description of a device node in the device tree.
+ */
+struct device_node {
+	char name[NAME_MAX];
+	char full_name[PATH_MAX];
+};
+
+/**
+ * List of device nodes available in a device tree layout
+ */
+struct dt_node {
+	struct device_node node; /**< Property of node */
+	int is_file; /**< FALSE==dir, TRUE==file */
+	struct list_head list; /**< Nodes within a parent subdir */
+};
+
+/**
+ * Types we use to represent directories and files
+ */
+struct dt_file;
+struct dt_dir {
+	struct dt_node node;
+	struct list_head subdirs;
+	struct list_head files;
+	struct list_head linear;
+	struct dt_dir *parent;
+	struct dt_file *compatible;
+	struct dt_file *status;
+	struct dt_file *lphandle;
+	struct dt_file *a_cells;
+	struct dt_file *s_cells;
+	struct dt_file *reg;
+};
+
+struct dt_file {
+	struct dt_node node;
+	struct dt_dir *parent;
+	ssize_t len;
+	/* uint64_t-sized so property data is read at 8-byte alignment */
+	uint64_t buf[OF_FILE_BUF_MAX >> 3];
+};
+
+const struct device_node *of_find_compatible_node(
+					const struct device_node *from,
+					const char *type __always_unused,
+					const char *compatible)
+	__attribute__((nonnull(3)));
+
+#define for_each_compatible_node(dev_node, type, compatible) \
+	for (dev_node = of_find_compatible_node(NULL, type, compatible); \
+		dev_node != NULL; \
+		dev_node = of_find_compatible_node(dev_node, type, compatible))
+
+const void *of_get_property(const struct device_node *from, const char *name,
+			    size_t *lenp) __attribute__((nonnull(2)));
+bool of_device_is_available(const struct device_node *dev_node);
+
+const struct device_node *of_find_node_by_phandle(phandle ph);
+
+const struct device_node *of_get_parent(const struct device_node *dev_node);
+
+const struct device_node *of_get_next_child(const struct device_node *dev_node,
+					    const struct device_node *prev);
+
+#define for_each_child_node(parent, child) \
+	for (child = of_get_next_child(parent, NULL); child != NULL; \
+			child = of_get_next_child(parent, child))
+
+uint32_t of_n_addr_cells(const struct device_node *dev_node);
+uint32_t of_n_size_cells(const struct device_node *dev_node);
+
+const uint32_t *of_get_address(const struct device_node *dev_node, size_t idx,
+			       uint64_t *size, uint32_t *flags);
+
+uint64_t of_translate_address(const struct device_node *dev_node,
+			      const u32 *addr) __attribute__((nonnull));
+
+bool of_device_is_compatible(const struct device_node *dev_node,
+			     const char *compatible);
+
+/* of_init() must be called prior to initialisation or use of any driver
+ * subsystem that is device-tree-dependent, e.g. QMan/BMan, the config layers,
+ * etc. The path should usually be "/proc/device-tree".
+ */
+ */
+int of_init_path(const char *dt_path);
+
+/* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
+ * full reload is desired without a process exit.
+ */
+void of_finish(void);
+
+/* Use of this wrapper is recommended. */
+static inline int of_init(void)
+{
+	return of_init_path(OF_INIT_DEFAULT_PATH);
+}
+
+/* Read a numeric property according to its size and return it as a 64-bit
+ * value.
+ */
+static inline uint64_t of_read_number(const __be32 *cell, int size)
+{
+	uint64_t r = 0;
+
+	while (size--)
+		r = (r << 32) | be32toh(*(cell++));
+	return r;
+}
+
+#endif	/*  __OF_H */
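
For illustration, a minimal sketch of how this OF layer is driven by a
consumer; the "fsl,qman-portal" compatible string and the function name are
only examples, not part of the patch:

#include <stdio.h>
#include <inttypes.h>
#include <of.h>

static int example_of_usage(void)
{
	const struct device_node *np;
	const __be32 *prop;
	size_t lenp;
	int ret = of_init();	/* parses OF_INIT_DEFAULT_PATH into dt_dir/dt_file */

	if (ret)
		return ret;
	for_each_compatible_node(np, NULL, "fsl,qman-portal") {
		prop = of_get_property(np, "cell-index", &lenp);
		if (prop)
			printf("%s: cell-index %" PRIu64 "\n", np->full_name,
			       of_read_number(prop, lenp / sizeof(*prop)));
	}
	of_finish();
	return 0;
}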
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 05/40] bus/dpaa: introducing FMan configurations
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (3 preceding siblings ...)
  2017-07-04 14:43   ` [PATCH v2 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
@ 2017-07-04 14:43   ` Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
                     ` (36 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

FMan, or Frame Manager, inspects traffic and splits it into queues on
ingress. It is also responsible for directing traffic onto queues on
egress.

This patch introduces the FMan configuration interfaces. This layer is
used by the bus driver for configuring the hardware block.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   2 +
 drivers/bus/dpaa/base/fman/fman.c         | 540 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c | 205 ++++++++++++
 drivers/bus/dpaa/include/fman.h           | 472 ++++++++++++++++++++++++++
 drivers/bus/dpaa/include/netcfg.h         |  96 ++++++
 5 files changed, 1315 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index cc685d1..49abdc7 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -65,6 +65,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/fman.c \
 	base/fman/of.c \
+	base/fman/netcfg_layer.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
new file mode 100644
index 0000000..f1cdcf1
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -0,0 +1,540 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+
+#include <rte_malloc.h>
+
+/* This header declares the driver interface we implement */
+#include <fman.h>
+#include <of.h>
+
+#define QMI_PORT_REGS_OFFSET		0x400
+
+/* CCSR map address to access ccsr based register */
+void *fman_ccsr_map;
+/* fman version info */
+u16 fman_ip_rev;
+static int get_once;
+u32 fman_dealloc_bufs_mask_hi;
+u32 fman_dealloc_bufs_mask_lo;
+
+int fman_ccsr_map_fd = -1;
+static COMPAT_LIST_HEAD(__ifs);
+
+/* This is the (const) global variable that callers have read-only access to.
+ * Internally, we have read-write access directly to __ifs.
+ */
+const struct list_head *fman_if_list = &__ifs;
+
+static void
+if_destructor(struct __fman_if *__if)
+{
+	struct fman_if_bpool *bp, *tmpbp;
+
+	if (__if->__if.mac_type == fman_offline)
+		goto cleanup;
+
+	list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+cleanup:
+	rte_free(__if);
+}
+
+static int
+fman_get_ip_rev(const struct device_node *fman_node)
+{
+	const uint32_t *fman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	uint32_t ip_rev_1;
+	int _errno;
+
+	fman_addr = of_get_address(fman_node, 0, &regs_size, NULL);
+	if (!fman_addr) {
+		pr_err("of_get_address cannot return fman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(fman_node, fman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+	fman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, fman_ccsr_map_fd, phys_addr);
+	if (fman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map FMan ccsr base");
+		return -EINVAL;
+	}
+
+	ip_rev_1 = in_be32(fman_ccsr_map + FMAN_IP_REV_1);
+	fman_ip_rev = (ip_rev_1 & FMAN_IP_REV_1_MAJOR_MASK) >>
+			FMAN_IP_REV_1_MAJOR_SHIFT;
+
+	_errno = munmap(fman_ccsr_map, regs_size);
+	if (_errno)
+		pr_err("munmap() of FMan ccsr failed");
+
+	return 0;
+}
+
+static int
+fman_get_mac_index(uint64_t regs_addr_host, uint8_t *mac_idx)
+{
+	int ret = 0;
+
+	/*
+	 * MAC1 : E_0000h
+	 * MAC2 : E_2000h
+	 * MAC3 : E_4000h
+	 * MAC4 : E_6000h
+	 * MAC5 : E_8000h
+	 * MAC6 : E_A000h
+	 * MAC7 : E_C000h
+	 * MAC8 : E_E000h
+	 * MAC9 : F_0000h
+	 * MAC10: F_2000h
+	 */
+	switch (regs_addr_host) {
+	case 0xE0000:
+		*mac_idx = 1;
+		break;
+	case 0xE2000:
+		*mac_idx = 2;
+		break;
+	case 0xE4000:
+		*mac_idx = 3;
+		break;
+	case 0xE6000:
+		*mac_idx = 4;
+		break;
+	case 0xE8000:
+		*mac_idx = 5;
+		break;
+	case 0xEA000:
+		*mac_idx = 6;
+		break;
+	case 0xEC000:
+		*mac_idx = 7;
+		break;
+	case 0xEE000:
+		*mac_idx = 8;
+		break;
+	case 0xF0000:
+		*mac_idx = 9;
+		break;
+	case 0xF2000:
+		*mac_idx = 10;
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+fman_if_init(const struct device_node *dpa_node)
+{
+	const char *rprop, *mprop;
+	uint64_t phys_addr;
+	struct __fman_if *__if;
+	struct fman_if_bpool *bpool;
+
+	const phandle *mac_phandle, *ports_phandle, *pools_phandle;
+	const phandle *tx_channel_id = NULL, *mac_addr, *cell_idx;
+	const phandle *rx_phandle, *tx_phandle;
+	uint64_t tx_phandle_host[4] = {0};
+	uint64_t rx_phandle_host[4] = {0};
+	uint64_t regs_addr_host = 0;
+	uint64_t cell_idx_host = 0;
+
+	const struct device_node *mac_node = NULL, *tx_node;
+	const struct device_node *pool_node, *fman_node, *rx_node;
+	const uint32_t *regs_addr = NULL;
+	const char *mname, *fname;
+	const char *dname = dpa_node->full_name;
+	size_t lenp;
+	int _errno;
+	const char *char_prop;
+	uint32_t na;
+
+	if (of_device_is_available(dpa_node) == false)
+		return 0;
+
+	rprop = "fsl,qman-frame-queues-rx";
+	mprop = "fsl,fman-mac";
+
+	/* Allocate an object for this network interface */
+	__if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE);
+	FMAN_ERR(!__if, -ENOMEM, "malloc(%zu)\n", sizeof(*__if));
+	memset(__if, 0, sizeof(*__if));
+	INIT_LIST_HEAD(&__if->__if.bpool_list);
+	strncpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
+	__if->node_path[PATH_MAX - 1] = '\0';
+
+	/* TODO: this handling needs to be revisited for MAC-less interfaces */
+	/* Obtain the MAC node used by this interface (except for MAC-less) */
+	mac_phandle = of_get_property(dpa_node, mprop, &lenp);
+	FMAN_ERR(!mac_phandle, -EINVAL, "%s: no %s\n", dname, mprop);
+	assert(lenp == sizeof(phandle));
+	mac_node = of_find_node_by_phandle(*mac_phandle);
+	FMAN_ERR(!mac_node, -ENXIO, "%s: bad 'fsl,fman-mac'\n", dname);
+	mname = mac_node->full_name;
+
+	/* Map the CCSR regs for the MAC node */
+	regs_addr = of_get_address(mac_node, 0, &__if->regs_size, NULL);
+	FMAN_ERR(!regs_addr, -EINVAL, "of_get_address(%s)\n", mname);
+	phys_addr = of_translate_address(mac_node, regs_addr);
+	FMAN_ERR(!phys_addr, -EINVAL, "of_translate_address(%s, %p)\n",
+		mname, regs_addr);
+	__if->ccsr_map = mmap(NULL, __if->regs_size,
+			      PROT_READ | PROT_WRITE, MAP_SHARED,
+			      fman_ccsr_map_fd, phys_addr);
+	FMAN_ERR(__if->ccsr_map == MAP_FAILED, -errno,
+		"mmap(0x%"PRIx64")\n", phys_addr);
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	regs_addr_host = of_read_number(regs_addr, na);
+
+
+	/* Get the index of the Fman this i/f belongs to */
+	fman_node = of_get_parent(mac_node);
+	na = of_n_addr_cells(mac_node);
+	FMAN_ERR(!fman_node, -ENXIO, "of_get_parent(%s)\n", mname);
+	fname = fman_node->full_name;
+	cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+	FMAN_ERR(!cell_idx, -ENXIO, "%s: no cell-index)\n", fname);
+	assert(lenp == sizeof(*cell_idx));
+	cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+	__if->__if.fman_idx = cell_idx_host;
+	if (!get_once) {
+		_errno = fman_get_ip_rev(fman_node);
+		FMAN_ERR(_errno, -ENXIO, "%s: ip_rev is not available\n",
+		       fname);
+		get_once = 1;
+	}
+
+	if (fman_ip_rev >= FMAN_V3) {
+		/*
+		 * Set A2V, OVOM, EBD bits in contextA to allow external
+		 * buffer deallocation by fman.
+		 */
+		fman_dealloc_bufs_mask_hi = FMAN_V3_CONTEXTA_EN_A2V |
+						FMAN_V3_CONTEXTA_EN_OVOM;
+		fman_dealloc_bufs_mask_lo = FMAN_V3_CONTEXTA_EN_EBD;
+	} else {
+		fman_dealloc_bufs_mask_hi = 0;
+		fman_dealloc_bufs_mask_lo = 0;
+	}
+	/* Is the MAC node 1G, 10G? */
+	__if->__if.is_memac = 0;
+
+	if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
+		__if->__if.mac_type = fman_mac_1g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
+		__if->__if.mac_type = fman_mac_10g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-memac")) {
+		/* memac (multi-rate Ethernet MAC) supports both 1G and 10G */
+		__if->__if.is_memac = 1;
+		char_prop = of_get_property(mac_node, "phy-connection-type",
+					    NULL);
+		if (!char_prop) {
+			printf("memac: unknown MII type assuming 1G\n");
+			/* Right now forcing memac to 1g in case of error*/
+			__if->__if.mac_type = fman_mac_1g;
+		} else {
+			if (strstr(char_prop, "sgmii"))
+				__if->__if.mac_type = fman_mac_1g;
+			else if (strstr(char_prop, "rgmii")) {
+				__if->__if.mac_type = fman_mac_1g;
+				__if->__if.is_rgmii = 1;
+			} else if (strstr(char_prop, "xgmii"))
+				__if->__if.mac_type = fman_mac_10g;
+		}
+	} else
+		FMAN_ERR(1, -EINVAL, "%s: unknown MAC type\n", mname);
+
+	/*
+	 * For MAC ports, we cannot rely on cell-index. In
+	 * T2080, two of the 10G ports on single FMAN have same
+	 * duplicate cell-indexes as the other two 10G ports on
+	 * same FMAN. Hence, we now rely upon addresses of the
+	 * ports from device tree to deduce the index.
+	 */
+
+	_errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
+	FMAN_ERR(_errno, -EINVAL, "Invalid register address: %lu",
+		 regs_addr_host);
+
+	/* Extract the MAC address for private and shared interfaces */
+	mac_addr = of_get_property(mac_node, "local-mac-address",
+				   &lenp);
+	FMAN_ERR(!mac_addr, -EINVAL, "%s: no local-mac-address\n",
+	       mname);
+	memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
+
+	/* Extract the Tx port (it's the second of the two port handles)
+	 * and get its channel ID
+	 */
+	ports_phandle = of_get_property(mac_node, "fsl,port-handles",
+					&lenp);
+	if (!ports_phandle)
+		ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+						&lenp);
+	FMAN_ERR(!ports_phandle, -EINVAL, "%s: no fsl,port-handles\n",
+	       mname);
+	assert(lenp == (2 * sizeof(phandle)));
+	tx_node = of_find_node_by_phandle(ports_phandle[1]);
+	FMAN_ERR(!tx_node, -ENXIO, "%s: bad fsl,port-handle[1]\n", mname);
+	/* Extract the channel ID (from tx-port-handle) */
+	tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
+					&lenp);
+	FMAN_ERR(!tx_channel_id, -EINVAL, "%s: no fsl,qman-channel-id\n",
+	       tx_node->full_name);
+
+	rx_node = of_find_node_by_phandle(ports_phandle[0]);
+	FMAN_ERR(!rx_node, -ENXIO, "%s: bad fsl,port-handle[0]\n", mname);
+	regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+	FMAN_ERR(!regs_addr, -EINVAL, "of_get_address(%s)\n", mname);
+	phys_addr = of_translate_address(rx_node, regs_addr);
+	FMAN_ERR(!phys_addr, -EINVAL, "of_translate_address(%s, %p)\n",
+	       mname, regs_addr);
+	__if->bmi_map = mmap(NULL, __if->regs_size,
+				 PROT_READ | PROT_WRITE, MAP_SHARED,
+				 fman_ccsr_map_fd, phys_addr);
+	FMAN_ERR(__if->bmi_map == MAP_FAILED, -errno,
+	       "mmap(0x%"PRIx64")\n", phys_addr);
+
+	/* No channel ID for MAC-less */
+	assert(lenp == sizeof(*tx_channel_id));
+	na = of_n_addr_cells(mac_node);
+	__if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+	/* Extract the Rx FQIDs. (Note, the device representation is silly,
+	 * there are "counts" that must always be 1.)
+	 */
+	rx_phandle = of_get_property(dpa_node, rprop, &lenp);
+	FMAN_ERR(!rx_phandle, -EINVAL, "%s: no fsl,qman-frame-queues-rx\n",
+	       dname);
+
+	assert(lenp == (4 * sizeof(phandle)));
+
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
+	rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
+	rx_phandle_host[2] = of_read_number(&rx_phandle[2], na);
+	rx_phandle_host[3] = of_read_number(&rx_phandle[3], na);
+
+	assert((rx_phandle_host[1] == 1) && (rx_phandle_host[3] == 1));
+	__if->__if.fqid_rx_err = rx_phandle_host[0];
+	__if->__if.fqid_rx_def = rx_phandle_host[2];
+
+	/* Extract the Tx FQIDs */
+	tx_phandle = of_get_property(dpa_node,
+				     "fsl,qman-frame-queues-tx", &lenp);
+	FMAN_ERR(!tx_phandle, -EINVAL, "%s: no fsl,qman-frame-queues-tx\n",
+	       dname);
+
+	assert(lenp == (4 * sizeof(phandle)));
+	/* TODO: fix for other cases also */
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	tx_phandle_host[0] = of_read_number(&tx_phandle[0], na);
+	tx_phandle_host[1] = of_read_number(&tx_phandle[1], na);
+	tx_phandle_host[2] = of_read_number(&tx_phandle[2], na);
+	tx_phandle_host[3] = of_read_number(&tx_phandle[3], na);
+	assert((tx_phandle_host[1] == 1) && (tx_phandle_host[3] == 1));
+	__if->__if.fqid_tx_err = tx_phandle_host[0];
+	__if->__if.fqid_tx_confirm = tx_phandle_host[2];
+
+	/* Obtain the buffer pool nodes used by this interface */
+	pools_phandle = of_get_property(dpa_node, "fsl,bman-buffer-pools",
+					&lenp);
+	FMAN_ERR(!pools_phandle, -EINVAL, "%s: no fsl,bman-buffer-pools\n",
+	       dname);
+	/* For each pool, parse the corresponding node and add a pool object
+	 * to the interface's "bpool_list"
+	 */
+	assert(lenp && !(lenp % sizeof(phandle)));
+	while (lenp) {
+		size_t proplen;
+		const phandle *prop;
+		uint64_t bpid_host = 0;
+		uint64_t bpool_host[6] = {0};
+		const char *pname;
+		/* Allocate an object for the pool */
+		bpool = rte_malloc(NULL, sizeof(*bpool), RTE_CACHE_LINE_SIZE);
+		FMAN_ERR(!bpool, -ENOMEM, "malloc(%zu)\n", sizeof(*bpool));
+		/* Find the pool node */
+		pool_node = of_find_node_by_phandle(*pools_phandle);
+		FMAN_ERR(!pool_node, -ENXIO, "%s: bad fsl,bman-buffer-pools\n",
+		       dname);
+		pname = pool_node->full_name;
+		/* Extract the BPID property */
+		prop = of_get_property(pool_node, "fsl,bpid", &proplen);
+		FMAN_ERR(!prop, -EINVAL, "%s: no fsl,bpid\n", pname);
+		assert(proplen == sizeof(*prop));
+		na = of_n_addr_cells(mac_node);
+		/* Get rid of endianness (issues).
+		 * Convert to host byte-order
+		 */
+		bpid_host = of_read_number(prop, na);
+		bpool->bpid = bpid_host;
+		/* Extract the cfg property (count/size/addr). "fsl,bpool-cfg"
+		 * indicates for the Bman driver to seed the pool.
+		 * "fsl,bpool-ethernet-cfg" is used by the network driver. The
+		 * two are mutually exclusive, so check for either of them.
+		 */
+		prop = of_get_property(pool_node, "fsl,bpool-cfg",
+				       &proplen);
+		if (!prop)
+			prop = of_get_property(pool_node,
+					       "fsl,bpool-ethernet-cfg",
+					       &proplen);
+		if (!prop) {
+			/* It's OK for there to be no bpool-cfg */
+			bpool->count = bpool->size = bpool->addr = 0;
+		} else {
+			assert(proplen == (6 * sizeof(*prop)));
+			na = of_n_addr_cells(mac_node);
+			/* Get rid of endianness (issues).
+			 * Convert to host byte order
+			 */
+			bpool_host[0] = of_read_number(&prop[0], na);
+			bpool_host[1] = of_read_number(&prop[1], na);
+			bpool_host[2] = of_read_number(&prop[2], na);
+			bpool_host[3] = of_read_number(&prop[3], na);
+			bpool_host[4] = of_read_number(&prop[4], na);
+			bpool_host[5] = of_read_number(&prop[5], na);
+
+			bpool->count = ((uint64_t)bpool_host[0] << 32) |
+					bpool_host[1];
+			bpool->size = ((uint64_t)bpool_host[2] << 32) |
+					bpool_host[3];
+			bpool->addr = ((uint64_t)bpool_host[4] << 32) |
+					bpool_host[5];
+		}
+		/* Parsing of the pool is complete, add it to the interface
+		 * list.
+		 */
+		list_add_tail(&bpool->node, &__if->__if.bpool_list);
+		lenp -= sizeof(phandle);
+		pools_phandle++;
+	}
+
+	/* Parsing of the network interface is complete, add it to the list */
+	DPAA_BUS_LOG(DEBUG, "Found %s, Tx Channel = %x, FMAN = %x,"
+		    "Port ID = %x\n",
+		    dname, __if->__if.tx_channel_id, __if->__if.fman_idx,
+		    __if->__if.mac_idx);
+
+	list_add_tail(&__if->__if.node, &__ifs);
+	return 0;
+err:
+	if_destructor(__if);
+	return _errno;
+}
+
+int
+fman_init(void)
+{
+	const struct device_node *dpa_node;
+	int _errno;
+
+	/* If multiple dependencies try to initialise the Fman driver, don't
+	 * panic.
+	 */
+	if (fman_ccsr_map_fd != -1)
+		return 0;
+
+	fman_ccsr_map_fd = open(FMAN_DEVICE_PATH, O_RDWR);
+	if (unlikely(fman_ccsr_map_fd < 0)) {
+		DPAA_BUS_LOG(ERR, "Unable to open (/dev/mem)");
+		return fman_ccsr_map_fd;
+	}
+
+	for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-init") {
+		_errno = fman_if_init(dpa_node);
+		FMAN_ERR(_errno, _errno, "if_init(%s)\n", dpa_node->full_name);
+	}
+
+	return 0;
+err:
+	fman_finish();
+	return _errno;
+}
+
+void
+fman_finish(void)
+{
+	struct __fman_if *__if, *tmpif;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	list_for_each_entry_safe(__if, tmpif, &__ifs, __if.node) {
+		int _errno;
+
+		/* disable Rx and Tx */
+		if ((__if->__if.mac_type == fman_mac_1g) &&
+		    (!__if->__if.is_memac))
+			out_be32(__if->ccsr_map + 0x100,
+				 in_be32(__if->ccsr_map + 0x100) & ~(u32)0x5);
+		else
+			out_be32(__if->ccsr_map + 8,
+				 in_be32(__if->ccsr_map + 8) & ~(u32)3);
+		/* release the mapping */
+		_errno = munmap(__if->ccsr_map, __if->regs_size);
+		if (unlikely(_errno < 0))
+			fprintf(stderr, "%s:%hu:%s(): munmap() = %d (%s)\n",
+				__FILE__, __LINE__, __func__,
+				-errno, strerror(errno));
+		printf("Tearing down %s\n", __if->node_path);
+		list_del(&__if->__if.node);
+		rte_free(__if);
+	}
+
+	close(fman_ccsr_map_fd);
+	fman_ccsr_map_fd = -1;
+}
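
As an aside on fman_get_mac_index() above: the MAC register windows sit at
0x2000-byte intervals starting at 0xE0000 (MAC1) up to 0xF2000 (MAC10), so
the explicit switch is equivalent to this arithmetic form (an editorial
sketch, not part of the patch):

#include <stdint.h>
#include <errno.h>

static int example_mac_index(uint64_t regs_addr_host, uint8_t *mac_idx)
{
	if (regs_addr_host < 0xE0000 || regs_addr_host > 0xF2000 ||
	    (regs_addr_host - 0xE0000) % 0x2000)
		return -EINVAL;
	*mac_idx = (regs_addr_host - 0xE0000) / 0x2000 + 1;
	return 0;
}

The switch form used in the patch keeps the device-tree-to-index mapping
explicit and greppable, at the cost of some verbosity.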
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
new file mode 100644
index 0000000..e3a0ced
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -0,0 +1,205 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <inttypes.h>
+#include <of.h>
+#include <net/if.h>
+#include <sys/ioctl.h>
+#include <error.h>
+#include <net/if_arp.h>
+#include <assert.h>
+#include <unistd.h>
+
+#include <rte_malloc.h>
+
+#include <rte_dpaa_logs.h>
+#include <netcfg.h>
+
+/* Structure contains information about all the interfaces given by the user
+ * on the command line.
+ */
+struct netcfg_interface *netcfg_interface;
+
+/* This data structure contains all configuration information
+ * related to usage of DPA devices.
+ */
+struct netcfg_info *netcfg;
+/* fd to open a socket for making ioctl requests to disable/enable shared
+ * interfaces.
+ */
+static int skfd = -1;
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dump_netcfg(struct netcfg_info *cfg_ptr)
+{
+	int i;
+
+	printf("..........  DPAA Configuration  ..........\n\n");
+
+	/* Network interfaces */
+	printf("Network interfaces: %d\n", cfg_ptr->num_ethports);
+	for (i = 0; i < cfg_ptr->num_ethports; i++) {
+		struct fman_if_bpool *bpool;
+		struct fm_eth_port_cfg *p_cfg = &cfg_ptr->port_cfg[i];
+		struct fman_if *__if = p_cfg->fman_if;
+
+		printf("\n+ Fman %d, MAC %d (%s);\n",
+		       __if->fman_idx, __if->mac_idx,
+		       (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+
+		printf("\tmac_addr: " ETH_MAC_PRINTF_FMT "\n",
+		       ETH_MAC_PRINTF_ARGS(&__if->mac_addr));
+
+		printf("\ttx_channel_id: 0x%02x\n",
+		       __if->tx_channel_id);
+
+		printf("\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
+		printf("\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
+
+		printf("\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
+		printf("\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
+		fman_if_for_each_bpool(bpool, __if)
+			printf("\tbuffer pool: (bpid=%d, count=%"PRId64
+			       " size=%"PRId64", addr=0x%"PRIx64")\n",
+			       bpool->bpid, bpool->count, bpool->size,
+			       bpool->addr);
+	}
+}
+#endif /* RTE_LIBRTE_DPAA_DEBUG_DRIVER */
+
+static inline int
+get_num_netcfg_interfaces(char *str)
+{
+	char *pch;
+	uint8_t count = 0;
+
+	if (str == NULL)
+		return -EINVAL;
+	pch = strtok(str, ",");
+	while (pch != NULL) {
+		count++;
+		pch = strtok(NULL, ",");
+	}
+	return count;
+}
+
+struct netcfg_info *
+netcfg_acquire(void)
+{
+	struct fman_if *__if;
+	int _errno, idx = 0;
+	uint8_t num_ports = 0;
+	uint8_t num_cfg_ports = 0;
+	size_t size;
+
+	/* Extract dpa configuration from fman driver and FMC configuration
+	 * for command-line interfaces.
+	 */
+
+	if (skfd == -1) {
+		/* Open a basic socket to enable/disable shared
+		 * interfaces.
+		 */
+		skfd = socket(AF_PACKET, SOCK_RAW, 0);
+		if (unlikely(skfd < 0)) {
+			/* TODO: replace error() with the DPAA logging framework */
+			error(0, errno, "%s(): open(SOCK_RAW)", __func__);
+			return NULL;
+		}
+	}
+
+	/* Initialise the Fman driver */
+	_errno = fman_init();
+	if (_errno) {
+		DPAA_BUS_LOG(ERR, "FMAN driver init failed (%d)", errno);
+		return NULL;
+	}
+
+	/* Number of MAC ports */
+	list_for_each_entry(__if, fman_if_list, node)
+		num_ports++;
+
+	if (!num_ports) {
+		DPAA_BUS_LOG(ERR, "FMAN ports not available");
+		return NULL;
+	}
+	/* Allocate space for all enabled mac ports */
+	size = sizeof(*netcfg) +
+		(num_ports * sizeof(struct fm_eth_port_cfg));
+	/* Zeroed allocation so all port_cfg entries start clean */
+	netcfg = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
+	if (unlikely(netcfg == NULL)) {
+		DPAA_BUS_LOG(ERR, "Unable to allocat mem for netcfg");
+		goto error;
+	}
+
+	netcfg->num_ethports = num_ports;
+
+	list_for_each_entry(__if, fman_if_list, node) {
+		struct fm_eth_port_cfg *cfg = &netcfg->port_cfg[idx];
+		/* Hook in the fman driver interface */
+		cfg->fman_if = __if;
+		cfg->rx_def = __if->fqid_rx_def;
+		num_cfg_ports++;
+		idx++;
+	}
+
+	if (!num_cfg_ports) {
+		DPAA_BUS_LOG(ERR, "No FMAN ports found");
+		goto error;
+	} else if (num_ports != num_cfg_ports)
+		netcfg->num_ethports = num_cfg_ports;
+
+	return netcfg;
+
+error:
+	rte_free(netcfg);
+	netcfg = NULL;
+	return NULL;
+}
+
+void
+netcfg_release(struct netcfg_info *cfg_ptr)
+{
+	rte_free(cfg_ptr);
+	/* Close socket for shared interfaces */
+	if (skfd >= 0) {
+		close(skfd);
+		skfd = -1;
+	}
+}
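
A minimal sketch of a consumer of this layer (hypothetical code, not part of
the patch): acquire the configuration, walk the detected ports, release:

#include <stdio.h>
#include <netcfg.h>

static void example_netcfg_walk(void)
{
	struct netcfg_info *cfg = netcfg_acquire();
	int i;

	if (cfg == NULL)
		return;
	for (i = 0; i < cfg->num_ethports; i++) {
		const struct fm_eth_port_cfg *port = &cfg->port_cfg[i];

		printf("port %d: FMan %d MAC %d rx_def 0x%x\n", i,
		       port->fman_if->fman_idx, port->fman_if->mac_idx,
		       port->rx_def);
	}
	netcfg_release(cfg);
}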
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
new file mode 100644
index 0000000..19105bb
--- /dev/null
+++ b/drivers/bus/dpaa/include/fman.h
@@ -0,0 +1,472 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FMAN_H
+#define __FMAN_H
+
+#include <stdbool.h>
+#include <net/if.h>
+
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+
+#include <compat.h>
+#include <rte_dpaa_logs.h>
+
+#ifndef FMAN_DEVICE_PATH
+#define FMAN_DEVICE_PATH "/dev/mem"
+#endif
+
+#define MEMAC_NUM_OF_PADDRS 7 /* Num of additional exact match MAC adr regs */
+
+/* Control and Configuration Register (COMMAND_CONFIG) for MEMAC */
+#define CMD_CFG_LOOPBACK_EN	0x00000400
+/**< 21 XGMII/GMII loopback enable */
+#define CMD_CFG_PROMIS_EN	0x00000010
+/**< 27 Promiscuous operation enable */
+#define CMD_CFG_PAUSE_IGNORE	0x00000100
+/**< 23 Ignore Pause frame quanta */
+
+/* Statistics Configuration Register (STATN_CONFIG) */
+#define STATS_CFG_CLR           0x00000004
+/**< 29 Reset all counters */
+#define STATS_CFG_CLR_ON_RD     0x00000002
+/**< 30 Clear on read */
+#define STATS_CFG_SATURATE      0x00000001
+/**< 31 Saturate at the maximum val */
+
+/**< Max receive frame length mask */
+#define MAXFRM_SIZE_MEMAC	0x00007fe0
+#define MAXFRM_RX_MASK		0x0000ffff
+
+/**< Interface Mode Register for MEMAC */
+#define IF_MODE_RLP 0x00000820
+
+/**< Pool Limits */
+#define FMAN_PORT_MAX_EXT_POOLS_NUM	8
+#define FMAN_PORT_OBS_EXT_POOLS_NUM	2
+
+#define FMAN_PORT_CG_MAP_NUM		8
+#define FMAN_PORT_PRS_RESULT_WORDS_NUM	8
+#define FMAN_PORT_BMI_FIFO_UNITS	0x100
+#define FMAN_PORT_IC_OFFSET_UNITS	0x10
+
+#define FMAN_ENABLE_BPOOL_DEPLETION	0xF00000F0
+
+#define HASH_CTRL_MCAST_EN	0x00000100
+#define GROUP_ADDRESS		0x0000010000000000LL
+#define HASH_CTRL_ADDR_MASK	0x0000003F
+
+/* Pre definitions of FMAN interface and Bpool structures */
+struct __fman_if;
+struct fman_if_bpool;
+/* Lists of fman interfaces and bpools */
+TAILQ_HEAD(rte_fman_if_list, __fman_if);
+
+/* Represents the different flavour of network interface */
+enum fman_mac_type {
+	fman_offline = 0, /* TODO: decide whether offline ports should be removed */
+	fman_mac_1g,
+	fman_mac_10g,
+};
+
+struct mac_addr {
+	uint32_t   mac_addr_l;	/**< Lower 32 bits of 48-bit MAC address */
+	uint32_t   mac_addr_u;	/**< Upper 16 bits of 48-bit MAC address */
+};
+
+struct memac_regs {
+	/* General Control and Status */
+	uint32_t res0000[2];
+	uint32_t command_config;	/**< 0x008 Ctrl and cfg */
+	struct mac_addr mac_addr0;	/**< 0x00C-0x010 MAC_ADDR_0...1 */
+	uint32_t maxfrm;		/**< 0x014 Max frame length */
+	uint32_t res0018[5];
+	uint32_t hashtable_ctrl;	/**< 0x02C Hash table control */
+	uint32_t res0030[4];
+	uint32_t ievent;		/**< 0x040 Interrupt event */
+	uint32_t tx_ipg_length;
+	/**< 0x044 Transmitter inter-packet-gap */
+	uint32_t res0048;
+	uint32_t imask;			/**< 0x04C Interrupt mask */
+	uint32_t res0050;
+	uint32_t pause_quanta[4];	/**< 0x054 Pause quanta */
+	uint32_t pause_thresh[4];	/**< 0x064 Pause quanta threshold */
+	uint32_t rx_pause_status;	/**< 0x074 Receive pause status */
+	uint32_t res0078[2];
+	struct mac_addr mac_addr[MEMAC_NUM_OF_PADDRS];
+	/**< 0x80-0x0B4 mac padr */
+	uint32_t lpwake_timer;
+	/**< 0x0B8 Low Power Wakeup Timer */
+	uint32_t sleep_timer;
+	/**< 0x0BC Transmit EEE Low Power Timer */
+	uint32_t res00c0[8];
+	uint32_t statn_config;
+	/**< 0x0E0 Statistics configuration */
+	uint32_t res00e4[7];
+	/* Rx Statistics Counter */
+	uint32_t reoct_l;
+	uint32_t reoct_u;
+	uint32_t roct_l;
+	uint32_t roct_u;
+	uint32_t raln_l;
+	uint32_t raln_u;
+	uint32_t rxpf_l;
+	uint32_t rxpf_u;
+	uint32_t rfrm_l;
+	uint32_t rfrm_u;
+	uint32_t rfcs_l;
+	uint32_t rfcs_u;
+	uint32_t rvlan_l;
+	uint32_t rvlan_u;
+	uint32_t rerr_l;
+	uint32_t rerr_u;
+	uint32_t ruca_l;
+	uint32_t ruca_u;
+	uint32_t rmca_l;
+	uint32_t rmca_u;
+	uint32_t rbca_l;
+	uint32_t rbca_u;
+	uint32_t rdrp_l;
+	uint32_t rdrp_u;
+	uint32_t rpkt_l;
+	uint32_t rpkt_u;
+	uint32_t rund_l;
+	uint32_t rund_u;
+	uint32_t r64_l;
+	uint32_t r64_u;
+	uint32_t r127_l;
+	uint32_t r127_u;
+	uint32_t r255_l;
+	uint32_t r255_u;
+	uint32_t r511_l;
+	uint32_t r511_u;
+	uint32_t r1023_l;
+	uint32_t r1023_u;
+	uint32_t r1518_l;
+	uint32_t r1518_u;
+	uint32_t r1519x_l;
+	uint32_t r1519x_u;
+	uint32_t rovr_l;
+	uint32_t rovr_u;
+	uint32_t rjbr_l;
+	uint32_t rjbr_u;
+	uint32_t rfrg_l;
+	uint32_t rfrg_u;
+	uint32_t rcnp_l;
+	uint32_t rcnp_u;
+	uint32_t rdrntp_l;
+	uint32_t rdrntp_u;
+	uint32_t res01d0[12];
+	/* Tx Statistics Counter */
+	uint32_t teoct_l;
+	uint32_t teoct_u;
+	uint32_t toct_l;
+	uint32_t toct_u;
+	uint32_t res0210[2];
+	uint32_t txpf_l;
+	uint32_t txpf_u;
+	uint32_t tfrm_l;
+	uint32_t tfrm_u;
+	uint32_t tfcs_l;
+	uint32_t tfcs_u;
+	uint32_t tvlan_l;
+	uint32_t tvlan_u;
+	uint32_t terr_l;
+	uint32_t terr_u;
+	uint32_t tuca_l;
+	uint32_t tuca_u;
+	uint32_t tmca_l;
+	uint32_t tmca_u;
+	uint32_t tbca_l;
+	uint32_t tbca_u;
+	uint32_t res0258[2];
+	uint32_t tpkt_l;
+	uint32_t tpkt_u;
+	uint32_t tund_l;
+	uint32_t tund_u;
+	uint32_t t64_l;
+	uint32_t t64_u;
+	uint32_t t127_l;
+	uint32_t t127_u;
+	uint32_t t255_l;
+	uint32_t t255_u;
+	uint32_t t511_l;
+	uint32_t t511_u;
+	uint32_t t1023_l;
+	uint32_t t1023_u;
+	uint32_t t1518_l;
+	uint32_t t1518_u;
+	uint32_t t1519x_l;
+	uint32_t t1519x_u;
+	uint32_t res02a8[6];
+	uint32_t tcnp_l;
+	uint32_t tcnp_u;
+	uint32_t res02c8[14];
+	/* Line Interface Control */
+	uint32_t if_mode;		/**< 0x300 Interface Mode Control */
+	uint32_t if_status;		/**< 0x304 Interface Status */
+	uint32_t res0308[14];
+	/* HiGig/2 */
+	uint32_t hg_config;		/**< 0x340 Control and cfg */
+	uint32_t res0344[3];
+	uint32_t hg_pause_quanta;	/**< 0x350 Pause quanta */
+	uint32_t res0354[3];
+	uint32_t hg_pause_thresh;	/**< 0x360 Pause quanta threshold */
+	uint32_t res0364[3];
+	uint32_t hgrx_pause_status;	/**< 0x370 Receive pause status */
+	uint32_t hg_fifos_status;	/**< 0x374 fifos status */
+	uint32_t rhm;			/**< 0x378 rx messages counter */
+	uint32_t thm;			/**< 0x37C tx messages counter */
+};
+
+struct rx_bmi_regs {
+	uint32_t fmbm_rcfg;		/**< Rx Configuration */
+	uint32_t fmbm_rst;		/**< Rx Status */
+	uint32_t fmbm_rda;		/**< Rx DMA attributes*/
+	uint32_t fmbm_rfp;		/**< Rx FIFO Parameters*/
+	uint32_t fmbm_rfed;		/**< Rx Frame End Data*/
+	uint32_t fmbm_ricp;		/**< Rx Internal Context Parameters*/
+	uint32_t fmbm_rim;		/**< Rx Internal Buffer Margins*/
+	uint32_t fmbm_rebm;		/**< Rx External Buffer Margins*/
+	uint32_t fmbm_rfne;		/**< Rx Frame Next Engine*/
+	uint32_t fmbm_rfca;		/**< Rx Frame Command Attributes.*/
+	uint32_t fmbm_rfpne;		/**< Rx Frame Parser Next Engine*/
+	uint32_t fmbm_rpso;		/**< Rx Parse Start Offset*/
+	uint32_t fmbm_rpp;		/**< Rx Policer Profile  */
+	uint32_t fmbm_rccb;		/**< Rx Coarse Classification Base */
+	uint32_t fmbm_reth;		/**< Rx Excessive Threshold */
+	uint32_t reserved003c[1];	/**< (0x03C 0x03F) */
+	uint32_t fmbm_rprai[FMAN_PORT_PRS_RESULT_WORDS_NUM];
+					/**< Rx Parse Results Array Init*/
+	uint32_t fmbm_rfqid;		/**< Rx Frame Queue ID*/
+	uint32_t fmbm_refqid;		/**< Rx Error Frame Queue ID*/
+	uint32_t fmbm_rfsdm;		/**< Rx Frame Status Discard Mask*/
+	uint32_t fmbm_rfsem;		/**< Rx Frame Status Error Mask*/
+	uint32_t fmbm_rfene;		/**< Rx Frame Enqueue Next Engine */
+	uint32_t reserved0074[0x2];	/**< (0x074-0x07C)  */
+	uint32_t fmbm_rcmne;
+	/**< Rx Frame Continuous Mode Next Engine */
+	uint32_t reserved0080[0x20];/**< (0x080 0x0FF)  */
+	uint32_t fmbm_ebmpi[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Buffer Manager pool Information-*/
+	uint32_t fmbm_acnt[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Allocate Counter-*/
+	uint32_t reserved0130[8];
+					/**< 0x130/0x140 - 0x15F reserved -*/
+	uint32_t fmbm_rcgm[FMAN_PORT_CG_MAP_NUM];
+					/**< Congestion Group Map*/
+	uint32_t fmbm_mpd;		/**< BM Pool Depletion  */
+	uint32_t reserved0184[0x1F];	/**< (0x184 0x1FF) */
+	uint32_t fmbm_rstc;		/**< Rx Statistics Counters*/
+	uint32_t fmbm_rfrc;		/**< Rx Frame Counter*/
+	uint32_t fmbm_rfbc;		/**< Rx Bad Frames Counter*/
+	uint32_t fmbm_rlfc;		/**< Rx Large Frames Counter*/
+	uint32_t fmbm_rffc;		/**< Rx Filter Frames Counter*/
+	uint32_t fmbm_rfdc;		/**< Rx Frame Discard Counter*/
+	uint32_t fmbm_rfldec;		/**< Rx Frames List DMA Error Counter*/
+	uint32_t fmbm_rodc;		/**< Rx Out of Buffers Discard Counter*/
+	uint32_t fmbm_rbdc;		/**< Rx Buffers Deallocate Counter*/
+	uint32_t reserved0224[0x17];	/**< (0x224 0x27F) */
+	uint32_t fmbm_rpc;		/**< Rx Performance Counters*/
+	uint32_t fmbm_rpcp;		/**< Rx Performance Count Parameters*/
+	uint32_t fmbm_rccn;		/**< Rx Cycle Counter*/
+	uint32_t fmbm_rtuc;		/**< Rx Tasks Utilization Counter*/
+	uint32_t fmbm_rrquc;
+	/**< Rx Receive Queue Utilization cntr*/
+	uint32_t fmbm_rduc;		/**< Rx DMA Utilization Counter*/
+	uint32_t fmbm_rfuc;		/**< Rx FIFO Utilization Counter*/
+	uint32_t fmbm_rpac;		/**< Rx Pause Activation Counter*/
+	uint32_t reserved02a0[0x18];	/**< (0x2A0 0x2FF) */
+	uint32_t fmbm_rdbg;		/**< Rx Debug-*/
+};
+
+struct fman_port_qmi_regs {
+	uint32_t fmqm_pnc;		/**< PortID n Configuration Register */
+	uint32_t fmqm_pns;		/**< PortID n Status Register */
+	uint32_t fmqm_pnts;		/**< PortID n Task Status Register */
+	uint32_t reserved00c[4];	/**< 0xn00C - 0xn01B */
+	uint32_t fmqm_pnen;		/**< PortID n Enqueue NIA Register */
+	uint32_t fmqm_pnetfc;		/**< PortID n Enq Total Frame Counter */
+	uint32_t reserved024[2];	/**< 0xn024 - 0x02B */
+	uint32_t fmqm_pndn;		/**< PortID n Dequeue NIA Register */
+	uint32_t fmqm_pndc;		/**< PortID n Dequeue Config Register */
+	uint32_t fmqm_pndtfc;		/**< PortID n Dequeue tot Frame cntr */
+	uint32_t fmqm_pndfdc;		/**< PortID n Dequeue FQID Dflt Cntr */
+	uint32_t fmqm_pndcc;		/**< PortID n Dequeue Confirm Counter */
+};
+
+/* This struct exports parameters about an Fman network interface, determined
+ * from the device-tree.
+ */
+struct fman_if {
+	/* Which Fman this interface belongs to */
+	uint8_t fman_idx;
+	/* The type/speed of the interface */
+	enum fman_mac_type mac_type;
+	/* Boolean, set when mac type is memac */
+	uint8_t is_memac;
+	/* Boolean, set when PHY is RGMII */
+	uint8_t is_rgmii;
+	/* The index of this MAC (within the Fman it belongs to) */
+	uint8_t mac_idx;
+	/* The MAC address */
+	struct ether_addr mac_addr;
+	/* The Qman channel to schedule Tx FQs to */
+	u16 tx_channel_id;
+	/* The hard-coded FQIDs for this interface. Note: this doesn't cover
+	 * the PCD nor the "Rx default" FQIDs, which are configured via FMC
+	 * and its XML-based configuration.
+	 */
+	uint32_t fqid_rx_def;
+	uint32_t fqid_rx_err;
+	uint32_t fqid_tx_err;
+	uint32_t fqid_tx_confirm;
+
+	struct list_head bpool_list;
+	/* The node for linking this interface into "fman_if_list" */
+	struct list_head node;
+};
+
+/* This struct exposes parameters for buffer pools, extracted from the network
+ * interface settings in the device tree.
+ */
+struct fman_if_bpool {
+	uint32_t bpid;
+	uint64_t count;
+	uint64_t size;
+	uint64_t addr;
+	/* The node for linking this bpool into fman_if::bpool_list */
+	struct list_head node;
+};
+
+/* Internal Context transfer params - FMBM_RICP*/
+struct fman_if_ic_params {
+	/*IC offset in the packet buffer */
+	uint16_t iceof;
+	/*IC internal offset */
+	uint16_t iciof;
+	/*IC size to copy */
+	uint16_t icsz;
+};
+
+/* The exported "struct fman_if" type contains the subset of fields we want
+ * exposed. This struct is embedded in a larger "struct __fman_if" which
+ * contains the extra bits we *don't* want exposed.
+ */
+struct __fman_if {
+	struct fman_if __if;
+	char node_path[PATH_MAX];
+	uint64_t regs_size;
+	void *ccsr_map;
+	void *bmi_map;
+	void *qmi_map;
+	struct list_head node;
+};
+
+/* And this is the base list node that the interfaces are added to. (See
+ * fman_if_enable_all_rx() below for an example of its use.)
+ */
+extern const struct list_head *fman_if_list;
+
+/* To display MAC addresses (of type "struct ether_addr") via printf()-style
+ * interfaces, these macros may come in handy. Eg;
+ *        struct fman_if *p = get_ptr_to_some_interface();
+ *        printf("MAC address is " ETH_MAC_PRINTF_FMT "\n",
+ *               ETH_MAC_PRINTF_ARGS(&p->mac_addr));
+ */
+#define ETH_MAC_PRINTF_FMT "%02x:%02x:%02x:%02x:%02x:%02x"
+#define ETH_MAC_PRINTF_ARGS(a) \
+		(a)->addr_bytes[0], (a)->addr_bytes[1], \
+		(a)->addr_bytes[2], (a)->addr_bytes[3], \
+		(a)->addr_bytes[4], (a)->addr_bytes[5]
+
+/* To iterate the "bpool_list" for an interface. Eg;
+ *        struct fman_if *p = get_ptr_to_some_interface();
+ *        struct fman_if_bpool *bp;
+ *        printf("Interface uses following BPIDs;\n");
+ *        fman_if_for_each_bpool(bp, p) {
+ *            printf("    %d\n", bp->bpid);
+ *            [...]
+ *        }
+ */
+#define fman_if_for_each_bpool(bp, __if) \
+	list_for_each_entry(bp, &(__if)->bpool_list, node)
+
+#define FMAN_ERR(cond, rc, fmt, args...) \
+	do { \
+		if (unlikely(cond)) { \
+			_errno = (rc); \
+			DPAA_BUS_LOG(ERR, fmt "(%d)", ##args, errno); \
+			goto err; \
+		} \
+	} while (0)
+
+#define FMAN_IP_REV_1	0xC30C4
+#define FMAN_IP_REV_1_MAJOR_MASK 0x0000FF00
+#define FMAN_IP_REV_1_MAJOR_SHIFT 8
+#define FMAN_V3	0x06
+#define FMAN_V3_CONTEXTA_EN_A2V	0x10000000
+#define FMAN_V3_CONTEXTA_EN_OVOM	0x02000000
+#define FMAN_V3_CONTEXTA_EN_EBD	0x80000000
+#define FMAN_CONTEXTA_DIS_CHECKSUM	0x7ull
+#define FMAN_CONTEXTA_SET_OPCODE11 0x2000000b00000000
+extern u16 fman_ip_rev;
+extern u32 fman_dealloc_bufs_mask_hi;
+extern u32 fman_dealloc_bufs_mask_lo;
+
+/**
+ * Initialize the FMAN driver
+ *
+ * @args void
+ * @return
+ *	0 for success; error OTHERWISE
+ */
+int fman_init(void);
+
+/**
+ * Teardown the FMAN driver
+ *
+ * @args void
+ * @return void
+ */
+void fman_finish(void);
+
+#endif	/* __FMAN_H */
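
The FMAN_ERR() macro above assumes a calling convention: the enclosing
function declares a local "int _errno" and provides an "err:" cleanup label.
A sketch of that convention (hypothetical code, not part of the patch):

#include <errno.h>
#include <fman.h>
#include <of.h>

static int example_fman_err_usage(const struct device_node *node)
{
	const void *prop;
	size_t lenp;
	int _errno;

	prop = of_get_property(node, "fsl,bpid", &lenp);
	FMAN_ERR(!prop, -EINVAL, "%s: no fsl,bpid\n", node->full_name);
	return 0;
err:
	return _errno;
}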
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
new file mode 100644
index 0000000..b77a678
--- /dev/null
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -0,0 +1,96 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __NETCFG_H
+#define __NETCFG_H
+
+#include <fman.h>
+#include <argp.h>
+
+/* Configuration information related to a specific ethernet port */
+struct fm_eth_port_cfg {
+	/** A list of PCD FQ ranges, obtained from FMC configuration */
+	struct list_head *list;
+	/** The "Rx default" FQID, obtained from FMC configuration */
+	uint32_t rx_def;
+	/** Other interface details are in the fman driver interface */
+	struct fman_if *fman_if;
+};
+
+struct netcfg_info {
+	uint8_t num_ethports;
+	/**< Number of ports */
+	struct fm_eth_port_cfg port_cfg[0];
+	/**< Variable structure array of size num_ethports */
+};
+
+struct interface_info {
+	char *name;
+	struct ether_addr mac_addr;
+	struct ether_addr peer_mac;
+	int mac_present;
+	int fman_enabled_mac_interface;
+};
+
+struct netcfg_interface {
+	uint8_t numof_netcfg_interface;
+	uint8_t numof_fman_enabled_macless;
+	struct interface_info interface_info[0];
+};
+
+/* Extracts the configuration from the fman driver and returns it in newly
+ * allocated memory. (This layer no longer takes the FMC netpcd "policy"
+ * and config XML files as arguments.)
+ */
+struct netcfg_info *netcfg_acquire(void);
+
+/* cfg_ptr: configuration information pointer.
+ * Frees the resources allocated by the configuration layer.
+ */
+void netcfg_release(struct netcfg_info *cfg_ptr);
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* cfg_ptr: configuration information pointer.
+ * This function dumps configuration data to stdout.
+ */
+void dump_netcfg(struct netcfg_info *cfg_ptr);
+#endif
+
+#endif /* __NETCFG_H */
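
Note that port_cfg[0] makes struct netcfg_info variable-sized; callers size
the allocation the way netcfg_acquire() does. A sketch of the idiom
(illustrative only, not part of the patch):

#include <stdlib.h>
#include <netcfg.h>

static struct netcfg_info *example_alloc_netcfg(uint8_t n)
{
	size_t size = sizeof(struct netcfg_info) +
		      n * sizeof(struct fm_eth_port_cfg);
	struct netcfg_info *cfg = calloc(1, size);	/* zeroed, like rte_zmalloc */

	if (cfg)
		cfg->num_ethports = n;
	return cfg;
}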
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 06/40] bus/dpaa: add FMan hardware operations
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (4 preceding siblings ...)
  2017-07-04 14:43   ` [PATCH v2 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
@ 2017-07-04 14:43   ` Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
                     ` (35 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/fman/fman_hw.c      | 606 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fman.h           |   2 +
 drivers/bus/dpaa/include/fsl_fman.h       | 182 +++++++++
 drivers/bus/dpaa/include/fsl_fman_crc64.h | 263 +++++++++++++
 5 files changed, 1054 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 49abdc7..94849b8 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -66,6 +66,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
+	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c
 
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
new file mode 100644
index 0000000..77908ec
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -0,0 +1,606 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+#include <fman.h>
+/* This header declares things about Fman hardware itself (the format of status
+ * words and an inline implementation of CRC64). We include it only in order to
+ * instantiate the one global variable it depends on.
+ */
+#include <fsl_fman.h>
+#include <fsl_fman_crc64.h>
+
+/* Instantiate the global variable that the inline CRC64 implementation (in
+ * <fsl_fman.h>) depends on.
+ */
+DECLARE_FMAN_CRC64_TABLE();
+
+#define ETH_ADDR_TO_UINT64(eth_addr)                  \
+	(uint64_t)(((uint64_t)(eth_addr)[0] << 40) |   \
+	((uint64_t)(eth_addr)[1] << 32) |   \
+	((uint64_t)(eth_addr)[2] << 24) |   \
+	((uint64_t)(eth_addr)[3] << 16) |   \
+	((uint64_t)(eth_addr)[4] << 8) |    \
+	((uint64_t)(eth_addr)[5]))
+
+void
+fman_if_set_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i|HASH_CTRL_MCAST_EN);
+}
+
+void
+fman_if_reset_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i & ~HASH_CTRL_MCAST_EN);
+}
+
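+/* Fold the 48-bit MAC address into a 6-bit hash: hash bit k is the XOR
+ * (parity) of the eight bits of address octet k; e.g. the broadcast
+ * address ff:ff:ff:ff:ff:ff hashes to 0.
+ */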
+static
+uint32_t get_mac_hash_code(uint64_t eth_addr)
+{
+	uint64_t	mask1, mask2;
+	uint32_t	xorVal = 0;
+	uint8_t		i, j;
+
+	for (i = 0; i < 6; i++) {
+		mask1 = eth_addr & (uint64_t)0x01;
+		eth_addr >>= 1;
+
+		for (j = 0; j < 7; j++) {
+			mask2 = eth_addr & (uint64_t)0x01;
+			mask1 ^= mask2;
+			eth_addr >>= 1;
+		}
+
+		xorVal |= (mask1 << (5 - i));
+	}
+
+	return xorVal;
+}
+
+int
+fman_memac_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	uint64_t eth_addr;
+	void *hashtable_ctrl;
+	uint32_t hash;
+
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	eth_addr = ETH_ADDR_TO_UINT64(eth);
+
+	if (!(eth_addr & GROUP_ADDRESS))
+		return -1;
+
+	hash = get_mac_hash_code(eth_addr) & HASH_CTRL_ADDR_MASK;
+	hash = hash | HASH_CTRL_MCAST_EN;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	out_be32(hashtable_ctrl, hash);
+
+	return 0;
+}
+
+int
+fman_memac_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *mac_reg =
+		&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
+	u32 val = in_be32(mac_reg);
+
+	eth[0] = (val & 0x000000ff) >> 0;
+	eth[1] = (val & 0x0000ff00) >> 8;
+	eth[2] = (val & 0x00ff0000) >> 16;
+	eth[3] = (val & 0xff000000) >> 24;
+
+	mac_reg =  &((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_u;
+	val = in_be32(mac_reg);
+
+	eth[4] = (val & 0x000000ff) >> 0;
+	eth[5] = (val & 0x0000ff00) >> 8;
+
+	return 0;
+}
+
+static void
+fman_memac_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	void *reg;
+
+	if (addr_num) {
+		reg = &((struct memac_regs *)m->ccsr_map)->
+				mac_addr[addr_num-1].mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+		out_be32(reg, 0x0);
+	} else {
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+		out_be32(reg, 0x0);
+	}
+}
+
+static int
+fman_memac_add_mac_addr(struct fman_if *p, uint8_t *eth,
+				       uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+
+	void *reg;
+	u32 val;
+
+	memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_l;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+
+	val = (m->__if.mac_addr.addr_bytes[0] |
+	       (m->__if.mac_addr.addr_bytes[1] << 8) |
+	       (m->__if.mac_addr.addr_bytes[2] << 16) |
+	       (m->__if.mac_addr.addr_bytes[3] << 24));
+	out_be32(reg, val);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+
+	val = ((m->__if.mac_addr.addr_bytes[4] << 0) |
+	       (m->__if.mac_addr.addr_bytes[5] << 8));
+	out_be32(reg, val);
+
+	return 0;
+}
+
+
+static void
+fman_memac_stats_get(struct fman_if *p,
+		     struct rte_eth_stats *stats)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+
+	/* read received packet counters */
+	stats->ipackets = ((u64)in_be32(&regs->rfrm_u)) << 32 |
+			in_be32(&regs->rfrm_l);
+	stats->ibytes = ((u64)in_be32(&regs->roct_u)) << 32 |
+			in_be32(&regs->roct_l);
+	stats->ierrors = ((u64)in_be32(&regs->rerr_u)) << 32 |
+			in_be32(&regs->rerr_l);
+
+	/* read transmitted packet counters */
+	stats->opackets = ((u64)in_be32(&regs->tfrm_u)) << 32 |
+			in_be32(&regs->tfrm_l);
+	stats->obytes = ((u64)in_be32(&regs->toct_u)) << 32 |
+			in_be32(&regs->toct_l);
+	stats->oerrors = ((u64)in_be32(&regs->terr_u)) << 32 |
+			in_be32(&regs->terr_l);
+}
+
+static void
+fman_memac_reset_stat(struct fman_if *p)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+	uint32_t tmp;
+
+	tmp = in_be32(&regs->statn_config);
+
+	tmp |= STATS_CFG_CLR;
+
+	out_be32(&regs->statn_config, tmp);
+
+	while (in_be32(&regs->statn_config) & STATS_CFG_CLR)
+		;
+}
+
+int
+fm_mac_add_exact_match_mac_addr(struct fman_if *p, uint8_t *eth,
+				    uint8_t addr_num)
+{
+	assert(fman_ccsr_map_fd != -1);
+
+	return fman_memac_add_mac_addr(p, eth, addr_num);
+}
+
+int
+fm_mac_rem_exact_match_mac_addr(struct fman_if *p, int8_t addr_num)
+{
+	assert(fman_ccsr_map_fd != -1);
+
+	fman_memac_clear_mac_addr(p, addr_num);
+	return 0;
+}
+
+int
+fm_mac_config(struct fman_if *p,  uint8_t *eth)
+{
+	assert(fman_ccsr_map_fd != -1);
+
+	return fman_memac_get_primary_mac_addr(p, eth);
+}
+
+void
+fm_mac_set_rx_ignore_pause_frames(struct fman_if *p, bool enable)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	u32 value = 0;
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Rx Ignore Pause Frames */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	if (enable)
+		value = in_be32(cmdcfg) | CMD_CFG_PAUSE_IGNORE;
+	else
+		value = in_be32(cmdcfg) & ~CMD_CFG_PAUSE_IGNORE;
+
+	out_be32(cmdcfg, value);
+}
+
+void
+fm_mac_config_loopback(struct fman_if *p, bool enable)
+{
+	if (enable)
+		/* Enable loopback mode */
+		fman_if_loopback_enable(p);
+	else
+		/* Disable loopback mode */
+		fman_if_loopback_disable(p);
+}
+
+void
+fm_mac_conf_max_frame_len(struct fman_if *p,
+			       unsigned int max_frame_len)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	unsigned int *maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Max frame length */
+	maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+	out_be32(maxfrm, (MAXFRM_RX_MASK & max_frame_len));
+}
+
+void
+fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats)
+{
+	fman_memac_stats_get(p, stats);
+}
+
+void
+fman_if_stats_reset(struct fman_if *p)
+{
+	fman_memac_reset_stat(p);
+}
+
+void
+fm_mac_set_promiscuous(struct fman_if *p)
+{
+	fman_if_promiscuous_enable(p);
+}
+
+void
+fman_if_promiscuous_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_PROMIS_EN);
+}
+
+void
+fman_if_promiscuous_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Disable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) & (~CMD_CFG_PROMIS_EN));
+}
+
+void
+fman_if_enable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* enable Rx and Tx */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) | 3);
+}
+
+void
+fman_if_disable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* only disable Rx, not Tx */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) & ~(u32)2);
+}
+
+void
+fman_if_loopback_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) | IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_loopback_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+	/* Disable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) & ~IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) & ~CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
+		    int bpid, size_t bufsize)
+{
+	u32 fmbm_ebmpi;
+	u32 ebmpi_val_ace = 0xc0000000;
+	u32 ebmpi_mask = 0xffc00000;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ebmpi =
+	       in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0]);
+	fmbm_ebmpi = ebmpi_val_ace | (fmbm_ebmpi & ebmpi_mask) | (bpid << 16) |
+		     (bufsize);
+
+	out_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0],
+		 fmbm_ebmpi);
+}
+
+int
+fman_if_get_fc_quanta(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	return in_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0]);
+}
+
+int
+fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	out_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0],
+		 pause_quanta);
+	return 0;
+}
+
+int
+fman_if_get_fdoff(struct fman_if *fm_if)
+{
+	u32 fmbm_ricp;
+	int fdoff;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ricp =
+		   in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp);
+	/* iceof + icsz */
+	fdoff = ((fmbm_ricp & iceof_mask) >> 16) * 16 +
+		(fmbm_ricp & icsz_mask) * 16;
+
+	return fdoff;
+}
+
+void
+fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_refqid =
+			&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_refqid;
+	out_be32(fmbm_refqid, err_fqid);
+}
+
+int
+fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	val = in_be32(fmbm_ricp);
+
+	icp->iceof = (val & iceof_mask) >> 12;
+	icp->iciof = (val & iciof_mask) >> 4;
+	icp->icsz = (val & icsz_mask) << 4;
+
+	return 0;
+}
+
+int
+fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	val |= (icp->iceof << 12) & iceof_mask;
+	val |= (icp->iciof << 4) & iciof_mask;
+	val |= (icp->icsz >> 4) & icsz_mask;
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	out_be32(fmbm_ricp, val);
+
+	return 0;
+}
+
+void
+fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rebm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_rebm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rebm;
+
+	out_be32(fmbm_rebm, in_be32(fmbm_rebm) | (fd_offset << 16));
+}
+
+void
+fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	out_be32(reg_maxfrm, (in_be32(reg_maxfrm) & 0xFFFF0000) | max_frm);
+}
+
+uint16_t
+fman_if_get_maxfrm(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	return (in_be32(reg_maxfrm) & 0x0000FFFF);
+}
+
+void
+fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmqm_pndn;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmqm_pndn = &((struct fman_port_qmi_regs *)__if->qmi_map)->fmqm_pndn;
+
+	out_be32(fmqm_pndn, nia);
+}
+
+void
+fman_if_discard_rx_errors(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rfsdm, *fmbm_rfsem;
+
+	fmbm_rfsem = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsem;
+	out_be32(fmbm_rfsem, 0);
+
+	/* Configure the discard mask to drop error packets with DMA errors,
+	 * frame size errors, header errors, etc. The mask 0x010CE3F0 selects
+	 * which of the errors reported in FD[STATUS] are discarded.
+	 */
+	fmbm_rfsdm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsdm;
+	out_be32(fmbm_rfsdm, 0x010CE3F0);
+}
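
A hedged sketch of how these helpers compose (illustrative, not part of
the diff): assuming p points to a struct fman_if discovered from
fman_if_list, a minimal Rx bring-up sequence would be:

static void example_if_bringup(struct fman_if *p)
{
	fman_if_set_maxfrm(p, 1518);	/* typical Ethernet max frame */
	fman_if_promiscuous_enable(p);	/* sets CMD_CFG_PROMIS_EN */
	fman_if_stats_reset(p);		/* clears the hardware counters */
	fman_if_enable_rx(p);		/* sets the Rx/Tx enable bits */
}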
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 19105bb..aeb707b 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -406,6 +406,8 @@ struct __fman_if {
  */
 extern const struct list_head *fman_if_list;
 
+extern int fman_ccsr_map_fd;
+
 /* To display MAC addresses (of type "struct ether_addr") via printf()-style
  * interfaces, these macros may come in handy. Eg;
  *        struct fman_if *p = get_ptr_to_some_interface();
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
new file mode 100644
index 0000000..0aff22c
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -0,0 +1,182 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_H
+#define __FSL_FMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* The status field in the FD is updated on the Rx side by FMan with the
+ * following information. Refer to the field descriptions in the FMan
+ * Block Guide (FM BG).
+ */
+struct fm_status_t {
+	unsigned int reserved0:3;
+	unsigned int dcl4c:1; /* Don't Check L4 Checksum */
+	unsigned int reserved1:1;
+	unsigned int ufd:1; /* Unsupported Format */
+	unsigned int lge:1; /* Length Error */
+	unsigned int dme:1; /* DMA Error */
+
+	unsigned int reserved2:4;
+	unsigned int fpe:1; /* Frame physical Error */
+	unsigned int fse:1; /* Frame Size Error */
+	unsigned int dis:1; /* Discard by Classification */
+	unsigned int reserved3:1;
+
+	unsigned int eof:1; /* Key Extraction goes out of frame */
+	unsigned int nss:1; /* No Scheme selected */
+	unsigned int kso:1; /* Key Size Overflow */
+	unsigned int reserved4:1;
+	unsigned int fcl:2; /* Frame Color */
+	unsigned int ipp:1; /* Illegal Policer Profile Selected */
+	unsigned int flm:1; /* Frame Length Mismatch */
+	unsigned int pte:1; /* Parser Timeout */
+	unsigned int isp:1; /* Invalid Soft Parser Instruction */
+	unsigned int phe:1; /* Header Error during parsing */
+	unsigned int frdr:1; /* Frame Dropped by disabled port */
+	unsigned int reserved5:4;
+} __attribute__ ((__packed__));
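/* Illustrative sketch, not part of this patch: a received frame's
 * FD[STATUS] word can be overlaid on the bitfield above to test for Rx
 * errors. The helper name and the chosen error bits are examples only.
 */
static inline int example_fd_status_ok(const struct fm_status_t *st)
{
	return !(st->dme || st->fpe || st->fse || st->phe);
}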
+
+/* Set promiscuous mode on an interface */
+void fm_mac_set_promiscuous(struct fman_if *p);
+
+/* Get the MAC configuration (primary MAC address) */
+int fm_mac_config(struct fman_if *p, uint8_t *eth);
+
+/* Set MAC address for a particular interface */
+int fm_mac_add_exact_match_mac_addr(struct fman_if *p, uint8_t *eth,
+					      uint8_t addr_num);
+
+/* Remove a MAC address for a particular interface */
+int fm_mac_rem_exact_match_mac_addr(struct fman_if *p, int8_t addr_num);
+
+/* Get the FMAN statistics */
+void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
+
+/* Reset the FMAN statistics */
+void fman_if_stats_reset(struct fman_if *p);
+
+/* Set ignore pause option for a specific interface */
+void fm_mac_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
+
+/* Enable Loopback mode */
+void fm_mac_config_loopback(struct fman_if *p, bool enable);
+
+/* Set max frame length */
+void fm_mac_conf_max_frame_len(struct fman_if *p,
+			       unsigned int max_frame_len);
+
+/* Enable/disable Rx promiscuous mode on specified interface */
+void fman_if_promiscuous_enable(struct fman_if *);
+void fman_if_promiscuous_disable(struct fman_if *);
+
+/* Enable/disable Rx on specific interfaces */
+void fman_if_enable_rx(struct fman_if *);
+void fman_if_disable_rx(struct fman_if *);
+
+/* Enable/disable loopback on specific interfaces */
+void fman_if_loopback_enable(struct fman_if *);
+void fman_if_loopback_disable(struct fman_if *);
+
+/* Set buffer pool on specific interface */
+void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
+		    size_t bufsize);
+
+/* Get Flow Control pause quanta on specific interface */
+int fman_if_get_fc_quanta(struct fman_if *fm_if);
+
+/* Set Flow Control pause quanta on specific interface */
+int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
+
+/* Set default error fqid on specific interface */
+void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
+
+/* Get IC transfer params */
+int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
+
+/* Set IC transfer params */
+int fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp);
+
+/* Get interface fd->offset value */
+int fman_if_get_fdoff(struct fman_if *fm_if);
+
+/* Set interface fd->offset value */
+void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
+
+/* Get interface Max Frame length (MTU) */
+uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
+
+/* Set interface Max Frame length (MTU) */
+void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
+
+/* Set the next invoked action (NIA) for the dequeue operation */
+void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
+
+/* Discard error packets on Rx */
+void fman_if_discard_rx_errors(struct fman_if *fm_if);
+
+/* Set the multicast filter hash table */
+void fman_if_set_mcast_filter_table(struct fman_if *p);
+
+/* Reset the multicast filter hash table */
+void fman_if_reset_mcast_filter_table(struct fman_if *p);
+
+/* Add a multicast (group) MAC address to the hash filter */
+int fman_memac_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
+
+/* Read the primary MAC address of an interface */
+int fman_memac_get_primary_mac_addr(struct fman_if *p, uint8_t *eth);
+
+/* Enable/disable Rx on all interfaces */
+static inline void fman_if_enable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_enable_rx(__if);
+}
+
+static inline void fman_if_disable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_disable_rx(__if);
+}
+#endif /* __FSL_FMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_fman_crc64.h b/drivers/bus/dpaa/include/fsl_fman_crc64.h
new file mode 100644
index 0000000..af5803f
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman_crc64.h
@@ -0,0 +1,263 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_CRC64_H
+#define __FSL_FMAN_CRC64_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+ * The following definitions provide a software implementation of the CRC64
+ * algorithm implemented within Fman.
+ *
+ * The following example shows how to compute a CRC64 hash value based on
+ * SRC_IP, DST_IP and ESP_SPI values
+ *
+ *     #define compute_hash(saddr,daddr,spi) \
+ *        do { \
+ *           uint64_t result; \
+ *           result = fman_crc64_init(); \
+ *           result = fman_crc64_compute_32bit(saddr, result); \
+ *           result = fman_crc64_compute_32bit(daddr, result); \
+ *           result = fman_crc64_compute_32bit(spi, result); \
+ *           return (uint32_t) result & RC_HASH_MASK; \
+ *        } while (0);
+ *
+ * If hashing over a different number of fields (or of different types) is
+ * required, this can be implemented using the following primitives.
+ */
+
+/* The following table provides the constants used by the Fman CRC64
+ * implementation. The table is instantiated within the DPAA fman driver.
+ * However if the application is not going to be linked against the DPAA fman
+ * driver but will use this Fman CRC64 implementation, then it will need to
+ * instantiate this table by using the DECLARE_FMAN_CRC64_TABLE() macro.
+ */
+struct fman_crc64_t {
+	uint64_t initial;
+	uint64_t table[1 << 8];
+};
+extern struct fman_crc64_t FMAN_CRC64_ECMA_182;
+#define DECLARE_FMAN_CRC64_TABLE() \
+struct fman_crc64_t FMAN_CRC64_ECMA_182 = { \
+	0xFFFFFFFFFFFFFFFFULL, \
+	{ \
+		0x0000000000000000ULL, 0xb32e4cbe03a75f6fULL, \
+		0xf4843657a840a05bULL, 0x47aa7ae9abe7ff34ULL, \
+		0x7bd0c384ff8f5e33ULL, 0xc8fe8f3afc28015cULL, \
+		0x8f54f5d357cffe68ULL, 0x3c7ab96d5468a107ULL, \
+		0xf7a18709ff1ebc66ULL, 0x448fcbb7fcb9e309ULL, \
+		0x0325b15e575e1c3dULL, 0xb00bfde054f94352ULL, \
+		0x8c71448d0091e255ULL, 0x3f5f08330336bd3aULL, \
+		0x78f572daa8d1420eULL, 0xcbdb3e64ab761d61ULL, \
+		0x7d9ba13851336649ULL, 0xceb5ed8652943926ULL, \
+		0x891f976ff973c612ULL, 0x3a31dbd1fad4997dULL, \
+		0x064b62bcaebc387aULL, 0xb5652e02ad1b6715ULL, \
+		0xf2cf54eb06fc9821ULL, 0x41e11855055bc74eULL, \
+		0x8a3a2631ae2dda2fULL, 0x39146a8fad8a8540ULL, \
+		0x7ebe1066066d7a74ULL, 0xcd905cd805ca251bULL, \
+		0xf1eae5b551a2841cULL, 0x42c4a90b5205db73ULL, \
+		0x056ed3e2f9e22447ULL, 0xb6409f5cfa457b28ULL, \
+		0xfb374270a266cc92ULL, 0x48190ecea1c193fdULL, \
+		0x0fb374270a266cc9ULL, 0xbc9d3899098133a6ULL, \
+		0x80e781f45de992a1ULL, 0x33c9cd4a5e4ecdceULL, \
+		0x7463b7a3f5a932faULL, 0xc74dfb1df60e6d95ULL, \
+		0x0c96c5795d7870f4ULL, 0xbfb889c75edf2f9bULL, \
+		0xf812f32ef538d0afULL, 0x4b3cbf90f69f8fc0ULL, \
+		0x774606fda2f72ec7ULL, 0xc4684a43a15071a8ULL, \
+		0x83c230aa0ab78e9cULL, 0x30ec7c140910d1f3ULL, \
+		0x86ace348f355aadbULL, 0x3582aff6f0f2f5b4ULL, \
+		0x7228d51f5b150a80ULL, 0xc10699a158b255efULL, \
+		0xfd7c20cc0cdaf4e8ULL, 0x4e526c720f7dab87ULL, \
+		0x09f8169ba49a54b3ULL, 0xbad65a25a73d0bdcULL, \
+		0x710d64410c4b16bdULL, 0xc22328ff0fec49d2ULL, \
+		0x85895216a40bb6e6ULL, 0x36a71ea8a7ace989ULL, \
+		0x0adda7c5f3c4488eULL, 0xb9f3eb7bf06317e1ULL, \
+		0xfe5991925b84e8d5ULL, 0x4d77dd2c5823b7baULL, \
+		0x64b62bcaebc387a1ULL, 0xd7986774e864d8ceULL, \
+		0x90321d9d438327faULL, 0x231c512340247895ULL, \
+		0x1f66e84e144cd992ULL, 0xac48a4f017eb86fdULL, \
+		0xebe2de19bc0c79c9ULL, 0x58cc92a7bfab26a6ULL, \
+		0x9317acc314dd3bc7ULL, 0x2039e07d177a64a8ULL, \
+		0x67939a94bc9d9b9cULL, 0xd4bdd62abf3ac4f3ULL, \
+		0xe8c76f47eb5265f4ULL, 0x5be923f9e8f53a9bULL, \
+		0x1c4359104312c5afULL, 0xaf6d15ae40b59ac0ULL, \
+		0x192d8af2baf0e1e8ULL, 0xaa03c64cb957be87ULL, \
+		0xeda9bca512b041b3ULL, 0x5e87f01b11171edcULL, \
+		0x62fd4976457fbfdbULL, 0xd1d305c846d8e0b4ULL, \
+		0x96797f21ed3f1f80ULL, 0x2557339fee9840efULL, \
+		0xee8c0dfb45ee5d8eULL, 0x5da24145464902e1ULL, \
+		0x1a083bacedaefdd5ULL, 0xa9267712ee09a2baULL, \
+		0x955cce7fba6103bdULL, 0x267282c1b9c65cd2ULL, \
+		0x61d8f8281221a3e6ULL, 0xd2f6b4961186fc89ULL, \
+		0x9f8169ba49a54b33ULL, 0x2caf25044a02145cULL, \
+		0x6b055fede1e5eb68ULL, 0xd82b1353e242b407ULL, \
+		0xe451aa3eb62a1500ULL, 0x577fe680b58d4a6fULL, \
+		0x10d59c691e6ab55bULL, 0xa3fbd0d71dcdea34ULL, \
+		0x6820eeb3b6bbf755ULL, 0xdb0ea20db51ca83aULL, \
+		0x9ca4d8e41efb570eULL, 0x2f8a945a1d5c0861ULL, \
+		0x13f02d374934a966ULL, 0xa0de61894a93f609ULL, \
+		0xe7741b60e174093dULL, 0x545a57dee2d35652ULL, \
+		0xe21ac88218962d7aULL, 0x5134843c1b317215ULL, \
+		0x169efed5b0d68d21ULL, 0xa5b0b26bb371d24eULL, \
+		0x99ca0b06e7197349ULL, 0x2ae447b8e4be2c26ULL, \
+		0x6d4e3d514f59d312ULL, 0xde6071ef4cfe8c7dULL, \
+		0x15bb4f8be788911cULL, 0xa6950335e42fce73ULL, \
+		0xe13f79dc4fc83147ULL, 0x521135624c6f6e28ULL, \
+		0x6e6b8c0f1807cf2fULL, 0xdd45c0b11ba09040ULL, \
+		0x9aefba58b0476f74ULL, 0x29c1f6e6b3e0301bULL, \
+		0xc96c5795d7870f42ULL, 0x7a421b2bd420502dULL, \
+		0x3de861c27fc7af19ULL, 0x8ec62d7c7c60f076ULL, \
+		0xb2bc941128085171ULL, 0x0192d8af2baf0e1eULL, \
+		0x4638a2468048f12aULL, 0xf516eef883efae45ULL, \
+		0x3ecdd09c2899b324ULL, 0x8de39c222b3eec4bULL, \
+		0xca49e6cb80d9137fULL, 0x7967aa75837e4c10ULL, \
+		0x451d1318d716ed17ULL, 0xf6335fa6d4b1b278ULL, \
+		0xb199254f7f564d4cULL, 0x02b769f17cf11223ULL, \
+		0xb4f7f6ad86b4690bULL, 0x07d9ba1385133664ULL, \
+		0x4073c0fa2ef4c950ULL, 0xf35d8c442d53963fULL, \
+		0xcf273529793b3738ULL, 0x7c0979977a9c6857ULL, \
+		0x3ba3037ed17b9763ULL, 0x888d4fc0d2dcc80cULL, \
+		0x435671a479aad56dULL, 0xf0783d1a7a0d8a02ULL, \
+		0xb7d247f3d1ea7536ULL, 0x04fc0b4dd24d2a59ULL, \
+		0x3886b22086258b5eULL, 0x8ba8fe9e8582d431ULL, \
+		0xcc0284772e652b05ULL, 0x7f2cc8c92dc2746aULL, \
+		0x325b15e575e1c3d0ULL, 0x8175595b76469cbfULL, \
+		0xc6df23b2dda1638bULL, 0x75f16f0cde063ce4ULL, \
+		0x498bd6618a6e9de3ULL, 0xfaa59adf89c9c28cULL, \
+		0xbd0fe036222e3db8ULL, 0x0e21ac88218962d7ULL, \
+		0xc5fa92ec8aff7fb6ULL, 0x76d4de52895820d9ULL, \
+		0x317ea4bb22bfdfedULL, 0x8250e80521188082ULL, \
+		0xbe2a516875702185ULL, 0x0d041dd676d77eeaULL, \
+		0x4aae673fdd3081deULL, 0xf9802b81de97deb1ULL, \
+		0x4fc0b4dd24d2a599ULL, 0xfceef8632775faf6ULL, \
+		0xbb44828a8c9205c2ULL, 0x086ace348f355aadULL, \
+		0x34107759db5dfbaaULL, 0x873e3be7d8faa4c5ULL, \
+		0xc094410e731d5bf1ULL, 0x73ba0db070ba049eULL, \
+		0xb86133d4dbcc19ffULL, 0x0b4f7f6ad86b4690ULL, \
+		0x4ce50583738cb9a4ULL, 0xffcb493d702be6cbULL, \
+		0xc3b1f050244347ccULL, 0x709fbcee27e418a3ULL, \
+		0x3735c6078c03e797ULL, 0x841b8ab98fa4b8f8ULL, \
+		0xadda7c5f3c4488e3ULL, 0x1ef430e13fe3d78cULL, \
+		0x595e4a08940428b8ULL, 0xea7006b697a377d7ULL, \
+		0xd60abfdbc3cbd6d0ULL, 0x6524f365c06c89bfULL, \
+		0x228e898c6b8b768bULL, 0x91a0c532682c29e4ULL, \
+		0x5a7bfb56c35a3485ULL, 0xe955b7e8c0fd6beaULL, \
+		0xaeffcd016b1a94deULL, 0x1dd181bf68bdcbb1ULL, \
+		0x21ab38d23cd56ab6ULL, 0x9285746c3f7235d9ULL, \
+		0xd52f0e859495caedULL, 0x6601423b97329582ULL, \
+		0xd041dd676d77eeaaULL, 0x636f91d96ed0b1c5ULL, \
+		0x24c5eb30c5374ef1ULL, 0x97eba78ec690119eULL, \
+		0xab911ee392f8b099ULL, 0x18bf525d915feff6ULL, \
+		0x5f1528b43ab810c2ULL, 0xec3b640a391f4fadULL, \
+		0x27e05a6e926952ccULL, 0x94ce16d091ce0da3ULL, \
+		0xd3646c393a29f297ULL, 0x604a2087398eadf8ULL, \
+		0x5c3099ea6de60cffULL, 0xef1ed5546e415390ULL, \
+		0xa8b4afbdc5a6aca4ULL, 0x1b9ae303c601f3cbULL, \
+		0x56ed3e2f9e224471ULL, 0xe5c372919d851b1eULL, \
+		0xa26908783662e42aULL, 0x114744c635c5bb45ULL, \
+		0x2d3dfdab61ad1a42ULL, 0x9e13b115620a452dULL, \
+		0xd9b9cbfcc9edba19ULL, 0x6a978742ca4ae576ULL, \
+		0xa14cb926613cf817ULL, 0x1262f598629ba778ULL, \
+		0x55c88f71c97c584cULL, 0xe6e6c3cfcadb0723ULL, \
+		0xda9c7aa29eb3a624ULL, 0x69b2361c9d14f94bULL, \
+		0x2e184cf536f3067fULL, 0x9d36004b35545910ULL, \
+		0x2b769f17cf112238ULL, 0x9858d3a9ccb67d57ULL, \
+		0xdff2a94067518263ULL, 0x6cdce5fe64f6dd0cULL, \
+		0x50a65c93309e7c0bULL, 0xe388102d33392364ULL, \
+		0xa4226ac498dedc50ULL, 0x170c267a9b79833fULL, \
+		0xdcd7181e300f9e5eULL, 0x6ff954a033a8c131ULL, \
+		0x28532e49984f3e05ULL, 0x9b7d62f79be8616aULL, \
+		0xa707db9acf80c06dULL, 0x14299724cc279f02ULL, \
+		0x5383edcd67c06036ULL, 0xe0ada17364673f59ULL} \
+}
+
+/*
+ * Return the initial CRC seed. Use the value returned from this API as the
+ * "crc" parameter to the first call to add data.
+ */
+static inline uint64_t fman_crc64_init(void)
+{
+	return FMAN_CRC64_ECMA_182.initial;
+}
+
+/* Updates the CRC with arbitrary data */
+static inline uint64_t fman_crc64_update(uint64_t crc,
+					 void *data, unsigned int len)
+{
+	uint8_t *p = data;
+	while (len--)
+		crc = FMAN_CRC64_ECMA_182.table[(crc ^ *(p++)) & 0xff] ^
+				(crc >> 8);
+	return crc;
+}
+
+/* Shorthands for updating the CRC with 8/16/32 bits of data.
+ * IMPORTANT NOTE: the typed "data" arguments should not be mistaken for
+ * host-endian numerical values; the assumption is that these values contain
+ * big-endian (i.e. network byte order) data.
+ */
+static inline uint64_t fman_crc64_compute_32bit(uint32_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_16bit(uint16_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_8bit(uint8_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+
+/*
+ * Finalise the CRC by inverting all bits (one's complement)
+ */
+static inline uint64_t fman_crc64_finish(uint64_t seed)
+{
+	return ~seed;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_FMAN_CRC64_H */
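/* Illustrative sketch, not part of this patch: hashing an arbitrary
 * buffer with the primitives above. It assumes DECLARE_FMAN_CRC64_TABLE()
 * has been instantiated somewhere in the build, as the header describes;
 * buf/len are placeholder inputs.
 */
static inline uint64_t example_crc64(const void *buf, unsigned int len)
{
	uint64_t crc = fman_crc64_init();

	crc = fman_crc64_update(crc, (void *)buf, len);
	return fman_crc64_finish(crc);
}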
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 07/40] bus/dpaa: enable DPAA IOCTL portal driver
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (5 preceding siblings ...)
  2017-07-04 14:43   ` [PATCH v2 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
@ 2017-07-04 14:43   ` Shreyansh Jain
  2017-07-04 14:43   ` [PATCH v2 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
                     ` (34 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Userspace applications interact with DPAA blocks using this IOCTL driver.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
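
A hedged usage sketch (notes only, not part of the commit): with this
ioctl driver, an application allocates a range of resource IDs and
releases it again. The helper below is illustrative; process_alloc()
returns the number of IDs written into the base array, or a negative
errno (e.g. -ENODEV when /dev/fsl-usdpaa cannot be opened).

static int example_alloc_fqids(void)
{
	uint32_t fqids[8];
	int ret;

	/* 8 FQIDs, 8-aligned; partial=0 refuses a smaller allocation */
	ret = process_alloc(dpaa_id_fqid, fqids, 8, 8, 0);
	if (ret < 0)
		return ret;

	/* ... hand fqids[0..ret-1] to queue configuration ... */

	process_release(dpaa_id_fqid, fqids[0], ret);
	return 0;
}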
 drivers/bus/dpaa/Makefile             |   4 +-
 drivers/bus/dpaa/base/qbman/process.c | 331 ++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h    |  88 +++++++++
 drivers/bus/dpaa/include/process.h    | 107 +++++++++++
 4 files changed, 529 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/process.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 94849b8..22218e2 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -51,6 +51,7 @@ CFLAGS += -D _GNU_SOURCE
 
 CFLAGS += -I$(RTE_BUS_DPAA)/
 CFLAGS += -I$(RTE_BUS_DPAA)/include
+CFLAGS += -I$(RTE_BUS_DPAA)/base/qbman
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -68,6 +69,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
-	base/fman/netcfg_layer.c
+	base/fman/netcfg_layer.c \
+	base/qbman/process.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
new file mode 100644
index 0000000..b8ec539
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -0,0 +1,331 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <assert.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+
+#include "process.h"
+
+#include <fsl_usd.h>
+
+/* As higher-level drivers will be built on top of this (dma_mem, qbman, ...),
+ * it's preferable that the process driver itself not provide any exported API.
+ * As such, combined with the fact that none of these operations are
+ * performance critical, it is justified to use lazy initialisation, so that's
+ * what the lock is for.
+ */
+static int fd = -1;
+static pthread_mutex_t fd_init_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static int check_fd(void)
+{
+	int ret;
+
+	if (fd >= 0)
+		return 0;
+	ret = pthread_mutex_lock(&fd_init_lock);
+	assert(!ret);
+	/* check again with the lock held */
+	if (fd < 0)
+		fd = open(PROCESS_PATH, O_RDWR);
+	ret = pthread_mutex_unlock(&fd_init_lock);
+	assert(!ret);
+	return (fd >= 0) ? 0 : -ENODEV;
+}
+
+#define DPAA_IOCTL_MAGIC 'u'
+struct dpaa_ioctl_id_alloc {
+	uint32_t base; /* Return value, the start of the allocated range */
+	enum dpaa_id_type id_type; /* what kind of resource(s) to allocate */
+	uint32_t num; /* how many IDs to allocate (and return value) */
+	uint32_t align; /* must be a power of 2, 0 is treated like 1 */
+	int partial; /* whether to allow less than 'num' */
+};
+
+struct dpaa_ioctl_id_release {
+	/* Input; */
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+struct dpaa_ioctl_id_reserve {
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+#define DPAA_IOCTL_ID_ALLOC \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x01, struct dpaa_ioctl_id_alloc)
+#define DPAA_IOCTL_ID_RELEASE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x02, struct dpaa_ioctl_id_release)
+#define DPAA_IOCTL_ID_RESERVE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x0A, struct dpaa_ioctl_id_reserve)
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial)
+{
+	struct dpaa_ioctl_id_alloc id = {
+		.id_type = id_type,
+		.num = num,
+		.align = align,
+		.partial = partial
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	ret = ioctl(fd, DPAA_IOCTL_ID_ALLOC, &id);
+	if (ret)
+		return ret;
+	for (ret = 0; ret < (int)id.num; ret++)
+		base[ret] = id.base + ret;
+	return id.num;
+}
+
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_release id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret) {
+		fprintf(stderr, "Process FD failure\n");
+		return;
+	}
+	ret = ioctl(fd, DPAA_IOCTL_ID_RELEASE, &id);
+	if (ret)
+		fprintf(stderr, "Process FD ioctl failure type %d base 0x%x num %d\n",
+			id_type, base, num);
+}
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_reserve id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	return ioctl(fd, DPAA_IOCTL_ID_RESERVE, &id);
+}
+
+/***************************************/
+/* Mapping and using QMan/BMan portals */
+/***************************************/
+
+#define DPAA_IOCTL_PORTAL_MAP \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x07, struct dpaa_ioctl_portal_map)
+#define DPAA_IOCTL_PORTAL_UNMAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x08, struct dpaa_portal_map)
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_MAP, params);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_MAP)");
+		return ret;
+	}
+	return 0;
+}
+
+int process_portal_unmap(struct dpaa_portal_map *map)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_UNMAP, map);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_UNMAP)");
+		return ret;
+	}
+	return 0;
+}
+
+#define DPAA_IOCTL_PORTAL_IRQ_MAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x09, struct dpaa_ioctl_irq_map)
+
+int process_portal_irq_map(int ifd, struct dpaa_ioctl_irq_map *map)
+{
+	map->fd = fd;
+	return ioctl(ifd, DPAA_IOCTL_PORTAL_IRQ_MAP, map);
+}
+
+int process_portal_irq_unmap(int ifd)
+{
+	return close(ifd);
+}
+
+struct dpaa_ioctl_raw_portal {
+	/* inputs */
+	enum dpaa_portal_type type; /* Type of portal to allocate */
+
+	uint8_t enable_stash; /* set to non-zero to turn on stashing */
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map, or QBMAN_ANY_PORTAL_IDX
+	 * for don't care. The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+#define DPAA_IOCTL_ALLOC_RAW_PORTAL \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x0C, struct dpaa_ioctl_raw_portal)
+
+#define DPAA_IOCTL_FREE_RAW_PORTAL \
+	_IOR(DPAA_IOCTL_MAGIC, 0x0D, struct dpaa_ioctl_raw_portal)
+
+static int process_portal_allocate(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_ALLOC_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_ALLOC_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+static int process_portal_free(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_FREE_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_FREE_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.enable_stash = portal->enable_stash;
+	input.cpu = portal->cpu;
+	input.cache = portal->cache;
+	input.window = portal->window;
+	input.sdest = portal->sdest;
+
+	ret =  process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena  = input.cena;
+	return 0;
+}
+
+int qman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.enable_stash = 0;
+
+	ret =  process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena  = input.cena;
+	return 0;
+}
+
+int bman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
new file mode 100644
index 0000000..4ff48c6
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_USD_H
+#define __FSL_USD_H
+
+#include <compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define QBMAN_ANY_PORTAL_IDX 0xffffffff
+
+/* Obtain and free raw (uninitialized) portals */
+
+struct dpaa_raw_portal {
+	/* inputs */
+
+	/* set to non-zero to turn on stashing */
+	uint8_t enable_stash;
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map, or QBMAN_ANY_PORTAL_IDX
+	 * for don't care. The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int qman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int bman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_USD_H */
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
new file mode 100644
index 0000000..989ddcd
--- /dev/null
+++ b/drivers/bus/dpaa/include/process.h
@@ -0,0 +1,107 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PROCESS_H
+#define	__PROCESS_H
+
+#include <compat.h>
+
+/* The process device underlies process-wide user/kernel interactions, such as
+ * mapping dma_mem memory and providing accompanying ioctl()s. (This isn't used
+ * for portals, which use one UIO device each.)
+ */
+#define PROCESS_PATH		"/dev/fsl-usdpaa"
+
+/* Allocation of resource IDs uses a generic interface. This enum is used to
+ * distinguish between the type of underlying object being manipulated.
+ */
+enum dpaa_id_type {
+	dpaa_id_fqid,
+	dpaa_id_bpid,
+	dpaa_id_qpool,
+	dpaa_id_cgrid,
+	dpaa_id_max /* <-- not a valid type, represents the number of types */
+};
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial);
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+/* Mapping and using QMan/BMan portals */
+enum dpaa_portal_type {
+	dpaa_portal_qman,
+	dpaa_portal_bman,
+};
+
+struct dpaa_ioctl_portal_map {
+	/* Input parameter, is a qman or bman portal required. */
+	enum dpaa_portal_type type;
+	/* Specifies a specific portal index to map, or 0xffffffff
+	 * for don't care.
+	 */
+	uint32_t index;
+
+	/* Return value if the map succeeds, this gives the mapped
+	 * cache-inhibited (cinh) and cache-enabled (cena) addresses.
+	 */
+	struct dpaa_portal_map {
+		void *cinh;
+		void *cena;
+	} addr;
+	/* Qman-specific return values */
+	u16 channel;
+	uint32_t pools;
+};
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params);
+int process_portal_unmap(struct dpaa_portal_map *map);
+
+struct dpaa_ioctl_irq_map {
+	enum dpaa_portal_type type; /* Type of portal to map */
+	int fd; /* File descriptor that contains the portal */
+	void *portal_cinh; /* Cache inhibited area to identify the portal */
+};
+
+int process_portal_irq_map(int fd, struct dpaa_ioctl_irq_map *irq);
+int process_portal_irq_unmap(int fd);
+
+#endif	/*  __PROCESS_H */
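/* Illustrative sketch, not part of this patch: mapping any free QMan
 * portal through the interfaces above and unmapping it again.
 */
static int example_map_qman_portal(void)
{
	struct dpaa_ioctl_portal_map params = {
		.type = dpaa_portal_qman,
		.index = 0xffffffff,	/* don't care which portal */
	};
	int ret = process_portal_map(&params);

	if (ret)
		return ret;
	/* ... access the portal via params.addr.cena / params.addr.cinh ... */
	return process_portal_unmap(&params.addr);
}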
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 08/40] bus/dpaa: add layer for interrupt emulation using pthread
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (6 preceding siblings ...)
  2017-07-04 14:43   ` [PATCH v2 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
@ 2017-07-04 14:43   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
                     ` (33 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

An interrupt manager is implemented by emulating interrupts over
pthreads. The QBMAN layer registers handlers with it so that it is
notified about interrupt requests from DPAA blocks in userspace.

Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
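
A hedged sketch of the intended flow (notes only, not part of the
commit): a handler is registered against an emulated IRQ number, and
platform-specific code later dispatches to it. The IRQ number and
handler below are illustrative; irqreturn_t and IRQ_HANDLED are assumed
to come from the DPAA compat definitions.

static irqreturn_t example_isr(int irq, void *arg)
{
	/* acknowledge/process the portal event for 'irq' here */
	return IRQ_HANDLED;	/* assumed compat definition */
}

static void example_register(void)
{
	if (qbman_request_irq(42, example_isr, 0, "example", NULL))
		return;
	/* later, the platform-specific layer dispatches to the handler: */
	qbman_invoke_irq(42);
}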
 drivers/bus/dpaa/Makefile              |   3 +-
 drivers/bus/dpaa/base/qbman/dpaa_sys.c | 136 +++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h |  65 ++++++++++++++++
 3 files changed, 203 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 22218e2..193ffc1 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -70,6 +70,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
-	base/qbman/process.c
+	base/qbman/process.c \
+	base/qbman/dpaa_sys.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.c b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
new file mode 100644
index 0000000..0017da5
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
@@ -0,0 +1,136 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <process.h>
+#include "dpaa_sys.h"
+
+struct process_interrupt {
+	int irq;
+	irqreturn_t (*isr)(int irq, void *arg);
+	unsigned long flags;
+	const char *name;
+	void *arg;
+	struct list_head node;
+};
+
+static COMPAT_LIST_HEAD(process_irq_list);
+static pthread_mutex_t process_irq_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static void process_interrupt_install(struct process_interrupt *irq)
+{
+	int ret;
+	/* Add the irq to the end of the list */
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_add_tail(&irq->node, &process_irq_list);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static void process_interrupt_remove(struct process_interrupt *irq)
+{
+	int ret;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_del(&irq->node);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static struct process_interrupt *process_interrupt_find(int irq_num)
+{
+	int ret;
+	struct process_interrupt *i = NULL;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_for_each_entry(i, &process_irq_list, node) {
+		if (i->irq == irq_num)
+			goto done;
+	}
+done:
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+	return i;
+}
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name,
+		      void *arg __maybe_unused)
+{
+	struct process_interrupt *irq_node =
+		kmalloc(sizeof(*irq_node), GFP_KERNEL);
+
+	if (!irq_node)
+		return -ENOMEM;
+	irq_node->irq = irq;
+	irq_node->isr = isr;
+	irq_node->flags = flags;
+	irq_node->name = name;
+	irq_node->arg = arg;
+	process_interrupt_install(irq_node);
+	return 0;
+}
+
+int qbman_free_irq(int irq, __maybe_unused void *arg)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (!irq_node)
+		return -EINVAL;
+	process_interrupt_remove(irq_node);
+	kfree(irq_node);
+	return 0;
+}
+
+/* This is the interface from the platform-specific driver code to obtain
+ * interrupt handlers that have been registered.
+ */
+void qbman_invoke_irq(int irq)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (irq_node)
+		irq_node->isr(irq, irq_node->arg);
+}
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.h b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
new file mode 100644
index 0000000..c53035a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
@@ -0,0 +1,65 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_SYS_H
+#define __DPAA_SYS_H
+
+#include <of.h>
+
+/* For 2-element tables related to cache-inhibited and cache-enabled mappings */
+#define DPAA_PORTAL_CE 0
+#define DPAA_PORTAL_CI 1
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+#define DPAA_ASSERT(x) ASSERT(x)
+#else
+#define DPAA_ASSERT(x)	do {  } while (0)
+#endif
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name, void *arg);
+int qbman_free_irq(int irq, void *arg);
+
+void qbman_invoke_irq(int irq);
+
+#endif /* __DPAA_SYS_H */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 09/40] bus/dpaa: add routines for managing a RB tree
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (7 preceding siblings ...)
  2017-07-04 14:43   ` [PATCH v2 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
                     ` (32 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

QMAN frames are managed over an RB tree data structure.
This patch introduces the routines necessary to implement an RB tree.
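
As a hypothetical illustration (the type and names below are invented
for this description, not part of the patch), a user instantiates the
generated operations for its own object type and key field:

    struct my_fq {
        u32 fqid;
        struct rb_node node;
    };
    IMPLEMENT_DPAA_RBTREE(fqtree, struct my_fq, node, fqid);

    struct dpa_rbtree tree = DPAA_RBTREE;
    struct my_fq fq = { .fqid = 0x42 };

    fqtree_push(&tree, &fq);    /* returns -EBUSY on a duplicate key */
    fqtree_find(&tree, 0x42);   /* returns &fq */
    fqtree_del(&tree, &fq);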

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/include/dpaa_rbtree.h | 143 +++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h

diff --git a/drivers/bus/dpaa/include/dpaa_rbtree.h b/drivers/bus/dpaa/include/dpaa_rbtree.h
new file mode 100644
index 0000000..fff2110
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_rbtree.h
@@ -0,0 +1,143 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_RBTREE_H
+#define __DPAA_RBTREE_H
+
+#include <rte_common.h>
+/************/
+/* RB-trees */
+/************/
+
+/* Linux has a good RB-tree implementation, that we can't use (GPL). It also has
+ * a flat/hooked-in interface that virtually requires license-contamination in
+ * order to write a caller-compatible implementation. Instead, I've created an
+ * RB-tree encapsulation on top of linux's primitives (it does some of the work
+ * the client logic would normally do), and this gives us something we can
+ * reimplement on LWE. Unfortunately there are no good+free RB-tree
+ * implementations out there that are license-compatible and "flat" (ie. no
+ * dynamic allocation). I did find a malloc-based one that I could convert, but
+ * that will be a task for later on. For now, LWE's RB-tree is implemented using
+ * an ordered linked-list.
+ *
+ * Note, the only linux-esque type is "struct rb_node", because it's used
+ * statically in the exported header, so it can't be opaque. Our version doesn't
+ * include a "rb_parent_color" field because we're doing linked-list instead of
+ * a true rb-tree.
+ */
+
+struct rb_node {
+	struct rb_node *prev, *next;
+};
+
+struct dpa_rbtree {
+	struct rb_node *head, *tail;
+};
+
+#define DPAA_RBTREE { NULL, NULL }
+static inline void dpa_rbtree_init(struct dpa_rbtree *tree)
+{
+	tree->head = tree->tail = NULL;
+}
+
+#define QMAN_NODE2OBJ(ptr, type, node_field) \
+	(type *)((char *)ptr - offsetof(type, node_field))
+
+#define IMPLEMENT_DPAA_RBTREE(name, type, node_field, val_field) \
+static inline int name##_push(struct dpa_rbtree *tree, type *obj) \
+{ \
+	struct rb_node *node = tree->head; \
+	if (!node) { \
+		tree->head = tree->tail = &obj->node_field; \
+		obj->node_field.prev = obj->node_field.next = NULL; \
+		return 0; \
+	} \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (obj->val_field == item->val_field) \
+			return -EBUSY; \
+		if (obj->val_field < item->val_field) { \
+			if (tree->head == node) \
+				tree->head = &obj->node_field; \
+			else \
+				node->prev->next = &obj->node_field; \
+			obj->node_field.prev = node->prev; \
+			obj->node_field.next = node; \
+			node->prev = &obj->node_field; \
+			return 0; \
+		} \
+		node = node->next; \
+	} \
+	obj->node_field.prev = tree->tail; \
+	obj->node_field.next = NULL; \
+	tree->tail->next = &obj->node_field; \
+	tree->tail = &obj->node_field; \
+	return 0; \
+} \
+static inline void name##_del(struct dpa_rbtree *tree, type *obj) \
+{ \
+	if (tree->head == &obj->node_field) { \
+		if (tree->tail == &obj->node_field) \
+			/* Only item in the list */ \
+			tree->head = tree->tail = NULL; \
+		else { \
+			/* Is the head, next != NULL */ \
+			tree->head = tree->head->next; \
+			tree->head->prev = NULL; \
+		} \
+	} else { \
+		if (tree->tail == &obj->node_field) { \
+			/* Is the tail, prev != NULL */ \
+			tree->tail = tree->tail->prev; \
+			tree->tail->next = NULL; \
+		} else { \
+			/* Is neither the head nor the tail */ \
+			obj->node_field.prev->next = obj->node_field.next; \
+			obj->node_field.next->prev = obj->node_field.prev; \
+		} \
+	} \
+} \
+static inline type *name##_find(struct dpa_rbtree *tree, u32 val) \
+{ \
+	struct rb_node *node = tree->head; \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (val == item->val_field) \
+			return item; \
+		if (val < item->val_field) \
+			return NULL; \
+		node = node->next; \
+	} \
+	return NULL; \
+}
+
+#endif /* __DPAA_RBTREE_H */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 10/40] bus/dpaa: add QMAN interface driver
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (8 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
                     ` (31 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Queue Manager (QMan) is a hardware queue management block that
allows software and accelerators on the datapath to enqueue and dequeue
frames in order to communicate.

This is a part of the QBMAN DPAA block.
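
As a rough sketch of the per-thread usage this patch enables (error
handling omitted; see qman_driver.c below), a thread that is affine to
exactly one cpu brings up and tears down its own portal:

    qman_thread_init();      /* map a QMan portal for this thread */
    ...
    qman_thread_irq();       /* service and uninhibit the portal irq */
    ...
    qman_thread_finish();    /* unmap the portal */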

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    4 +
 drivers/bus/dpaa/base/qbman/qman_driver.c |  271 ++++++
 drivers/bus/dpaa/base/qbman/qman_priv.h   |  314 +++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 1283 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   13 +
 5 files changed, 1885 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 193ffc1..f1120bd 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -71,6 +71,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman_driver.c \
 	base/qbman/dpaa_sys.c
 
+# Link Pthread
+LDLIBS += -lpthread
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
new file mode 100644
index 0000000..80dde20
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -0,0 +1,271 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "qman_priv.h"
+#include <sys/ioctl.h>
+#include <rte_branch_prediction.h>
+
+/* Global variable containing revision id (even on non-control plane systems
+ * where CCSR isn't available).
+ */
+u16 qman_ip_rev;
+u16 qm_channel_pool1 = QMAN_CHANNEL_POOL1;
+u16 qm_channel_caam = QMAN_CHANNEL_CAAM;
+u16 qm_channel_pme = QMAN_CHANNEL_PME;
+
+/* CCSR map address to access CCSR-based registers */
+void *qman_ccsr_map;
+/* The qman clock frequency */
+u32 qman_clk;
+
+static __thread int fd = -1;
+static __thread struct qm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_qman
+};
+
+static int fsl_qman_portal_init(uint32_t index, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu\n");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!\n");
+		return -EINVAL;
+	}
+
+	/* Allocate and map a qman portal */
+	map.index = index;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	pcfg.channel = map.channel;
+	pcfg.pools = map.pools;
+	pcfg.index = map.index;
+
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+
+	fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("QMan irq init failed\n");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
+	pcfg.is_shared = is_shared;
+	pcfg.node = NULL;
+	pcfg.irq = fd;
+
+	irq_map.type = dpaa_portal_qman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_qman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int qman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_qman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int qman_thread_finish(void)
+{
+	return fsl_qman_portal_finish();
+}
+
+void qman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int qman_global_init(void)
+{
+	const struct device_node *dt_node;
+	int ret = 0;
+	size_t lenp;
+	const u32 *chanid;
+	static int ccsr_map_fd;
+	const uint32_t *qman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	const u32 *clk;
+
+	static int done;
+
+	if (done)
+		return -EBUSY;
+
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman-portal");
+	if (!dt_node) {
+		pr_err("No qman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,qman-portal-1.0.0"))
+		pr_err("QMan rev1.0 on P4080 rev1 is not supported!\n");
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.1") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.1.0"))
+		qman_ip_rev = QMAN_REV11;
+	else if	(of_device_is_compatible(dt_node, "fsl,qman-portal-1.2") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.2.0"))
+		qman_ip_rev = QMAN_REV12;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-2.0.0"))
+		qman_ip_rev = QMAN_REV20;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.1"))
+		qman_ip_rev = QMAN_REV30;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.3"))
+		qman_ip_rev = QMAN_REV31;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.1"))
+		qman_ip_rev = QMAN_REV32;
+	else
+		qman_ip_rev = QMAN_REV11;
+
+	if (!qman_ip_rev) {
+		pr_err("Unknown qman portal version\n");
+		return -ENODEV;
+	}
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30) {
+		qm_channel_pool1 = QMAN_CHANNEL_POOL1_REV3;
+		qm_channel_caam = QMAN_CHANNEL_CAAM_REV3;
+		qm_channel_pme = QMAN_CHANNEL_PME_REV3;
+	}
+
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,pool-channel-range");
+	if (!dt_node) {
+		pr_err("No qman pool channel range available\n");
+		return -ENODEV;
+	}
+	chanid = of_get_property(dt_node, "fsl,pool-channel-range", &lenp);
+	if (!chanid) {
+		pr_err("Can not get pool-channel-range property\n");
+		return -EINVAL;
+	}
+
+	/* get ccsr base */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman");
+	if (!dt_node) {
+		pr_err("No qman device node available\n");
+		return -ENODEV;
+	}
+	qman_addr = of_get_address(dt_node, 0, &regs_size, NULL);
+	if (!qman_addr) {
+		pr_err("of_get_address cannot return qman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(dt_node, qman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Can not open /dev/mem for qman ccsr map\n");
+		return ccsr_map_fd;
+	}
+
+	qman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (qman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map qman ccsr base\n");
+		close(ccsr_map_fd);
+		return -EINVAL;
+	}
+
+	clk = of_get_property(dt_node, "clock-frequency", NULL);
+	if (!clk)
+		pr_warn("Can't find Qman clock frequency\n");
+	else
+		qman_clk = be32_to_cpu(*clk);
+
+	done = 1;
+	return ret;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
new file mode 100644
index 0000000..e9826c2
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -0,0 +1,314 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __QMAN_PRIV_H
+#define __QMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_qman.h>
+
+#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
+#error "_ARM64 requires _FSL_QMAN_FQ_LOOKUP"
+#endif
+
+/* Congestion Groups */
+/*
+ * This wrapper represents a bit-array for the state of the 256 QMan congestion
+ * groups. It is also used as a *mask* for congestion groups, eg. so we ignore
+ * those that don't concern us. We harness the structure and accessor details
+ * already used in the management command to query congestion groups.
+ */
+struct qman_cgrs {
+	struct __qm_mcr_querycongestion q;
+};
+
+static inline void qman_cgrs_init(struct qman_cgrs *c)
+{
+	memset(c, 0, sizeof(*c));
+}
+
+static inline void qman_cgrs_fill(struct qman_cgrs *c)
+{
+	memset(c, 0xff, sizeof(*c));
+}
+
+static inline int qman_cgrs_get(struct qman_cgrs *c, int num)
+{
+	return QM_MCR_QUERYCONGESTION(&c->q, num);
+}
+
+static inline void qman_cgrs_set(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] |= (0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline void qman_cgrs_unset(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] &= ~(0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline int qman_cgrs_next(struct qman_cgrs *c, int num)
+{
+	while ((++num < (int)__CGR_NUM) && !qman_cgrs_get(c, num))
+		;
+	return num;
+}
+
+static inline void qman_cgrs_cp(struct qman_cgrs *dest,
+				const struct qman_cgrs *src)
+{
+	*dest = *src;
+}
+
+static inline void qman_cgrs_and(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) & *(_b++);
+}
+
+static inline void qman_cgrs_xor(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) ^ *(_b++);
+}
+
+/* used by CCSR and portal interrupt code */
+enum qm_isr_reg {
+	qm_isr_status = 0,
+	qm_isr_enable = 1,
+	qm_isr_disable = 2,
+	qm_isr_inhibit = 3
+};
+
+struct qm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	struct device_node *node;
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* If the caller enables DQRR stashing (and thus wishes to operate the
+	 * portal from only one cpu), this is the logical CPU that the portal
+	 * will stash to. Whether stashing is enabled or not, this setting is
+	 * also used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.).
+	 */
+	int is_shared;
+	/* The portal's dedicated channel id; use this value for initialising
+	 * frame queues to target this portal when scheduled.
+	 */
+	u16 channel;
+	/* A mask of which pool channels this portal has dequeue access to
+	 * (using QM_SDQCR_CHANNELS_POOL(n) for the bitmask).
+	 */
+	u32 pools;
+
+};
+
+/* Revision info (for errata and feature handling) */
+#define QMAN_REV11 0x0101
+#define QMAN_REV12 0x0102
+#define QMAN_REV20 0x0200
+#define QMAN_REV30 0x0300
+#define QMAN_REV31 0x0301
+#define QMAN_REV32 0x0302
+extern u16 qman_ip_rev; /* 0 if uninitialised, otherwise QMAN_REVx */
+extern u32 qman_clk;
+
+int qm_set_wpm(int wpm);
+int qm_get_wpm(int *wpm);
+
+struct qman_portal *qman_create_affine_portal(
+			const struct qm_portal_config *config,
+			const struct qman_cgrs *cgrs);
+const struct qm_portal_config *qman_destroy_affine_portal(void);
+
+struct qm_portal_config *qm_get_unused_portal(void);
+struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
+
+void qm_put_unused_portal(struct qm_portal_config *pcfg);
+void qm_set_liodns(struct qm_portal_config *pcfg);
+
+/* This CGR feature is supported by h/w and required by unit-tests and the
+ * debugfs hooks, so is implemented in the driver. However it allows an explicit
+ * corruption of h/w fields by s/w that are usually incorruptible (because the
+ * counters are usually maintained entirely within h/w). As such, we declare
+ * this API internally.
+ */
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result);
+
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+/* If the size of the fq object pointer is greater than the size of the
+ * context_b field, then a lookup table is required.
+ */
+int qman_setup_fq_lookup_table(size_t num_entries);
+#endif
+
+/*   QMan s/w corenet portal, low-level i/face	 */
+
+/*
+ * For the static dequeue command register (SDQCR); choose one SOURCE.
+ * Choose one COUNT. Choose one dequeue TYPE. Choose TOKEN (8-bit).
+ * If SOURCE == CHANNELS,
+ *   Choose CHANNELS_DEDICATED and/or CHANNELS_POOL(n).
+ *   You can choose DEDICATED_PRECEDENCE if the portal channel should have
+ *   priority.
+ * If SOURCE == SPECIFICWQ,
+ *     Either select the work-queue ID with SPECIFICWQ_WQ(), or select the
+ *     channel (SPECIFICWQ_DEDICATED or SPECIFICWQ_POOL()) and specify the
+ *     work-queue priority (0-7) with SPECIFICWQ_WQ() - either way, you get the
+ *     same value.
+ */
+#define QM_SDQCR_SOURCE_CHANNELS	0x0
+#define QM_SDQCR_SOURCE_SPECIFICWQ	0x40000000
+#define QM_SDQCR_COUNT_EXACT1		0x0
+#define QM_SDQCR_COUNT_UPTO3		0x20000000
+#define QM_SDQCR_DEDICATED_PRECEDENCE	0x10000000
+#define QM_SDQCR_TYPE_MASK		0x03000000
+#define QM_SDQCR_TYPE_NULL		0x0
+#define QM_SDQCR_TYPE_PRIO_QOS		0x01000000
+#define QM_SDQCR_TYPE_ACTIVE_QOS	0x02000000
+#define QM_SDQCR_TYPE_ACTIVE		0x03000000
+#define QM_SDQCR_TOKEN_MASK		0x00ff0000
+#define QM_SDQCR_TOKEN_SET(v)		(((v) & 0xff) << 16)
+#define QM_SDQCR_TOKEN_GET(v)		(((v) >> 16) & 0xff)
+#define QM_SDQCR_CHANNELS_DEDICATED	0x00008000
+#define QM_SDQCR_SPECIFICWQ_MASK	0x000000f7
+#define QM_SDQCR_SPECIFICWQ_DEDICATED	0x00000000
+#define QM_SDQCR_SPECIFICWQ_POOL(n)	((n) << 4)
+#define QM_SDQCR_SPECIFICWQ_WQ(n)	(n)
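+
+/* e.g. a composed value for "up to 3 frames from the dedicated channel":
+ * QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |
+ * QM_SDQCR_TYPE_ACTIVE | QM_SDQCR_CHANNELS_DEDICATED
+ */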
+
+#define QM_VDQCR_FQID_MASK		0x00ffffff
+#define QM_VDQCR_FQID(n)		((n) & QM_VDQCR_FQID_MASK)
+
+#define QM_EQCR_VERB_VBIT		0x80
+#define QM_EQCR_VERB_CMD_MASK		0x61	/* but only one value; */
+#define QM_EQCR_VERB_CMD_ENQUEUE	0x01
+#define QM_EQCR_VERB_COLOUR_MASK	0x18	/* 4 possible values; */
+#define QM_EQCR_VERB_COLOUR_GREEN	0x00
+#define QM_EQCR_VERB_COLOUR_YELLOW	0x08
+#define QM_EQCR_VERB_COLOUR_RED		0x10
+#define QM_EQCR_VERB_COLOUR_OVERRIDE	0x18
+#define QM_EQCR_VERB_INTERRUPT		0x04	/* on command consumption */
+#define QM_EQCR_VERB_ORP		0x02	/* enable order restoration */
+#define QM_EQCR_DCA_ENABLE		0x80
+#define QM_EQCR_DCA_PARK		0x40
+#define QM_EQCR_DCA_IDXMASK		0x0f	/* "DQRR::idx" goes here */
+#define QM_EQCR_SEQNUM_NESN		0x8000	/* Advance NESN */
+#define QM_EQCR_SEQNUM_NLIS		0x4000	/* More fragments to come */
+#define QM_EQCR_SEQNUM_SEQMASK		0x3fff	/* sequence number goes here */
+#define QM_EQCR_FQID_NULL		0	/* eg. for an ORP seqnum hole */
+
+#define QM_MCC_VERB_VBIT		0x80
+#define QM_MCC_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_MCC_VERB_INITFQ_PARKED	0x40
+#define QM_MCC_VERB_INITFQ_SCHED	0x41
+#define QM_MCC_VERB_QUERYFQ		0x44
+#define QM_MCC_VERB_QUERYFQ_NP		0x45	/* "non-programmable" fields */
+#define QM_MCC_VERB_QUERYWQ		0x46
+#define QM_MCC_VERB_QUERYWQ_DEDICATED	0x47
+#define QM_MCC_VERB_ALTER_SCHED		0x48	/* Schedule FQ */
+#define QM_MCC_VERB_ALTER_FE		0x49	/* Force Eligible FQ */
+#define QM_MCC_VERB_ALTER_RETIRE	0x4a	/* Retire FQ */
+#define QM_MCC_VERB_ALTER_OOS		0x4b	/* Take FQ out of service */
+#define QM_MCC_VERB_ALTER_FQXON		0x4d	/* FQ XON */
+#define QM_MCC_VERB_ALTER_FQXOFF	0x4e	/* FQ XOFF */
+#define QM_MCC_VERB_INITCGR		0x50
+#define QM_MCC_VERB_MODIFYCGR		0x51
+#define QM_MCC_VERB_CGRTESTWRITE	0x52
+#define QM_MCC_VERB_QUERYCGR		0x58
+#define QM_MCC_VERB_QUERYCONGESTION	0x59
+
+/*
+ * Used by all portal interrupt registers except 'inhibit'
+ * Channels with frame availability
+ */
+#define QM_PIRQ_DQAVAIL	0x0000ffff
+
+/* The DQAVAIL interrupt fields break down into these bits; */
+#define QM_DQAVAIL_PORTAL	0x8000		/* Portal channel */
+#define QM_DQAVAIL_POOL(n)	(0x8000 >> (n))	/* Pool channel, n==[1..15] */
+#define QM_DQAVAIL_MASK		0xffff
+/* This mask contains all the "irqsource" bits visible to API users */
+#define QM_PIRQ_VISIBLE	(QM_PIRQ_SLOW | QM_PIRQ_DQRI)
+
+/* These are qm_<reg>_<verb>(). So for example, qm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define qm_isr_status_read(qm)		__qm_isr_read(qm, qm_isr_status)
+#define qm_isr_status_clear(qm, m)	__qm_isr_write(qm, qm_isr_status, m)
+#define qm_isr_enable_read(qm)		__qm_isr_read(qm, qm_isr_enable)
+#define qm_isr_enable_write(qm, v)	__qm_isr_write(qm, qm_isr_enable, v)
+#define qm_isr_disable_read(qm)		__qm_isr_read(qm, qm_isr_disable)
+#define qm_isr_disable_write(qm, v)	__qm_isr_write(qm, qm_isr_disable, v)
+/* TODO: unfortunate name-clash here, reword? */
+#define qm_isr_inhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 1)
+#define qm_isr_uninhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 0)
+
+#define QMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+
+#endif /* __QMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
new file mode 100644
index 0000000..740ee25
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -0,0 +1,1283 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_QMAN_H
+#define __FSL_QMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <dpaa_rbtree.h>
+
+/* FQ lookups (turn this on for 64bit user-space) */
+#if (__WORDSIZE == 64)
+#define CONFIG_FSL_QMAN_FQ_LOOKUP
+/* if FQ lookups are supported, this controls the number of initialised,
+ * s/w-consumed FQs that can be supported at any one time.
+ */
+#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
+#endif
+
+/* Last updated for v00.800 of the BG */
+
+/* Hardware constants */
+#define QM_CHANNEL_SWPORTAL0 0
+#define QMAN_CHANNEL_POOL1 0x21
+#define QMAN_CHANNEL_CAAM 0x80
+#define QMAN_CHANNEL_PME 0xa0
+#define QMAN_CHANNEL_POOL1_REV3 0x401
+#define QMAN_CHANNEL_CAAM_REV3 0x840
+#define QMAN_CHANNEL_PME_REV3 0x860
+extern u16 qm_channel_pool1;
+extern u16 qm_channel_caam;
+extern u16 qm_channel_pme;
+enum qm_dc_portal {
+	qm_dc_portal_fman0 = 0,
+	qm_dc_portal_fman1 = 1,
+	qm_dc_portal_caam = 2,
+	qm_dc_portal_pme = 3
+};
+
+/* Portal processing (interrupt) sources */
+#define QM_PIRQ_CCSCI	0x00200000	/* CEETM Congestion State Change */
+#define QM_PIRQ_CSCI	0x00100000	/* Congestion State Change */
+#define QM_PIRQ_EQCI	0x00080000	/* Enqueue Command Committed */
+#define QM_PIRQ_EQRI	0x00040000	/* EQCR Ring (below threshold) */
+#define QM_PIRQ_DQRI	0x00020000	/* DQRR Ring (non-empty) */
+#define QM_PIRQ_MRI	0x00010000	/* MR Ring (non-empty) */
+/*
+ * This mask contains all the interrupt sources that need handling except DQRI,
+ * ie. that if present should trigger slow-path processing.
+ */
+#define QM_PIRQ_SLOW	(QM_PIRQ_CSCI | QM_PIRQ_EQCI | QM_PIRQ_EQRI | \
+			QM_PIRQ_MRI | QM_PIRQ_CCSCI)
+
+/* For qman_static_dequeue_*** APIs */
+#define QM_SDQCR_CHANNELS_POOL_MASK	0x00007fff
+/* for n in [1,15] */
+#define QM_SDQCR_CHANNELS_POOL(n)	(0x00008000 >> (n))
+/* for conversion from n of qm_channel */
+static inline u32 QM_SDQCR_CHANNELS_POOL_CONV(u16 channel)
+{
+	return QM_SDQCR_CHANNELS_POOL(channel + 1 - qm_channel_pool1);
+}
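+
+/* e.g. with the default pool base (qm_channel_pool1 == 0x21), channel 0x24
+ * is pool 4, so the conversion yields 0x8000 >> 4 == 0x0800.
+ */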
+
+/* For qman_volatile_dequeue(); Choose one PRECEDENCE. EXACT is optional. Use
+ * NUMFRAMES(n) (6-bit) or NUMFRAMES_TILLEMPTY to fill in the frame-count. Use
+ * FQID(n) to fill in the frame queue ID.
+ */
+#define QM_VDQCR_PRECEDENCE_VDQCR	0x0
+#define QM_VDQCR_PRECEDENCE_SDQCR	0x80000000
+#define QM_VDQCR_EXACT			0x40000000
+#define QM_VDQCR_NUMFRAMES_MASK		0x3f000000
+#define QM_VDQCR_NUMFRAMES_SET(n)	(((n) & 0x3f) << 24)
+#define QM_VDQCR_NUMFRAMES_GET(n)	(((n) >> 24) & 0x3f)
+#define QM_VDQCR_NUMFRAMES_TILLEMPTY	QM_VDQCR_NUMFRAMES_SET(0)
+
+/* --- QMan data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct qm_eqcr_entry;	/* EQCR (EnQueue Command Ring) entries */
+struct qm_dqrr_entry;	/* DQRR (DeQueue Response Ring) entries */
+struct qm_mr_entry;	/* MR (Message Ring) entries */
+struct qm_mc_command;	/* MC (Management Command) command */
+struct qm_mc_result;	/* MC result */
+
+#define QM_FD_FORMAT_SG		0x4
+#define QM_FD_FORMAT_LONG	0x2
+#define QM_FD_FORMAT_COMPOUND	0x1
+enum qm_fd_format {
+	/*
+	 * 'contig' implies a contiguous buffer, whereas 'sg' implies a
+	 * scatter-gather table. 'big' implies a 29-bit length with no offset
+	 * field, otherwise length is 20-bit and offset is 9-bit. 'compound'
+	 * implies a s/g-like table, where each entry itself represents a frame
+	 * (contiguous or scatter-gather) and the 29-bit "length" is
+	 * interpreted purely for congestion calculations, ie. a "congestion
+	 * weight".
+	 */
+	qm_fd_contig = 0,
+	qm_fd_contig_big = QM_FD_FORMAT_LONG,
+	qm_fd_sg = QM_FD_FORMAT_SG,
+	qm_fd_sg_big = QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG,
+	qm_fd_compound = QM_FD_FORMAT_COMPOUND
+};
+
+/* Capitalised versions are un-typed but can be used in static expressions */
+#define QM_FD_CONTIG	0
+#define QM_FD_CONTIG_BIG QM_FD_FORMAT_LONG
+#define QM_FD_SG	QM_FD_FORMAT_SG
+#define QM_FD_SG_BIG	(QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG)
+#define QM_FD_COMPOUND	QM_FD_FORMAT_COMPOUND
+
+/* "Frame Descriptor (FD)" */
+struct qm_fd {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 dd:2;	/* dynamic debug */
+			u8 liodn_offset:6;
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 eliodn_offset:4;
+			u8 __reserved:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u8 liodn_offset:6;
+			u8 dd:2;	/* dynamic debug */
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 __reserved:4;
+			u8 eliodn_offset:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#endif
+		};
+		struct {
+			u64 __notaddress:24;
+			/* More efficient address accessor */
+			u64 addr:40;
+		};
+		u64 opaque_addr;
+	};
+	/* The 'format' field indicates the interpretation of the remaining 29
+	 * bits of the 32-bit word. For packing reasons, it is duplicated in the
+	 * other union elements. Note, union'd structs are difficult to use with
+	 * static initialisation under gcc, in which case use the "opaque" form
+	 * with one of the macros.
+	 */
+	union {
+		/* For easier/faster copying of this part of the fd (eg. from a
+		 * DQRR entry to an EQCR entry) copy 'opaque'
+		 */
+		u32 opaque;
+		/* If 'format' is _contig or _sg, 20b length and 9b offset */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format format:3;
+			u16 offset:9;
+			u32 length20:20;
+#else
+			u32 length20:20;
+			u16 offset:9;
+			enum qm_fd_format format:3;
+#endif
+		};
+		/* If 'format' is _contig_big or _sg_big, 29b length */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format1:3;
+			u32 length29:29;
+#else
+			u32 length29:29;
+			enum qm_fd_format _format1:3;
+#endif
+		};
+		/* If 'format' is _compound, 29b "congestion weight" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format2:3;
+			u32 cong_weight:29;
+#else
+			u32 cong_weight:29;
+			enum qm_fd_format _format2:3;
+#endif
+		};
+	};
+	union {
+		u32 cmd;
+		u32 status;
+	};
+} __attribute__((aligned(8)));
+#define QM_FD_DD_NULL		0x00
+#define QM_FD_PID_MASK		0x3f
+static inline u64 qm_fd_addr_get64(const struct qm_fd *fd)
+{
+	return fd->addr;
+}
+
+static inline dma_addr_t qm_fd_addr(const struct qm_fd *fd)
+{
+	return (dma_addr_t)fd->addr;
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_fd_addr_set64(fd, v) \
+	do { \
+		struct qm_fd *__fd931 = (fd); \
+		__fd931->addr = v; \
+	} while (0)
+
+/* For static initialisation of FDs (which is complicated by the use of unions
+ * in "struct qm_fd"), use the following macros. Note that;
+ * - 'dd', 'pid' and 'bpid' are ignored because there's no static initialisation
+ *   use-case,
+ * - use capitalised QM_FD_*** formats for static initialisation.
+ */
+#define QM_FD_FMT_20(cmd, addr_hi, addr_lo, fmt, off, len) \
+	{ 0, 0, 0, 0, 0, addr_hi, addr_lo, \
+	{ (((fmt) & 0x7) << 29) | (((off) & 0x1ff) << 20) | ((len) & 0xfffff) }, \
+	{ cmd } }
+#define QM_FD_FMT_29(cmd, addr_hi, addr_lo, fmt, len) \
+	{ 0, 0, 0, 0, 0, addr_hi, addr_lo, \
+	{ (((fmt) & 0x7) << 29) | ((len) & 0x1fffffff) }, \
+	{ cmd } }
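+
+/* e.g. a static, contiguous 64-byte frame at offset 0 (illustrative):
+ *	struct qm_fd fd = QM_FD_FMT_20(0, 0, 0, QM_FD_CONTIG, 0, 64);
+ */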
+
+
+/* Scatter/Gather table entry */
+struct qm_sg_entry {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1[3];
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[3];
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:24;
+			u64 addr:40;
+#else
+			u64 addr:40;
+			u64 __notaddress:24;
+#endif
+		};
+		u64 opaque;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 extension:1;	/* Extension bit */
+			u32 final:1;		/* Final bit */
+			u32 length:30;
+#else
+			u32 length:30;
+			u32 final:1;		/* Final bit */
+			u32 extension:1;	/* Extension bit */
+#endif
+		};
+		u32 val;
+	};
+	u8 __reserved2;
+	u8 bpid;
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved3:3;
+			u16 offset:13;
+#else
+			u16 offset:13;
+			u16 __reserved3:3;
+#endif
+		};
+		u16 val_off;
+	};
+} __packed;
+static inline u64 qm_sg_entry_get64(const struct qm_sg_entry *sg)
+{
+	return be64_to_cpu(sg->opaque);
+}
+
+static inline dma_addr_t qm_sg_addr(const struct qm_sg_entry *sg)
+{
+	return (dma_addr_t)be64_to_cpu(sg->opaque);
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_sg_entry_set64(sg, v) \
+	do { \
+		struct qm_sg_entry *__sg931 = (sg); \
+		__sg931->opaque = cpu_to_be64(v); \
+	} while (0)
+
+/* See 1.5.8.1: "Enqueue Command" */
+struct qm_eqcr_entry {
+	u8 __dont_write_directly__verb;
+	u8 dca;
+	u16 seqnum;
+	u32 orp;	/* 24-bit */
+	u32 fqid;	/* 24-bit */
+	u32 tag;
+	struct qm_fd fd;
+	u8 __reserved3[32];
+} __packed;
+
+
+/* "Frame Dequeue Response" */
+struct qm_dqrr_entry {
+	u8 verb;
+	u8 stat;
+	u16 seqnum;	/* 15-bit */
+	u8 tok;
+	u8 __reserved2[3];
+	u32 fqid;	/* 24-bit */
+	u32 contextB;
+	struct qm_fd fd;
+	u8 __reserved4[32];
+};
+
+#define QM_DQRR_VERB_VBIT		0x80
+#define QM_DQRR_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_DQRR_VERB_FRAME_DEQUEUE	0x60	/* "this format" */
+#define QM_DQRR_STAT_FQ_EMPTY		0x80	/* FQ empty */
+#define QM_DQRR_STAT_FQ_HELDACTIVE	0x40	/* FQ held active */
+#define QM_DQRR_STAT_FQ_FORCEELIGIBLE	0x20	/* FQ was force-eligible'd */
+#define QM_DQRR_STAT_FD_VALID		0x10	/* has a non-NULL FD */
+#define QM_DQRR_STAT_UNSCHEDULED	0x02	/* Unscheduled dequeue */
+#define QM_DQRR_STAT_DQCR_EXPIRED	0x01	/* VDQCR or PDQCR expired*/
+
+
+/* "ERN Message Response" */
+/* "FQ State Change Notification" */
+struct qm_mr_entry {
+	u8 verb;
+	union {
+		struct {
+			u8 dca;
+			u16 seqnum;
+			u8 rc;		/* Rejection Code */
+			u32 orp:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed ern;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+			u8 __reserved1:4;
+			enum qm_dc_portal portal:2;
+#else
+			enum qm_dc_portal portal:2;
+			u8 __reserved1:4;
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+#endif
+			u16 __reserved2;
+			u8 rc;		/* Rejection Code */
+			u32 __reserved3:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed dcern;
+		struct {
+			u8 fqs;		/* Frame Queue Status */
+			u8 __reserved1[6];
+			u32 fqid;	/* 24-bit */
+			u32 contextB;
+			u8 __reserved2[16];
+		} __packed fq;		/* FQRN/FQRNI/FQRL/FQPN */
+	};
+	u8 __reserved2[32];
+} __packed;
+#define QM_MR_VERB_VBIT			0x80
+/*
+ * ERNs originating from direct-connect portals ("dcern") use 0x20 as a verb
+ * which would be invalid as a s/w enqueue verb. A s/w ERN can be distinguished
+ * from the other MR types by noting if the 0x20 bit is unset.
+ */
+#define QM_MR_VERB_TYPE_MASK		0x27
+#define QM_MR_VERB_DC_ERN		0x20
+#define QM_MR_VERB_FQRN			0x21
+#define QM_MR_VERB_FQRNI		0x22
+#define QM_MR_VERB_FQRL			0x23
+#define QM_MR_VERB_FQPN			0x24
+#define QM_MR_RC_MASK			0xf0	/* contains one of; */
+#define QM_MR_RC_CGR_TAILDROP		0x00
+#define QM_MR_RC_WRED			0x10
+#define QM_MR_RC_ERROR			0x20
+#define QM_MR_RC_ORPWINDOW_EARLY	0x30
+#define QM_MR_RC_ORPWINDOW_LATE		0x40
+#define QM_MR_RC_FQ_TAILDROP		0x50
+#define QM_MR_RC_ORPWINDOW_RETIRED	0x60
+#define QM_MR_RC_ORP_ZERO		0x70
+#define QM_MR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+#define QM_MR_DCERN_COLOUR_GREEN	0x00
+#define QM_MR_DCERN_COLOUR_YELLOW	0x01
+#define QM_MR_DCERN_COLOUR_RED		0x02
+#define QM_MR_DCERN_COLOUR_OVERRIDE	0x03
+/*
+ * An identical structure of FQD fields is present in the "Init FQ" command and
+ * the "Query FQ" result, it's suctioned out into the "struct qm_fqd" type.
+ * Within that, the 'stashing' and 'taildrop' pieces are also factored out, the
+ * latter has two inlines to assist with converting to/from the mant+exp
+ * representation.
+ */
+struct qm_fqd_stashing {
+	/* See QM_STASHING_EXCL_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 exclusive;
+	u8 __reserved1:2;
+	/* Numbers of cachelines */
+	u8 annotation_cl:2;
+	u8 data_cl:2;
+	u8 context_cl:2;
+#else
+	u8 context_cl:2;
+	u8 data_cl:2;
+	u8 annotation_cl:2;
+	u8 __reserved1:2;
+	u8 exclusive;
+#endif
+} __packed;
+struct qm_fqd_taildrop {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved1:3;
+	u16 mant:8;
+	u16 exp:5;
+#else
+	u16 exp:5;
+	u16 mant:8;
+	u16 __reserved1:3;
+#endif
+} __packed;
+struct qm_fqd_oac {
+	/* "Overhead Accounting Control", see QM_OAC_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 oac:2; /* "Overhead Accounting Control" */
+	u8 __reserved1:6;
+#else
+	u8 __reserved1:6;
+	u8 oac:2; /* "Overhead Accounting Control" */
+#endif
+	/* Two's-complement value (-128 to +127) */
+	signed char oal; /* "Overhead Accounting Length" */
+} __packed;
+struct qm_fqd {
+	union {
+		u8 orpc;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1:2;
+			u8 orprws:3;
+			u8 oa:1;
+			u8 olws:2;
+#else
+			u8 olws:2;
+			u8 oa:1;
+			u8 orprws:3;
+			u8 __reserved1:2;
+#endif
+		} __packed;
+	};
+	u8 cgid;
+	u16 fq_ctrl;	/* See QM_FQCTRL_<...> */
+	union {
+		u16 dest_wq;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 channel:13; /* qm_channel */
+			u16 wq:3;
+#else
+			u16 wq:3;
+			u16 channel:13; /* qm_channel */
+#endif
+		} __packed dest;
+	};
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved2:1;
+	u16 ics_cred:15;
+#else
+	u16 ics_cred:15;
+	u16 __reserved2:1;
+#endif
+	/*
+	 * For "Initialize Frame Queue" commands, the write-enable mask
+	 * determines whether 'td' or 'oac_init' is observed. For query
+	 * commands, this field is always 'td', and 'oac_query' (below) reflects
+	 * the Overhead ACcounting values.
+	 */
+	union {
+		uint16_t opaque_td;
+		struct qm_fqd_taildrop td;
+		struct qm_fqd_oac oac_init;
+	};
+	u32 context_b;
+	union {
+		/* Treat it as 64-bit opaque */
+		u64 opaque;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 hi;
+			u32 lo;
+#else
+			u32 lo;
+			u32 hi;
+#endif
+		};
+		/* Treat it as s/w portal stashing config */
+		/* see "FQD Context_A field used for [...]" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			struct qm_fqd_stashing stashing;
+			/*
+			 * 48-bit address of FQ context to
+			 * stash, must be cacheline-aligned
+			 */
+			u16 context_hi;
+			u32 context_lo;
+#else
+			u32 context_lo;
+			u16 context_hi;
+			struct qm_fqd_stashing stashing;
+#endif
+		} __packed;
+	} context_a;
+	struct qm_fqd_oac oac_query;
+} __packed;
+/* 64-bit converters for context_hi/lo */
+static inline u64 qm_fqd_stashing_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.context_hi << 32) |
+		(u64)fqd->context_a.context_lo;
+}
+
+static inline dma_addr_t qm_fqd_stashing_addr(const struct qm_fqd *fqd)
+{
+	return (dma_addr_t)qm_fqd_stashing_get64(fqd);
+}
+
+static inline u64 qm_fqd_context_a_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.hi << 32) |
+		(u64)fqd->context_a.lo;
+}
+
+static inline void qm_fqd_stashing_set64(struct qm_fqd *fqd, u64 addr)
+{
+		fqd->context_a.context_hi = upper_32_bits(addr);
+		fqd->context_a.context_lo = lower_32_bits(addr);
+}
+
+static inline void qm_fqd_context_a_set64(struct qm_fqd *fqd, u64 addr)
+{
+	fqd->context_a.hi = upper_32_bits(addr);
+	fqd->context_a.lo = lower_32_bits(addr);
+}
+
+/* convert a threshold value into mant+exp representation */
+static inline int qm_fqd_taildrop_set(struct qm_fqd_taildrop *td, u32 val,
+				      int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	if (val > 0xe0000000)
+		return -ERANGE;
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	td->exp = e;
+	td->mant = val;
+	return 0;
+}
+
+/* and the other direction */
+static inline u32 qm_fqd_taildrop_get(const struct qm_fqd_taildrop *td)
+{
+	return (u32)td->mant << td->exp;
+}
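+
+/* e.g. qm_fqd_taildrop_set(&td, 12288, 0) stores mant=0xc0 and exp=6;
+ * qm_fqd_taildrop_get(&td) then returns 0xc0 << 6 == 12288 exactly.
+ */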
+
+
+/* See "Frame Queue Descriptor (FQD)" */
+/* Frame Queue Descriptor (FQD) field 'fq_ctrl' uses these constants */
+#define QM_FQCTRL_MASK		0x07ff	/* 'fq_ctrl' flags; */
+#define QM_FQCTRL_CGE		0x0400	/* Congestion Group Enable */
+#define QM_FQCTRL_TDE		0x0200	/* Tail-Drop Enable */
+#define QM_FQCTRL_ORP		0x0100	/* ORP Enable */
+#define QM_FQCTRL_CTXASTASHING	0x0080	/* Context-A stashing */
+#define QM_FQCTRL_CPCSTASH	0x0040	/* CPC Stash Enable */
+#define QM_FQCTRL_FORCESFDR	0x0008	/* High-priority SFDRs */
+#define QM_FQCTRL_AVOIDBLOCK	0x0004	/* Don't block active */
+#define QM_FQCTRL_HOLDACTIVE	0x0002	/* Hold active in portal */
+#define QM_FQCTRL_PREFERINCACHE	0x0001	/* Aggressively cache FQD */
+#define QM_FQCTRL_LOCKINCACHE	QM_FQCTRL_PREFERINCACHE /* older naming */
+
+/* See "FQD Context_A field used for [...] */
+/* Frame Queue Descriptor (FQD) field 'CONTEXT_A' uses these constants */
+#define QM_STASHING_EXCL_ANNOTATION	0x04
+#define QM_STASHING_EXCL_DATA		0x02
+#define QM_STASHING_EXCL_CTX		0x01
+
+/* See "Intra Class Scheduling" */
+/* FQD field 'OAC' (Overhead ACcounting) uses these constants */
+#define QM_OAC_ICS		0x2 /* Accounting for Intra-Class Scheduling */
+#define QM_OAC_CG		0x1 /* Accounting for Congestion Groups */
+
+/*
+ * This struct represents the 32-bit "WR_PARM_[GYR]" parameters in CGR fields
+ * and associated commands/responses. The WRED parameters are calculated from
+ * these fields as follows;
+ *   MaxTH = MA * (2 ^ Mn)
+ *   Slope = SA / (2 ^ Sn)
+ *    MaxP = 4 * (Pn + 1)
+ */
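+/* e.g. MA=64, Mn=4 gives MaxTH = 64 * (2^4) = 1024, and Pn=3 gives
+ * MaxP = 4 * (3 + 1) = 16.
+ */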
+struct qm_cgr_wr_parm {
+	union {
+		u32 word;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 MA:8;
+			u32 Mn:5;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Sn:6;
+			u32 Pn:6;
+#else
+			u32 Pn:6;
+			u32 Sn:6;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Mn:5;
+			u32 MA:8;
+#endif
+		} __packed;
+	};
+} __packed;
+/*
+ * This struct represents the 13-bit "CS_THRES" CGR field. In the corresponding
+ * management commands, this is padded to a 16-bit structure field, so that's
+ * how we represent it here. The congestion state threshold is calculated from
+ * these fields as follows;
+ *   CS threshold = TA * (2 ^ Tn)
+ */
+struct qm_cgr_cs_thres {
+	union {
+		u16 hword;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved:3;
+			u16 TA:8;
+			u16 Tn:5;
+#else
+			u16 Tn:5;
+			u16 TA:8;
+			u16 __reserved:3;
+#endif
+		} __packed;
+	};
+} __packed;
+/*
+ * This identical structure of CGR fields is present in the "Init/Modify CGR"
+ * commands and the "Query CGR" result. It's suctioned out here into its own
+ * struct.
+ */
+struct __qm_mc_cgr {
+	struct qm_cgr_wr_parm wr_parm_g;
+	struct qm_cgr_wr_parm wr_parm_y;
+	struct qm_cgr_wr_parm wr_parm_r;
+	u8 wr_en_g;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_y;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_r;	/* boolean, use QM_CGR_EN */
+	u8 cscn_en;	/* boolean, use QM_CGR_EN */
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 cscn_targ_upd_ctrl; /* use QM_CGR_TARG_UDP_CTRL_* */
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+#else
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+			u16 cscn_targ_upd_ctrl; /* use QM_CGR_TARG_UDP_CTRL_* */
+#endif
+		};
+		u32 cscn_targ;	/* use QM_CGR_TARG_* */
+	};
+	u8 cstd_en;	/* boolean, use QM_CGR_EN */
+	u8 cs;		/* boolean, only used in query response */
+	union {
+		struct qm_cgr_cs_thres cs_thres;
+		/* use qm_cgr_cs_thres_set64() */
+		u16 __cs_thres;
+	};
+	u8 mode;	/* QMAN_CGR_MODE_FRAME not supported in rev1.0 */
+} __packed;
+#define QM_CGR_EN		0x01 /* For wr_en_*, cscn_en, cstd_en */
+#define QM_CGR_TARG_UDP_CTRL_WRITE_BIT	0x8000 /* value written to portal bit*/
+#define QM_CGR_TARG_UDP_CTRL_DCP	0x4000 /* 0: SWP, 1: DCP */
+#define QM_CGR_TARG_PORTAL(n)	(0x80000000 >> (n)) /* s/w portal, 0-9 */
+#define QM_CGR_TARG_FMAN0	0x00200000 /* direct-connect portal: fman0 */
+#define QM_CGR_TARG_FMAN1	0x00100000 /*			   : fman1 */
+/* Convert CGR thresholds to/from "cs_thres" format */
+static inline u64 qm_cgr_cs_thres_get64(const struct qm_cgr_cs_thres *th)
+{
+	return (u64)th->TA << th->Tn;
+}
+
+static inline int qm_cgr_cs_thres_set64(struct qm_cgr_cs_thres *th, u64 val,
+					int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	th->Tn = e;
+	th->TA = val;
+	return 0;
+}
+
+/* See 1.5.8.5.1: "Initialize FQ" */
+/* See 1.5.8.5.2: "Query FQ" */
+/* See 1.5.8.5.3: "Query FQ Non-Programmable Fields" */
+/* See 1.5.8.5.4: "Alter FQ State Commands " */
+/* See 1.5.8.6.1: "Initialize/Modify CGR" */
+/* See 1.5.8.6.2: "CGR Test Write" */
+/* See 1.5.8.6.3: "Query CGR" */
+/* See 1.5.8.6.4: "Query Congestion Group State" */
+struct qm_mcc_initfq {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	u32 fqid;	/* 24-bit */
+	u16 count;	/* Initialises 'count+1' FQDs */
+	struct qm_fqd fqd; /* the FQD fields go here */
+	u8 __reserved3[30];
+} __packed;
+struct qm_mcc_queryfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_queryfq_np {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_alterfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2;
+	u8 count;	/* number of consecutive FQIDs */
+	u8 __reserved3[10];
+	u32 context_b;	/* frame queue context b */
+	u8 __reserved4[40];
+} __packed;
+struct qm_mcc_initcgr {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	struct __qm_mc_cgr cgr;	/* CGR fields */
+	u8 __reserved2[2];
+	u8 cgid;
+	u8 __reserved4[32];
+} __packed;
+struct qm_mcc_cgrtestwrite {
+	u8 __reserved1[2];
+	u8 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u8 __reserved2[23];
+	u8 cgid;
+	u8 __reserved3[32];
+} __packed;
+struct qm_mcc_querycgr {
+	u8 __reserved1[30];
+	u8 cgid;
+	u8 __reserved2[32];
+} __packed;
+struct qm_mcc_querycongestion {
+	u8 __reserved[63];
+} __packed;
+struct qm_mcc_querywq {
+	u8 __reserved;
+	/* select channel if verb != QUERYWQ_DEDICATED */
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved1:3;
+#else
+			u16 __reserved1:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved2[60];
+} __packed;
+
+struct qm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct qm_mcc_initfq initfq;
+		struct qm_mcc_queryfq queryfq;
+		struct qm_mcc_queryfq_np queryfq_np;
+		struct qm_mcc_alterfq alterfq;
+		struct qm_mcc_initcgr initcgr;
+		struct qm_mcc_cgrtestwrite cgrtestwrite;
+		struct qm_mcc_querycgr querycgr;
+		struct qm_mcc_querycongestion querycongestion;
+		struct qm_mcc_querywq querywq;
+	};
+} __packed;
+
+/* INITFQ-specific flags */
+#define QM_INITFQ_WE_MASK		0x01ff	/* 'Write Enable' flags; */
+#define QM_INITFQ_WE_OAC		0x0100
+#define QM_INITFQ_WE_ORPC		0x0080
+#define QM_INITFQ_WE_CGID		0x0040
+#define QM_INITFQ_WE_FQCTRL		0x0020
+#define QM_INITFQ_WE_DESTWQ		0x0010
+#define QM_INITFQ_WE_ICSCRED		0x0008
+#define QM_INITFQ_WE_TDTHRESH		0x0004
+#define QM_INITFQ_WE_CONTEXTB		0x0002
+#define QM_INITFQ_WE_CONTEXTA		0x0001
+/* INITCGR/MODIFYCGR-specific flags */
+#define QM_CGR_WE_MASK			0x07ff	/* 'Write Enable Mask'; */
+#define QM_CGR_WE_WR_PARM_G		0x0400
+#define QM_CGR_WE_WR_PARM_Y		0x0200
+#define QM_CGR_WE_WR_PARM_R		0x0100
+#define QM_CGR_WE_WR_EN_G		0x0080
+#define QM_CGR_WE_WR_EN_Y		0x0040
+#define QM_CGR_WE_WR_EN_R		0x0020
+#define QM_CGR_WE_CSCN_EN		0x0010
+#define QM_CGR_WE_CSCN_TARG		0x0008
+#define QM_CGR_WE_CSTD_EN		0x0004
+#define QM_CGR_WE_CS_THRES		0x0002
+#define QM_CGR_WE_MODE			0x0001
+
+struct qm_mcr_initfq {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_queryfq {
+	u8 __reserved1[8];
+	struct qm_fqd fqd;	/* the FQD fields are here */
+	u8 __reserved2[30];
+} __packed;
+struct qm_mcr_queryfq_np {
+	u8 __reserved1;
+	u8 state;	/* QM_MCR_NP_STATE_*** */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 __reserved2;
+	u32 fqd_link:24;
+	u16 __reserved3:2;
+	u16 odp_seq:14;
+	u16 __reserved4:2;
+	u16 orp_nesn:14;
+	u16 __reserved5:1;
+	u16 orp_ea_hseq:15;
+	u16 __reserved6:1;
+	u16 orp_ea_tseq:15;
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+	u8 __reserved11[5];
+	u8 __reserved12:7;
+	u8 is:1;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#else
+	u8 __reserved2;
+	u32 fqd_link:24;
+
+	u16 odp_seq:14;
+	u16 __reserved3:2;
+
+	u16 orp_nesn:14;
+	u16 __reserved4:2;
+
+	u16 orp_ea_hseq:15;
+	u16 __reserved5:1;
+
+	u16 orp_ea_tseq:15;
+	u16 __reserved6:1;
+
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+
+	u8 __reserved11[5];
+	u8 is:1;
+	u8 __reserved12:7;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#endif
+} __packed;
+
+struct qm_mcr_alterfq {
+	u8 fqs;		/* Frame Queue Status */
+	u8 __reserved1[61];
+} __packed;
+struct qm_mcr_initcgr {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_cgrtestwrite {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	u32 __reserved3:24;
+	u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u32 __reserved4:24;
+	u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+	u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+	u16 lgt;	/* Last Group Tick */
+	u16 wr_prob_g;
+	u16 wr_prob_y;
+	u16 wr_prob_r;
+	u8 __reserved5[8];
+} __packed;
+struct qm_mcr_querycgr {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved3:24;
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 __reserved3:24;
+#endif
+		};
+		u64 i_bcnt;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved4:24;
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 __reserved4:24;
+#endif
+		};
+		u64 a_bcnt;
+	};
+	union {
+		u32 cscn_targ_swp[4];
+		u8 __reserved5[16];
+	};
+} __packed;
+
+struct __qm_mcr_querycongestion {
+	u32 state[8];
+};
+
+struct qm_mcr_querycongestion {
+	u8 __reserved[30];
+	/* Access this struct using QM_MCR_QUERYCONGESTION() */
+	struct __qm_mcr_querycongestion state;
+} __packed;
+struct qm_mcr_querywq {
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved:3;
+#else
+			u16 __reserved:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved[28];
+	u32 wq_len[8];
+} __packed;
+
+struct qm_mc_result {
+	u8 verb;
+	u8 result;
+	union {
+		struct qm_mcr_initfq initfq;
+		struct qm_mcr_queryfq queryfq;
+		struct qm_mcr_queryfq_np queryfq_np;
+		struct qm_mcr_alterfq alterfq;
+		struct qm_mcr_initcgr initcgr;
+		struct qm_mcr_cgrtestwrite cgrtestwrite;
+		struct qm_mcr_querycgr querycgr;
+		struct qm_mcr_querycongestion querycongestion;
+		struct qm_mcr_querywq querywq;
+	};
+} __packed;
+
+#define QM_MCR_VERB_RRID		0x80
+#define QM_MCR_VERB_MASK		QM_MCC_VERB_MASK
+#define QM_MCR_VERB_INITFQ_PARKED	QM_MCC_VERB_INITFQ_PARKED
+#define QM_MCR_VERB_INITFQ_SCHED	QM_MCC_VERB_INITFQ_SCHED
+#define QM_MCR_VERB_QUERYFQ		QM_MCC_VERB_QUERYFQ
+#define QM_MCR_VERB_QUERYFQ_NP		QM_MCC_VERB_QUERYFQ_NP
+#define QM_MCR_VERB_QUERYWQ		QM_MCC_VERB_QUERYWQ
+#define QM_MCR_VERB_QUERYWQ_DEDICATED	QM_MCC_VERB_QUERYWQ_DEDICATED
+#define QM_MCR_VERB_ALTER_SCHED		QM_MCC_VERB_ALTER_SCHED
+#define QM_MCR_VERB_ALTER_FE		QM_MCC_VERB_ALTER_FE
+#define QM_MCR_VERB_ALTER_RETIRE	QM_MCC_VERB_ALTER_RETIRE
+#define QM_MCR_VERB_ALTER_OOS		QM_MCC_VERB_ALTER_OOS
+#define QM_MCR_RESULT_NULL		0x00
+#define QM_MCR_RESULT_OK		0xf0
+#define QM_MCR_RESULT_ERR_FQID		0xf1
+#define QM_MCR_RESULT_ERR_FQSTATE	0xf2
+#define QM_MCR_RESULT_ERR_NOTEMPTY	0xf3	/* OOS fails if FQ is !empty */
+#define QM_MCR_RESULT_ERR_BADCHANNEL	0xf4
+#define QM_MCR_RESULT_PENDING		0xf8
+#define QM_MCR_RESULT_ERR_BADCOMMAND	0xff
+#define QM_MCR_NP_STATE_FE		0x10
+#define QM_MCR_NP_STATE_R		0x08
+#define QM_MCR_NP_STATE_MASK		0x07	/* Reads FQD::STATE; */
+#define QM_MCR_NP_STATE_OOS		0x00
+#define QM_MCR_NP_STATE_RETIRED		0x01
+#define QM_MCR_NP_STATE_TEN_SCHED	0x02
+#define QM_MCR_NP_STATE_TRU_SCHED	0x03
+#define QM_MCR_NP_STATE_PARKED		0x04
+#define QM_MCR_NP_STATE_ACTIVE		0x05
+#define QM_MCR_NP_PTR_MASK		0x07ff	/* for RA[12] & OD[123] */
+#define QM_MCR_NP_RA1_NRA(v)		(((v) >> 14) & 0x3)	/* FQD::NRA */
+#define QM_MCR_NP_RA2_IT(v)		(((v) >> 14) & 0x1)	/* FQD::IT */
+#define QM_MCR_NP_OD1_NOD(v)		(((v) >> 14) & 0x3)	/* FQD::NOD */
+#define QM_MCR_NP_OD3_NPC(v)		(((v) >> 14) & 0x3)	/* FQD::NPC */
+#define QM_MCR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MCR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+/* This extracts the state for congestion group 'n' from a query response.
+ * Eg.
+ *   u8 cgr = [...];
+ *   struct qm_mc_result *res = [...];
+ *   printf("congestion group %d congestion state: %d\n", cgr,
+ *       QM_MCR_QUERYCONGESTION(&res->querycongestion.state, cgr));
+ */
+#define __CGR_WORD(num)		(num >> 5)
+#define __CGR_SHIFT(num)	(num & 0x1f)
+#define __CGR_NUM		(sizeof(struct __qm_mcr_querycongestion) << 3)
+static inline int QM_MCR_QUERYCONGESTION(struct __qm_mcr_querycongestion *p,
+					 u8 cgr)
+{
+	return be32_to_cpu(p->state[__CGR_WORD(cgr)]) &
+	       (0x80000000 >> __CGR_SHIFT(cgr));
+}
+
+	/* Portal and Frame Queues */
+/* Represents a managed portal */
+struct qman_portal;
+
+/*
+ * This object type represents QMan frame queue descriptors (FQD), it is
+ * cacheline-aligned, and initialised by qman_create_fq(). The structure is
+ * defined further down.
+ */
+struct qman_fq;
+
+/*
+ * This object type represents a QMan congestion group, it is defined further
+ * down.
+ */
+struct qman_cgr;
+
+/*
+ * This enum, and the callback type that returns it, are used when handling
+ * dequeued frames via DQRR. Note that for "null" callbacks registered with the
+ * portal object (for handling dequeues that do not demux because context_b is
+ * NULL), the return value *MUST* be qman_cb_dqrr_consume.
+ */
+enum qman_cb_dqrr_result {
+	/* DQRR entry can be consumed */
+	qman_cb_dqrr_consume,
+	/* Like _consume, but requests parking - FQ must be held-active */
+	qman_cb_dqrr_park,
+	/* Does not consume, for DCA mode only. This allows out-of-order
+	 * consumes by explicit calls to qman_dca() and/or the use of implicit
+	 * DCA via EQCR entries.
+	 */
+	qman_cb_dqrr_defer,
+	/*
+	 * Stop processing without consuming this ring entry. Exits the current
+	 * qman_p_poll_dqrr() or interrupt-handling, as appropriate. If within
+	 * an interrupt handler, the callback would typically call
+	 * qman_irqsource_remove(QM_PIRQ_DQRI) before returning this value,
+	 * otherwise the interrupt will reassert immediately.
+	 */
+	qman_cb_dqrr_stop,
+	/* Like qman_cb_dqrr_stop, but consumes the current entry. */
+	qman_cb_dqrr_consume_stop
+};
+
+typedef enum qman_cb_dqrr_result (*qman_cb_dqrr)(struct qman_portal *qm,
+					struct qman_fq *fq,
+					const struct qm_dqrr_entry *dqrr);
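+
+/*
+ * Illustrative sketch (not part of the API): a minimal callback that
+ * consumes every entry could look like the following, where handle_frame()
+ * stands in for a hypothetical application handler;
+ *
+ *     static enum qman_cb_dqrr_result
+ *     my_dqrr_cb(struct qman_portal *qm, struct qman_fq *fq,
+ *                const struct qm_dqrr_entry *dqrr)
+ *     {
+ *         handle_frame(&dqrr->fd);
+ *         return qman_cb_dqrr_consume;
+ *     }
+ */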
+
+/*
+ * This callback type is used when handling ERNs, FQRNs and FQRLs via MR. They
+ * are always consumed after the callback returns.
+ */
+typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
+				const struct qm_mr_entry *msg);
+
+/* This callback type is used when handling DCP ERNs */
+typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
+				const struct qm_mr_entry *msg);
+/*
+ * s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
+ * held-active + held-suspended are just "sched". Things like "retired" will not
+ * be assumed until it is complete (ie. QMAN_FQ_STATE_CHANGING is set until
+ * then, to indicate it's completing and to gate attempts to retry the retire
+ * command). Note, park commands do not set QMAN_FQ_STATE_CHANGING because it's
+ * technically impossible in the case of enqueue DCAs (which refer to DQRR ring
+ * index rather than the FQ that ring entry corresponds to), so repeated park
+ * commands are allowed (if you're silly enough to try) but won't change FQ
+ * state, and the resulting park notifications move FQs from "sched" to
+ * "parked".
+ */
+enum qman_fq_state {
+	qman_fq_state_oos,
+	qman_fq_state_parked,
+	qman_fq_state_sched,
+	qman_fq_state_retired
+};
+
+
+/*
+ * Frame queue objects (struct qman_fq) are stored within memory passed to
+ * qman_create_fq(), as this allows stashing of caller-provided demux callback
+ * pointers at no extra cost to stashing of (driver-internal) FQ state. If the
+ * caller wishes to add per-FQ state and have it benefit from dequeue-stashing,
+ * they should:
+ *
+ * (a) extend the qman_fq structure with their state; eg.
+ *
+ *     // myfq is allocated and driver_fq callbacks filled in;
+ *     struct my_fq {
+ *	   struct qman_fq base;
+ *	   int an_extra_field;
+ *	   [ ... add other fields to be associated with each FQ ...]
+ *     } *myfq = some_my_fq_allocator();
+ *     struct qman_fq *fq = qman_create_fq(fqid, flags, &myfq->base);
+ *
+ *     // in a dequeue callback, access extra fields from 'fq' via a cast;
+ *     struct my_fq *myfq = (struct my_fq *)fq;
+ *     do_something_with(myfq->an_extra_field);
+ *     [...]
+ *
+ * (b) when and if configuring the FQ for context stashing, specify how ever
+ *     many cachelines are required to stash 'struct my_fq', to accelerate not
+ *     only the QMan driver but the callback as well.
+ */
+
+struct qman_fq_cb {
+	qman_cb_dqrr dqrr;	/* for dequeued frames */
+	qman_cb_mr ern;		/* for s/w ERNs */
+	qman_cb_mr fqs;		/* frame-queue state changes*/
+};
+
+struct qman_fq {
+	/* Caller of qman_create_fq() provides these demux callbacks */
+	struct qman_fq_cb cb;
+	/*
+	 * These are internal to the driver, don't touch. In particular, they
+	 * may change, be removed, or extended (so you shouldn't rely on
+	 * sizeof(qman_fq) being a constant).
+	 */
+	spinlock_t fqlock;
+	u32 fqid;
+	/* DPDK Interface */
+	void *dpaa_intf;
+
+	volatile unsigned long flags;
+	enum qman_fq_state state;
+	int cgr_groupid;
+	struct rb_node node;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	u32 key;
+#endif
+};
+
+/*
+ * This callback type is used when handling congestion group entry/exit.
+ * 'congested' is non-zero on congestion-entry, and zero on congestion-exit.
+ */
+typedef void (*qman_cb_cgr)(struct qman_portal *qm,
+			    struct qman_cgr *cgr, int congested);
+
+struct qman_cgr {
+	/* Set these prior to qman_create_cgr() */
+	u32 cgrid; /* 0..255, but u32 to allow specials like -1, 256, etc.*/
+	qman_cb_cgr cb;
+	/* These are private to the driver */
+	u16 chan; /* portal channel this object is created on */
+	struct list_head node;
+};
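+
+/*
+ * Illustrative sketch: the caller fills 'cgrid' and 'cb' before handing the
+ * object to qman_create_cgr(); my_cscn_cb is a hypothetical congestion state
+ * change handler;
+ *
+ *     static void my_cscn_cb(struct qman_portal *qm, struct qman_cgr *cgr,
+ *                            int congested)
+ *     {
+ *         [ ... react to congestion entry/exit ... ]
+ *     }
+ *
+ *     struct qman_cgr my_cgr = { .cgrid = 1, .cb = my_cscn_cb };
+ */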
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_QMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 4ff48c6..b0d953f 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -47,6 +47,10 @@
 extern "C" {
 #endif
 
+/* Thread-entry/exit hooks; */
+int qman_thread_init(void);
+int qman_thread_finish(void);
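+/* Typical per-thread flow (illustrative sketch): qman_thread_init() is
+ * called on a thread before it touches any portal APIs, and
+ * qman_thread_finish() before the thread exits.
+ */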
+
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
 /* Obtain and free raw (uninitialized) portals */
@@ -81,6 +85,15 @@ int qman_free_raw_portal(struct dpaa_raw_portal *portal);
 int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
+/* Post-process interrupts. NB, the kernel IRQ handler disables the interrupt
+ * line before notifying us, and this post-processing re-enables it once
+ * processing is complete. As such, it is essential to call this before going
+ * into another blocking read/select/poll.
+ */
+void qman_thread_irq(void);
+
+/* Global setup */
+int qman_global_init(void);
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4


* [PATCH v2 11/40] bus/dpaa: add QMan driver core routines
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (9 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
                     ` (30 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    2 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |   88 ++
 drivers/bus/dpaa/base/qbman/qman.c        | 2402 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h        |  888 +++++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c |   12 +
 drivers/bus/dpaa/base/qbman/qman_priv.h   |   11 -
 drivers/bus/dpaa/include/fsl_qman.h       |  767 ++++++++-
 drivers/bus/dpaa/include/fsl_usd.h        |    1 +
 8 files changed, 4148 insertions(+), 23 deletions(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index f1120bd..ad68828 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -71,7 +71,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
+	base/qbman/dpaa_alloc.c \
 	base/qbman/dpaa_sys.c
 
 # Link Pthread
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
new file mode 100644
index 0000000..690576a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2009-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "dpaa_sys.h"
+#include <process.h>
+#include <fsl_qman.h>
+
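+/* The allocators below are thin wrappers that delegate FQID, pool-channel
+ * and CGRID range management to the process interface (see process.h).
+ */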
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_fqid, result, count, align, partial);
+}
+
+void qman_release_fqid_range(u32 fqid, u32 count)
+{
+	process_release(dpaa_id_fqid, fqid, count);
+}
+
+int qman_reserve_fqid_range(u32 fqid, unsigned int count)
+{
+	return process_reserve(dpaa_id_fqid, fqid, count);
+}
+
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_qpool, result, count, align, partial);
+}
+
+void qman_release_pool_range(u32 pool, u32 count)
+{
+	process_release(dpaa_id_qpool, pool, count);
+}
+
+int qman_reserve_pool_range(u32 pool, u32 count)
+{
+	return process_reserve(dpaa_id_qpool, pool, count);
+}
+
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_cgrid, result, count, align, partial);
+}
+
+void qman_release_cgrid_range(u32 cgrid, u32 count)
+{
+	process_release(dpaa_id_cgrid, cgrid, count);
+}
+
+int qman_reserve_cgrid_range(u32 cgrid, u32 count)
+{
+	return process_reserve(dpaa_id_cgrid, cgrid, count);
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
new file mode 100644
index 0000000..829e671
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -0,0 +1,2402 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define DQRR_MAXFILL	15
+#define EQCR_ITHRESH	4	/* if EQCR congests, interrupt threshold */
+#define IRQNAME		"QMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "QMan portal %d" */
+/* maximum number of DQRR entries to process in qman_poll() */
+#define FSL_QMAN_POLL_LIMIT 8
+
+/* Lock/unlock frame queues, subject to the "LOCKED" flag. This is about
+ * inter-processor locking only. Note, FQLOCK() is always called either under a
+ * local_irq_save() or from interrupt context - hence there's no need for irq
+ * protection (and indeed, attempting to nest irq-protection doesn't work, as
+ * the "irq en/disable" machinery isn't recursive...).
+ */
+#define FQLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_lock(&__fq478->fqlock); \
+	} while (0)
+#define FQUNLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_unlock(&__fq478->fqlock); \
+	} while (0)
+
+static inline void fq_set(struct qman_fq *fq, u32 mask)
+{
+	dpaa_set_bits(mask, &fq->flags);
+}
+
+static inline void fq_clear(struct qman_fq *fq, u32 mask)
+{
+	dpaa_clear_bits(mask, &fq->flags);
+}
+
+static inline int fq_isset(struct qman_fq *fq, u32 mask)
+{
+	return fq->flags & mask;
+}
+
+static inline int fq_isclear(struct qman_fq *fq, u32 mask)
+{
+	return !(fq->flags & mask);
+}
+
+struct qman_portal {
+	struct qm_portal p;
+	/* PORTAL_BITS_*** - dynamic, strictly internal */
+	unsigned long bits;
+	/* interrupt sources processed by portal_isr(), configurable */
+	unsigned long irq_sources;
+	u32 use_eqcr_ci_stashing;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* only 1 volatile dequeue at a time */
+	struct qman_fq *vdqcr_owned;
+	u32 sdqcr;
+	int dqrr_disable_ref;
+	/* A portal-specific handler for DCP ERNs. If this is NULL, the global
+	 * handler is called instead.
+	 */
+	qman_cb_dc_ern cb_dc_ern;
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct qm_portal_config *config;
+	struct dpa_rbtree retire_table;
+	char irqname[MAX_IRQNAME];
+	/* 2-element array. cgrs[0] is mask, cgrs[1] is snapshot. */
+	struct qman_cgrs *cgrs;
+	/* linked-list of CSCN handlers. */
+	struct list_head cgr_cbs;
+	/* list lock */
+	spinlock_t cgr_lock;
+	/* track if memory was allocated by the driver */
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	/* Keep a shadow copy of the DQRR on LE systems as the SW needs to
+	 * byte-swap the read-only DQRR memory. The first entry must be
+	 * aligned to 2^10 so that DQRR index calculations can be based on
+	 * the shadow copy address (6 bits for the address shift + 4 bits
+	 * for the DQRR size).
+	 */
+	struct qm_dqrr_entry shadow_dqrr[QM_DQRR_SIZE]
+		    __attribute__((aligned(1024)));
+#endif
+};
+
+/* Global handler for DCP ERNs. Used when the portal receiving the message does
+ * not have a portal-specific handler.
+ */
+static qman_cb_dc_ern cb_dc_ern;
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static u16 affine_channels[NR_CPUS];
+static DEFINE_PER_CPU(struct qman_portal, qman_affine_portal);
+
+static inline struct qman_portal *get_affine_portal(void)
+{
+	return &get_cpu_var(qman_affine_portal);
+}
+
+/* This gives an FQID->FQ lookup to cover the fact that we can't directly demux
+ * retirement notifications (the fact that they are sometimes h/w-consumed means
+ * that contextB isn't always a s/w demux - and as we can't know which case it is
+ * when looking at the notification, we have to use the slow lookup for all of
+ * them). NB, it's possible to have multiple FQ objects refer to the same FQID
+ * (though at most one of them should be the consumer), so this table isn't for
+ * all FQs - FQs are added when retirement commands are issued, and removed when
+ * they complete, which also massively reduces the size of this table.
+ */
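+/* IMPLEMENT_DPAA_RBTREE() expands to the fqtree_push()/fqtree_del()/
+ * fqtree_find() helpers used by the table_*_fq() wrappers below, keyed on
+ * the FQ object's 'fqid' field.
+ */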
+IMPLEMENT_DPAA_RBTREE(fqtree, struct qman_fq, node, fqid);
+/*
+ * This is what everything can wait on, even if it migrates to a different cpu
+ * to the one whose affine portal it is waiting on.
+ */
+static DECLARE_WAIT_QUEUE_HEAD(affine_queue);
+
+static inline int table_push_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	int ret = fqtree_push(&p->retire_table, fq);
+
+	if (ret)
+		pr_err("ERROR: double FQ-retirement %d\n", fq->fqid);
+	return ret;
+}
+
+static inline void table_del_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	fqtree_del(&p->retire_table, fq);
+}
+
+static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
+{
+	return fqtree_find(&p->retire_table, fqid);
+}
+
+static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to HW format */
+	fqd->fq_ctrl = cpu_to_be16(fqd->fq_ctrl);
+	fqd->dest_wq = cpu_to_be16(fqd->dest_wq);
+	fqd->ics_cred = cpu_to_be16(fqd->ics_cred);
+	fqd->context_b = cpu_to_be32(fqd->context_b);
+	fqd->context_a.opaque = cpu_to_be64(fqd->context_a.opaque);
+	fqd->opaque_td = cpu_to_be16(fqd->opaque_td);
+}
+
+static inline void hw_fqd_to_cpu(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to CPU format */
+	fqd->fq_ctrl = be16_to_cpu(fqd->fq_ctrl);
+	fqd->dest_wq = be16_to_cpu(fqd->dest_wq);
+	fqd->ics_cred = be16_to_cpu(fqd->ics_cred);
+	fqd->context_b = be32_to_cpu(fqd->context_b);
+	fqd->context_a.opaque = be64_to_cpu(fqd->context_a.opaque);
+}
+
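+/* Frame descriptor addresses are 40 bits wide; cpu_to_be40()/be40_to_cpu()
+ * handle that non-power-of-two width.
+ */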
+static inline void cpu_to_hw_fd(struct qm_fd *fd)
+{
+	fd->addr = cpu_to_be40(fd->addr);
+	fd->status = cpu_to_be32(fd->status);
+	fd->opaque = cpu_to_be32(fd->opaque);
+}
+
+static inline void hw_fd_to_cpu(struct qm_fd *fd)
+{
+	fd->addr = be40_to_cpu(fd->addr);
+	fd->status = be32_to_cpu(fd->status);
+	fd->opaque = be32_to_cpu(fd->opaque);
+}
+
+/* In the case that slow- and fast-path handling are both done by qman_poll()
+ * (ie. because there is no interrupt handling), we ought to balance how often
+ * we do the fast-path poll versus the slow-path poll. We'll use two decrementer
+ * sources, so we call the fast poll 'n' times before calling the slow poll
+ * once. The idle decrementer constant is used when the last slow-poll detected
+ * no work to do, and the busy decrementer constant when the last slow-poll had
+ * work to do.
+ */
+#define SLOW_POLL_IDLE   1000
+#define SLOW_POLL_BUSY   10
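+/* qman_poll() below reloads p->slowpoll with one of these constants,
+ * depending on whether the last slow poll found any work to do.
+ */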
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is);
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit);
+
+/* Portal interrupt handler */
+static irqreturn_t portal_isr(__always_unused int irq, void *ptr)
+{
+	struct qman_portal *p = ptr;
+	/*
+	 * The CSCI/CCSCI source is cleared inside __poll_portal_slow(), because
+	 * it could race against a Query Congestion State command also given
+	 * as part of the handling of this interrupt source. We mustn't
+	 * clear it a second time in this top-level function.
+	 */
+	u32 clear = QM_DQAVAIL_MASK | (p->irq_sources &
+		~(QM_PIRQ_CSCI | QM_PIRQ_CCSCI));
+	u32 is = qm_isr_status_read(&p->p) & p->irq_sources;
+	/* DQRR-handling if it's interrupt-driven */
+	if (is & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+	/* Handling of anything else that's interrupt-driven */
+	clear |= __poll_portal_slow(p, is);
+	qm_isr_status_clear(&p->p, clear);
+	return IRQ_HANDLED;
+}
+
+/* This inner version is used privately by qman_create_affine_portal(), as well
+ * as by the exported qman_stop_dequeues().
+ */
+static inline void qman_stop_dequeues_ex(struct qman_portal *p)
+{
+	if (!(p->dqrr_disable_ref++))
+		qm_dqrr_set_maxfill(&p->p, 0);
+}
+
+static int drain_mr_fqrni(struct qm_portal *p)
+{
+	const struct qm_mr_entry *msg;
+loop:
+	msg = qm_mr_current(p);
+	if (!msg) {
+		/*
+		 * if MR was full and h/w had other FQRNI entries to produce, we
+		 * need to allow it time to produce those entries once the
+		 * existing entries are consumed. A worst-case situation
+		 * (fully-loaded system) means h/w sequencers may have to do 3-4
+		 * other things before servicing the portal's MR pump, each of
+		 * which (if slow) may take ~50 qman cycles (which is ~200
+		 * processor cycles). So rounding up and then multiplying this
+		 * worst-case estimate by a factor of 10, just to be
+		 * ultra-paranoid, goes as high as 10,000 cycles. NB, we consume
+		 * one entry at a time, so h/w has an opportunity to produce new
+		 * entries well before the ring has been fully consumed, so
+		 * we're being *really* paranoid here.
+		 */
+		u64 now, then = mfatb();
+
+		do {
+			now = mfatb();
+		} while ((then + 10000) > now);
+		msg = qm_mr_current(p);
+		if (!msg)
+			return 0;
+	}
+	if ((msg->verb & QM_MR_VERB_TYPE_MASK) != QM_MR_VERB_FQRNI) {
+		/* We aren't draining anything but FQRNIs */
+		pr_err("Found verb 0x%x in MR\n", msg->verb);
+		return -1;
+	}
+	qm_mr_next(p);
+	qm_mr_cci_consume(p, 1);
+	goto loop;
+}
+
+static inline int qm_eqcr_init(struct qm_portal *portal,
+			       enum qm_eqcr_pmode pmode,
+			       unsigned int eq_stash_thresh,
+			       int eq_stash_prio)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u32 cfg;
+	u8 pi;
+
+	eqcr->ring = portal->addr.ce + QM_CL_EQCR;
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	eqcr->cursor = eqcr->ring + pi;
+	eqcr->vbit = (qm_in(EQCR_PI_CINH) & QM_EQCR_SIZE) ?
+			QM_EQCR_VERB_VBIT : 0;
+	eqcr->available = QM_EQCR_SIZE - 1 -
+			qm_cyc_diff(QM_EQCR_SIZE, eqcr->ci, pi);
+	eqcr->ithresh = qm_in(EQCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+	eqcr->pmode = pmode;
+#endif
+	cfg = (qm_in(CFG) & 0x00ffffff) |
+		(eq_stash_thresh << 28) | /* QCSP_CFG: EST */
+		(eq_stash_prio << 26)	| /* QCSP_CFG: EP */
+		((pmode & 0x3) << 24);	/* QCSP_CFG::EPM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_eqcr_finish(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 pi, ci;
+	u32 cfg;
+
+	/*
+	 * Disable EQCI stashing because the QMan only
+	 * presents the value it previously stashed to
+	 * maintain coherency.  Setting the stash threshold
+	 * to 1 then 0 ensures that QMan has resynchronized
+	 * its internal copy so that the portal is clean
+	 * when it is reinitialized in the future.
+	 */
+	cfg = (qm_in(CFG) & 0x0fffffff) |
+		(1 << 28); /* QCSP_CFG: EST */
+	qm_out(CFG, cfg);
+	cfg &= 0x0fffffff; /* stash threshold = 0 */
+	qm_out(CFG, cfg);
+
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+
+	/* Refresh EQCR CI cache value */
+	qm_cl_invalidate(EQCR_CI);
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (pi != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("losing uncommitted EQCR entries\n");
+	if (ci != eqcr->ci)
+		pr_crit("missing existing EQCR completions\n");
+	if (eqcr->ci != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("EQCR destroyed unquiesced\n");
+}
+
+static inline int qm_dqrr_init(struct qm_portal *portal,
+			__maybe_unused const struct qm_portal_config *config,
+			enum qm_dqrr_dmode dmode,
+			__maybe_unused enum qm_dqrr_pmode pmode,
+			enum qm_dqrr_cmode cmode, u8 max_fill)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u32 cfg;
+
+	/* Make sure the DQRR will be idle when we enable */
+	qm_out(DQRR_SDQCR, 0);
+	qm_out(DQRR_VDQCR, 0);
+	qm_out(DQRR_PDQCR, 0);
+	dqrr->ring = portal->addr.ce + QM_CL_DQRR;
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->cursor = dqrr->ring + dqrr->ci;
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+	dqrr->vbit = (qm_in(DQRR_PI_CINH) & QM_DQRR_SIZE) ?
+			QM_DQRR_VERB_VBIT : 0;
+	dqrr->ithresh = qm_in(DQRR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	dqrr->dmode = dmode;
+	dqrr->pmode = pmode;
+	dqrr->cmode = cmode;
+#endif
+	/* Invalidate every ring entry before beginning */
+	for (cfg = 0; cfg < QM_DQRR_SIZE; cfg++)
+		dccivac(qm_cl(dqrr->ring, cfg));
+	cfg = (qm_in(CFG) & 0xff000f00) |
+		((max_fill & (QM_DQRR_SIZE - 1)) << 20) | /* DQRR_MF */
+		((dmode & 1) << 18) |			/* DP */
+		((cmode & 3) << 16) |			/* DCM */
+		0xa0 |					/* RE+SE */
+		(0 ? 0x40 : 0) |			/* Ignore RP */
+		(0 ? 0x10 : 0);				/* Ignore SP */
+	qm_out(CFG, cfg);
+	qm_dqrr_set_maxfill(portal, max_fill);
+	return 0;
+}
+
+static inline void qm_dqrr_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if ((dqrr->cmode != qm_dqrr_cdc) &&
+	    (dqrr->ci != DQRR_PTR2IDX(dqrr->cursor)))
+		pr_crit("Ignoring completed DQRR entries\n");
+#endif
+}
+
+static inline int qm_mr_init(struct qm_portal *portal,
+			     __maybe_unused enum qm_mr_pmode pmode,
+			     enum qm_mr_cmode cmode)
+{
+	register struct qm_mr *mr = &portal->mr;
+	u32 cfg;
+
+	mr->ring = portal->addr.ce + QM_CL_MR;
+	mr->pi = qm_in(MR_PI_CINH) & (QM_MR_SIZE - 1);
+	mr->ci = qm_in(MR_CI_CINH) & (QM_MR_SIZE - 1);
+	mr->cursor = mr->ring + mr->ci;
+	mr->fill = qm_cyc_diff(QM_MR_SIZE, mr->ci, mr->pi);
+	mr->vbit = (qm_in(MR_PI_CINH) & QM_MR_SIZE) ? QM_MR_VERB_VBIT : 0;
+	mr->ithresh = qm_in(MR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mr->pmode = pmode;
+	mr->cmode = cmode;
+#endif
+	cfg = (qm_in(CFG) & 0xfffff0ff) |
+		((cmode & 1) << 8);		/* QCSP_CFG:MM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+	const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+	DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+		mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+		if (!mr->pi)
+			mr->vbit ^= QM_MR_VERB_VBIT;
+		mr->fill++;
+		res = MR_INC(res);
+	}
+	dcbit_ro(res);
+}
+
+static inline
+struct qman_portal *qman_create_portal(
+			struct qman_portal *portal,
+			      const struct qm_portal_config *c,
+			      const struct qman_cgrs *cgrs)
+{
+	struct qm_portal *p;
+	char buf[16];
+	int ret;
+	u32 isdr;
+
+	p = &portal->p;
+
+	portal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+	/*
+	 * prep the low-level portal struct with the mapped addresses from the
+	 * config; everything that follows depends on it, and "config" is more
+	 * for (de)reference.
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	/*
+	 * If CI-stashing is used, the current defaults use a threshold of 3,
+	 * and stash with higher-than-DQRR priority.
+	 */
+	if (qm_eqcr_init(p, qm_eqcr_pvb,
+			 portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {
+		pr_err("Qman EQCR initialisation failed\n");
+		goto fail_eqcr;
+	}
+	if (qm_dqrr_init(p, c, qm_dqrr_dpush, qm_dqrr_pvb,
+			 qm_dqrr_cdc, DQRR_MAXFILL)) {
+		pr_err("Qman DQRR initialisation failed\n");
+		goto fail_dqrr;
+	}
+	if (qm_mr_init(p, qm_mr_pvb, qm_mr_cci)) {
+		pr_err("Qman MR initialisation failed\n");
+		goto fail_mr;
+	}
+	if (qm_mc_init(p)) {
+		pr_err("Qman MC initialisation failed\n");
+		goto fail_mc;
+	}
+
+	/* static interrupt-gating controls */
+	qm_dqrr_set_ithresh(p, 0);
+	qm_mr_set_ithresh(p, 0);
+	qm_isr_set_iperiod(p, 0);
+	portal->cgrs = kmalloc(2 * sizeof(*cgrs), GFP_KERNEL);
+	if (!portal->cgrs)
+		goto fail_cgrs;
+	/* initial snapshot is no-depletion */
+	qman_cgrs_init(&portal->cgrs[1]);
+	if (cgrs)
+		portal->cgrs[0] = *cgrs;
+	else
+		/* if the given mask is NULL, assume all CGRs can be seen */
+		qman_cgrs_fill(&portal->cgrs[0]);
+	INIT_LIST_HEAD(&portal->cgr_cbs);
+	spin_lock_init(&portal->cgr_lock);
+	portal->bits = 0;
+	portal->slowpoll = 0;
+	portal->sdqcr = QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |
+			QM_SDQCR_DEDICATED_PRECEDENCE | QM_SDQCR_TYPE_PRIO_QOS |
+			QM_SDQCR_TOKEN_SET(0xab) | QM_SDQCR_CHANNELS_DEDICATED;
+	portal->dqrr_disable_ref = 0;
+	portal->cb_dc_ern = NULL;
+	sprintf(buf, "qportal-%d", c->channel);
+	dpa_rbtree_init(&portal->retire_table);
+	isdr = 0xffffffff;
+	qm_isr_disable_write(p, isdr);
+	portal->irq_sources = 0;
+	qm_isr_enable_write(p, portal->irq_sources);
+	qm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, portal_isr, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need EQCR to be empty before continuing */
+	isdr &= ~QM_PIRQ_EQCI;
+	qm_isr_disable_write(p, isdr);
+	ret = qm_eqcr_get_fill(p);
+	if (ret) {
+		pr_err("Qman EQCR unclean\n");
+		goto fail_eqcr_empty;
+	}
+	isdr &= ~(QM_PIRQ_DQRI | QM_PIRQ_MRI);
+	qm_isr_disable_write(p, isdr);
+	if (qm_dqrr_current(p)) {
+		pr_err("Qman DQRR unclean\n");
+		qm_dqrr_cdc_consume_n(p, 0xffff);
+	}
+	if (qm_mr_current(p) && drain_mr_fqrni(p)) {
+		/* special handling, drain just in case it's a few FQRNIs */
+		if (drain_mr_fqrni(p))
+			goto fail_dqrr_mr_empty;
+	}
+	/* Success */
+	portal->config = c;
+	qm_isr_disable_write(p, 0);
+	qm_isr_uninhibit(p);
+	/* Write a sane SDQCR */
+	qm_dqrr_sdqcr_set(p, portal->sdqcr);
+	return portal;
+fail_dqrr_mr_empty:
+fail_eqcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->cgrs);
+	spin_lock_destroy(&portal->cgr_lock);
+fail_cgrs:
+	qm_mc_finish(p);
+fail_mc:
+	qm_mr_finish(p);
+fail_mr:
+	qm_dqrr_finish(p);
+fail_dqrr:
+	qm_eqcr_finish(p);
+fail_eqcr:
+	return NULL;
+}
+
+struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
+					      const struct qman_cgrs *cgrs)
+{
+	struct qman_portal *res;
+	struct qman_portal *portal = get_affine_portal();
+	/* A criteria for calling this function (from qman_driver.c) is that
+	 * we're already affine to the cpu and won't schedule onto another cpu.
+	 */
+
+	res = qman_create_portal(portal, c, cgrs);
+	if (res) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		affine_channels[c->cpu] =
+			c->channel;
+		spin_unlock(&affine_mask_lock);
+	}
+	return res;
+}
+
+static inline
+void qman_destroy_portal(struct qman_portal *qm)
+{
+	const struct qm_portal_config *pcfg;
+
+	/* Stop dequeues on the portal */
+	qm_dqrr_sdqcr_set(&qm->p, 0);
+
+	/*
+	 * NB we do this to "quiesce" EQCR. If we add enqueue-completions or
+	 * something related to QM_PIRQ_EQCI, this may need fixing.
+	 * Also, due to the prefetching model used for CI updates in the enqueue
+	 * path, this update will only invalidate the CI cacheline *after*
+	 * working on it, so we need to call this twice to ensure a full update
+	 * irrespective of where the enqueue processing was at when the teardown
+	 * began.
+	 */
+	qm_eqcr_cce_update(&qm->p);
+	qm_eqcr_cce_update(&qm->p);
+	pcfg = qm->config;
+
+	free_irq(pcfg->irq, qm);
+
+	kfree(qm->cgrs);
+	qm_mc_finish(&qm->p);
+	qm_mr_finish(&qm->p);
+	qm_dqrr_finish(&qm->p);
+	qm_eqcr_finish(&qm->p);
+
+	qm->config = NULL;
+
+	spin_lock_destroy(&qm->cgr_lock);
+}
+
+const struct qm_portal_config *qman_destroy_affine_portal(void)
+{
+	/* We don't want to redirect if we're a slave, use "raw" */
+	struct qman_portal *qm = get_affine_portal();
+	const struct qm_portal_config *pcfg;
+	int cpu;
+
+	pcfg = qm->config;
+	cpu = pcfg->cpu;
+
+	qman_destroy_portal(qm);
+
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int qman_get_portal_index(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+/* Inline helper to reduce nesting in __poll_portal_slow() */
+static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
+				   const struct qm_mr_entry *msg, u8 verb)
+{
+	FQLOCK(fq);
+	switch (verb) {
+	case QM_MR_VERB_FQRL:
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_ORL));
+		fq_clear(fq, QMAN_FQ_STATE_ORL);
+		table_del_fq(p, fq);
+		break;
+	case QM_MR_VERB_FQRN:
+		DPAA_ASSERT((fq->state == qman_fq_state_parked) ||
+			    (fq->state == qman_fq_state_sched));
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_CHANGING));
+		fq_clear(fq, QMAN_FQ_STATE_CHANGING);
+		if (msg->fq.fqs & QM_MR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (msg->fq.fqs & QM_MR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		fq->state = qman_fq_state_retired;
+		break;
+	case QM_MR_VERB_FQPN:
+		DPAA_ASSERT(fq->state == qman_fq_state_sched);
+		DPAA_ASSERT(fq_isclear(fq, QMAN_FQ_STATE_CHANGING));
+		fq->state = qman_fq_state_parked;
+	}
+	FQUNLOCK(fq);
+}
+
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
+{
+	const struct qm_mr_entry *msg;
+	struct qm_mr_entry swapped_msg;
+
+	if (is & QM_PIRQ_CSCI) {
+		struct qman_cgrs rr, c;
+		struct qm_mc_result *mcr;
+		struct qman_cgr *cgr;
+
+		spin_lock(&p->cgr_lock);
+		/*
+		 * The CSCI bit must be cleared _before_ issuing the
+		 * Query Congestion State command, to ensure that a long
+		 * CGR State Change callback cannot miss an intervening
+		 * state change.
+		 */
+		qm_isr_status_clear(&p->p, QM_PIRQ_CSCI);
+		qm_mc_start(&p->p);
+		qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+		while (!(mcr = qm_mc_result(&p->p)))
+			cpu_relax();
+		/* mask out the ones I'm not interested in */
+		qman_cgrs_and(&rr, (const struct qman_cgrs *)
+			&mcr->querycongestion.state, &p->cgrs[0]);
+		/* check previous snapshot for delta, enter/exit congestion */
+		qman_cgrs_xor(&c, &rr, &p->cgrs[1]);
+		/* update snapshot */
+		qman_cgrs_cp(&p->cgrs[1], &rr);
+		/* Invoke callback */
+		list_for_each_entry(cgr, &p->cgr_cbs, node)
+			if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid))
+				cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));
+		spin_unlock(&p->cgr_lock);
+	}
+
+	if (is & QM_PIRQ_EQRI) {
+		qm_eqcr_cce_update(&p->p);
+		qm_eqcr_set_ithresh(&p->p, 0);
+		wake_up(&affine_queue);
+	}
+
+	if (is & QM_PIRQ_MRI) {
+		struct qman_fq *fq;
+		u8 verb, num = 0;
+mr_loop:
+		qm_mr_pvb_update(&p->p);
+		msg = qm_mr_current(&p->p);
+		if (!msg)
+			goto mr_done;
+		swapped_msg = *msg;
+		hw_fd_to_cpu(&swapped_msg.ern.fd);
+		verb = msg->verb & QM_MR_VERB_TYPE_MASK;
+		/* The message is a software ERN iff the 0x20 bit is clear */
+		if (verb & 0x20) {
+			switch (verb) {
+			case QM_MR_VERB_FQRNI:
+				/* nada, we drop FQRNIs on the floor */
+				break;
+			case QM_MR_VERB_FQRN:
+			case QM_MR_VERB_FQRL:
+				/* Lookup in the retirement table */
+				fq = table_find_fq(p,
+						   be32_to_cpu(msg->fq.fqid));
+				BUG_ON(!fq);
+				fq_state_change(p, fq, &swapped_msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_FQPN:
+				/* Parked */
+				fq = (void *)(uintptr_t)
+					be32_to_cpu(msg->fq.contextB);
+				fq_state_change(p, fq, msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_DC_ERN:
+				/* DCP ERN */
+				if (p->cb_dc_ern)
+					p->cb_dc_ern(p, msg);
+				else if (cb_dc_ern)
+					cb_dc_ern(p, msg);
+				else {
+					static int warn_once;
+
+					if (!warn_once) {
+						pr_crit("Leaking DCP ERNs!\n");
+						warn_once = 1;
+					}
+				}
+				break;
+			default:
+				pr_crit("Invalid MR verb 0x%02x\n", verb);
+			}
+		} else {
+			/* It's a software ERN */
+			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+			fq->cb.ern(p, fq, &swapped_msg);
+		}
+		num++;
+		qm_mr_next(&p->p);
+		goto mr_loop;
+mr_done:
+		qm_mr_cci_consume(&p->p, num);
+	}
+	/*
+	 * QM_PIRQ_CSCI/CCSCI has already been cleared, as part of its specific
+	 * processing. If that interrupt source has meanwhile been re-asserted,
+	 * we mustn't clear it here (or in the top-level interrupt handler).
+	 */
+	return is & (QM_PIRQ_EQCI | QM_PIRQ_EQRI | QM_PIRQ_MRI);
+}
+
+/*
+ * remove some slowish-path stuff from the "fast path" and make sure it isn't
+ * inlined.
+ */
+static noinline void clear_vdqcr(struct qman_portal *p, struct qman_fq *fq)
+{
+	p->vdqcr_owned = NULL;
+	FQLOCK(fq);
+	fq_clear(fq, QMAN_FQ_STATE_VDQCR);
+	FQUNLOCK(fq);
+	wake_up(&affine_queue);
+}
+
+/*
+ * The only states that would conflict with other things if they ran at the
+ * same time on the same cpu are:
+ *
+ *   (i) setting/clearing vdqcr_owned, and
+ *  (ii) clearing the NE (Not Empty) flag.
+ *
+ * Both are safe because:
+ *
+ *   (i) this clearing can only occur after qman_set_vdq() has set the
+ *	 vdqcr_owned field (which it does before setting VDQCR), and
+ *	 qman_volatile_dequeue() blocks interrupts and preemption while this is
+ *	 done so that we can't interfere.
+ *  (ii) the NE flag is only cleared after qman_retire_fq() has set it, and as
+ *	 with (i) that API prevents us from interfering until it's safe.
+ *
+ * The good thing is that qman_set_vdq() and qman_retire_fq() run far
+ * less frequently (ie. per-FQ) than __poll_portal_fast() does, so the net
+ * advantage comes from this function not having to "lock" anything at all.
+ *
+ * Note also that the callbacks are invoked at points which are safe against the
+ * above potential conflicts, but that this function itself is not re-entrant
+ * (this is because the function tracks one end of each FIFO in the portal and
+ * we do *not* want to lock that). So the consequence is that it is safe for
+ * user callbacks to call into any QMan API.
+ */
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit)
+{
+	const struct qm_dqrr_entry *dq;
+	struct qman_fq *fq;
+	enum qman_cb_dqrr_result res;
+	unsigned int limit = 0;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+	do {
+		qm_dqrr_pvb_update(&p->p);
+		dq = qm_dqrr_current(&p->p);
+		if (!dq)
+			break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	/* If running on an LE system the fields of the
+	 * dequeue entry must be swapped. Because the
+	 * QMan HW will ignore writes, the DQRR entry is
+	 * copied and the index is stored within the copy.
+	 */
+		shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+		*shadow = *dq;
+		dq = shadow;
+		shadow->fqid = be32_to_cpu(shadow->fqid);
+		shadow->contextB = be32_to_cpu(shadow->contextB);
+		shadow->seqnum = be16_to_cpu(shadow->seqnum);
+		hw_fd_to_cpu(&shadow->fd);
+#endif
+
+		if (dq->stat & QM_DQRR_STAT_UNSCHEDULED) {
+			/*
+			 * VDQCR: don't trust context_b as the FQ may have
+			 * been configured for h/w consumption and we're
+			 * draining it post-retirement.
+			 */
+			fq = p->vdqcr_owned;
+			/*
+			 * We only set QMAN_FQ_STATE_NE when retiring, so we
+			 * only need to check for clearing it when doing
+			 * volatile dequeues.  It's one less thing to check
+			 * in the critical path (SDQCR).
+			 */
+			if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+				fq_clear(fq, QMAN_FQ_STATE_NE);
+			/*
+			 * This is duplicated from the SDQCR code, but we
+			 * have stuff to do before *and* after this callback,
+			 * and we don't want multiple if()s in the critical
+			 * path (SDQCR).
+			 */
+			res = fq->cb.dqrr(p, fq, dq);
+			if (res == qman_cb_dqrr_stop)
+				break;
+			/* Check for VDQCR completion */
+			if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+				clear_vdqcr(p, fq);
+		} else {
+			/* SDQCR: context_b points to the FQ */
+			fq = (void *)(uintptr_t)dq->contextB;
+			/* Now let the callback do its stuff */
+			res = fq->cb.dqrr(p, fq, dq);
+			/*
+			 * The callback can request that we exit without
+			 * consuming this entry or advancing.
+			 */
+			if (res == qman_cb_dqrr_stop)
+				break;
+		}
+		/* Interpret 'dq' from a driver perspective. */
+		/*
+		 * Parking isn't possible unless HELDACTIVE was set. NB,
+		 * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+		 * check for HELDACTIVE to cover both.
+		 */
+		DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+			    (res != qman_cb_dqrr_park));
+		/* just means "skip it, I'll consume it myself later on" */
+		if (res != qman_cb_dqrr_defer)
+			qm_dqrr_cdc_consume_1ptr(&p->p, dq,
+						 res == qman_cb_dqrr_park);
+		/* Move forward */
+		qm_dqrr_next(&p->p);
+		/*
+		 * Entry processed and consumed, increment our counter.  The
+		 * callback can request that we exit after consuming the
+		 * entry, and we also exit if we reach our processing limit,
+		 * so loop back only if neither of these conditions is met.
+		 */
+	} while (++limit < poll_limit && res != qman_cb_dqrr_consume_stop);
+
+	return limit;
+}
+
+u16 qman_affine_channel(int cpu)
+{
+	if (cpu < 0) {
+		struct qman_portal *portal = get_affine_portal();
+
+		cpu = portal->config->cpu;
+	}
+	BUG_ON(!CPU_ISSET(cpu, &affine_mask));
+	return affine_channels[cpu];
+}
+
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
+{
+	struct qman_portal *p = get_affine_portal();
+	const struct qm_dqrr_entry *dq;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+
+	qm_dqrr_pvb_update(&p->p);
+	dq = qm_dqrr_current(&p->p);
+	if (!dq)
+		return NULL;
+
+	if (!(dq->stat & QM_DQRR_STAT_FD_VALID)) {
+		/* Invalid DQRR - put the portal and consume the DQRR.
+		 * Return NULL to user as no packet is seen.
+		 */
+		qman_dqrr_consume(fq, (struct qm_dqrr_entry *)dq);
+		return NULL;
+	}
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+	*shadow = *dq;
+	dq = shadow;
+	shadow->fqid = be32_to_cpu(shadow->fqid);
+	shadow->contextB = be32_to_cpu(shadow->contextB);
+	shadow->seqnum = be16_to_cpu(shadow->seqnum);
+	hw_fd_to_cpu(&shadow->fd);
+#endif
+
+	if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+		fq_clear(fq, QMAN_FQ_STATE_NE);
+
+	return (struct qm_dqrr_entry *)dq;
+}
+
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+		clear_vdqcr(p, fq);
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, 0);
+	qm_dqrr_next(&p->p);
+}
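+
+/* Illustrative usage (sketch): qman_dequeue() and qman_dqrr_consume() are
+ * meant to be used as a pair, with a hypothetical handler in between;
+ *
+ *	struct qm_dqrr_entry *dq = qman_dequeue(fq);
+ *
+ *	if (dq) {
+ *		handle_fd(&dq->fd);
+ *		qman_dqrr_consume(fq, dq);
+ *	}
+ */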
+
+int qman_poll_dqrr(unsigned int limit)
+{
+	struct qman_portal *p = get_affine_portal();
+	int ret;
+
+	ret = __poll_portal_fast(p, limit);
+	return ret;
+}
+
+void qman_poll(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if ((~p->irq_sources) & QM_PIRQ_SLOW) {
+		if (!(p->slowpoll--)) {
+			u32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;
+			u32 active = __poll_portal_slow(p, is);
+
+			if (active) {
+				qm_isr_status_clear(&p->p, active);
+				p->slowpoll = SLOW_POLL_BUSY;
+			} else
+				p->slowpoll = SLOW_POLL_IDLE;
+		}
+	}
+	if ((~p->irq_sources) & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+}
+
+void qman_stop_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qman_stop_dequeues_ex(p);
+}
+
+void qman_start_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	DPAA_ASSERT(p->dqrr_disable_ref > 0);
+	if (!(--p->dqrr_disable_ref))
+		qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
+}
+
+void qman_static_dequeue_add(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr |= pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+void qman_static_dequeue_del(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr &= ~pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+u32 qman_static_dequeue_get(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->sdqcr;
+}
+
+void qman_dca(struct qm_dqrr_entry *dq, int park_request)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);
+}
+
+/* Frame queue API */
+static const char *mcr_result_str(u8 result)
+{
+	switch (result) {
+	case QM_MCR_RESULT_NULL:
+		return "QM_MCR_RESULT_NULL";
+	case QM_MCR_RESULT_OK:
+		return "QM_MCR_RESULT_OK";
+	case QM_MCR_RESULT_ERR_FQID:
+		return "QM_MCR_RESULT_ERR_FQID";
+	case QM_MCR_RESULT_ERR_FQSTATE:
+		return "QM_MCR_RESULT_ERR_FQSTATE";
+	case QM_MCR_RESULT_ERR_NOTEMPTY:
+		return "QM_MCR_RESULT_ERR_NOTEMPTY";
+	case QM_MCR_RESULT_PENDING:
+		return "QM_MCR_RESULT_PENDING";
+	case QM_MCR_RESULT_ERR_BADCOMMAND:
+		return "QM_MCR_RESULT_ERR_BADCOMMAND";
+	}
+	return "<unknown MCR result>";
+}
+
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
+{
+	struct qm_fqd fqd;
+	struct qm_mcr_queryfq_np np;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID) {
+		int ret = qman_alloc_fqid(&fqid);
+
+		if (ret)
+			return ret;
+	}
+	spin_lock_init(&fq->fqlock);
+	fq->fqid = fqid;
+	fq->flags = flags;
+	fq->state = qman_fq_state_oos;
+	fq->cgr_groupid = 0;
+
+	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
+		return 0;
+	/* Everything else is AS_IS support */
+	p = get_affine_portal();
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(&fqd);
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ_NP);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ_NP failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	np = mcr->queryfq_np;
+	/* Phew, have queryfq and queryfq_np results, stitch together
+	 * the FQ object from those.
+	 */
+	fq->cgr_groupid = fqd.cgid;
+	switch (np.state & QM_MCR_NP_STATE_MASK) {
+	case QM_MCR_NP_STATE_OOS:
+		break;
+	case QM_MCR_NP_STATE_RETIRED:
+		fq->state = qman_fq_state_retired;
+		if (np.frm_cnt)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		break;
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+		fq->state = qman_fq_state_sched;
+		if (np.state & QM_MCR_NP_STATE_R)
+			fq_set(fq, QMAN_FQ_STATE_CHANGING);
+		break;
+	case QM_MCR_NP_STATE_PARKED:
+		fq->state = qman_fq_state_parked;
+		break;
+	default:
+		DPAA_ASSERT(NULL == "invalid FQ state");
+	}
+	if (fqd.fq_ctrl & QM_FQCTRL_CGE)
+		fq->state |= QMAN_FQ_STATE_CGR_EN;
+	return 0;
+err:
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID)
+		qman_release_fqid(fqid);
+	return -EIO;
+}
+
+void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
+{
+	/*
+	 * We don't need to lock the FQ as it is a pre-condition that the FQ be
+	 * quiesced. Instead, run some checks.
+	 */
+	switch (fq->state) {
+	case qman_fq_state_parked:
+		DPAA_ASSERT(flags & QMAN_FQ_DESTROY_PARKED);
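+		/* fall through: a parked FQ is released the same way as OOS */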
+	case qman_fq_state_oos:
+		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
+			qman_release_fqid(fq->fqid);
+
+		return;
+	default:
+		break;
+	}
+	DPAA_ASSERT(NULL == "qman_destroy_fq() on unquiesced FQ!");
+}
+
+u32 qman_fq_fqid(struct qman_fq *fq)
+{
+	return fq->fqid;
+}
+
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags)
+{
+	if (state)
+		*state = fq->state;
+	if (flags)
+		*flags = fq->flags;
+}
+
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	u8 res, myverb = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		QM_MCC_VERB_INITFQ_SCHED : QM_MCC_VERB_INITFQ_PARKED;
+
+	if ((fq->state != qman_fq_state_oos) &&
+	    (fq->state != qman_fq_state_parked))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	if (opts && (opts->we_mask & QM_INITFQ_WE_OAC)) {
+		/* And can't be set at the same time as TDTHRESH */
+		if (opts->we_mask & QM_INITFQ_WE_TDTHRESH)
+			return -EINVAL;
+	}
+	/* Issue an INITFQ_[PARKED|SCHED] management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     ((fq->state != qman_fq_state_oos) &&
+				(fq->state != qman_fq_state_parked)))) {
+		FQUNLOCK(fq);
+		return -EBUSY;
+	}
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initfq = *opts;
+	mcc->initfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->initfq.count = 0;
+	/*
+	 * If the FQ does *not* have the TO_DCPORTAL flag, context_b is set as a
+	 * demux pointer. Otherwise, the caller-provided value is allowed to
+	 * stand, don't overwrite it.
+	 */
+	if (fq_isclear(fq, QMAN_FQ_FLAG_TO_DCPORTAL)) {
+		dma_addr_t phys_fq;
+
+		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+		/*
+		 *  and the physical address - NB, if the user wasn't trying to
+		 * set CONTEXTA, clear the stashing settings.
+		 */
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_CONTEXTA)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			memset(&mcc->initfq.fqd.context_a, 0,
+			       sizeof(mcc->initfq.fqd.context_a));
+		} else {
+			phys_fq = rte_mem_virt2phy(fq);
+			qm_fqd_stashing_set64(&mcc->initfq.fqd, phys_fq);
+		}
+	}
+	if (flags & QMAN_INITFQ_FLAG_LOCAL) {
+		mcc->initfq.fqd.dest.channel = p->config->channel;
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_DESTWQ)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_DESTWQ;
+			mcc->initfq.fqd.dest.wq = 4;
+		}
+	}
+	mcc->initfq.we_mask = cpu_to_be16(mcc->initfq.we_mask);
+	cpu_to_hw_fqd(&mcc->initfq.fqd);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		FQUNLOCK(fq);
+		return -EIO;
+	}
+	if (opts) {
+		if (opts->we_mask & QM_INITFQ_WE_FQCTRL) {
+			if (opts->fqd.fq_ctrl & QM_FQCTRL_CGE)
+				fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+			else
+				fq_clear(fq, QMAN_FQ_STATE_CGR_EN);
+		}
+		if (opts->we_mask & QM_INITFQ_WE_CGID)
+			fq->cgr_groupid = opts->fqd.cgid;
+	}
+	fq->state = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		qman_fq_state_sched : qman_fq_state_parked;
+	FQUNLOCK(fq);
+	return 0;
+}
+
+int qman_schedule_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_parked)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue an ALTERFQ_SCHED management command */
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state != qman_fq_state_parked))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_sched;
+out:
+	FQUNLOCK(fq);
+
+	return ret;
+}
+
+int qman_retire_fq(struct qman_fq *fq, u32 *flags)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int rval;
+	u8 res;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_sched))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_retired) ||
+				(fq->state == qman_fq_state_oos))) {
+		rval = -EBUSY;
+		goto out;
+	}
+	rval = table_push_fq(p, fq);
+	if (rval)
+		goto out;
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_RETIRE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_RETIRE);
+	res = mcr->result;
+	/*
+	 * "Elegant" would be to treat OK/PENDING the same way; set CHANGING,
+	 * and defer the flags until FQRNI or FQRN (respectively) show up. But
+	 * "Friendly" is to process OK immediately, and not set CHANGING. We do
+	 * friendly, otherwise the caller doesn't necessarily have a fully
+	 * "retired" FQ on return even if the retirement was immediate. However
+	 * this does mean some code duplication between here and
+	 * fq_state_change().
+	 */
+	if (likely(res == QM_MCR_RESULT_OK)) {
+		rval = 0;
+		/* Process 'fq' right away, we'll ignore FQRNI */
+		if (mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		if (flags)
+			*flags = fq->flags;
+		fq->state = qman_fq_state_retired;
+		if (fq->cb.fqs) {
+			/*
+			 * Another issue with supporting "immediate" retirement
+			 * is that we're forced to drop FQRNIs, because by the
+			 * time they're seen it may already be "too late" (the
+			 * fq may have been OOS'd and free()'d already). But if
+			 * the upper layer wants a callback whether it's
+			 * immediate or not, we have to fake a "MR" entry to
+			 * look like an FQRNI...
+			 */
+			struct qm_mr_entry msg;
+
+			msg.verb = QM_MR_VERB_FQRNI;
+			msg.fq.fqs = mcr->alterfq.fqs;
+			msg.fq.fqid = fq->fqid;
+			msg.fq.contextB = (u32)(uintptr_t)fq;
+			fq->cb.fqs(p, fq, &msg);
+		}
+	} else if (res == QM_MCR_RESULT_PENDING) {
+		rval = 1;
+		fq_set(fq, QMAN_FQ_STATE_CHANGING);
+	} else {
+		rval = -EIO;
+		table_del_fq(p, fq);
+	}
+out:
+	FQUNLOCK(fq);
+	return rval;
+}
+
+int qman_oos_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_retired)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_BLOCKOOS)) ||
+		     (fq->state != qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_OOS);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_OOS);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_oos;
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_fq_flow_control(struct qman_fq *fq, int xon)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+	u8 myverb;
+
+	if ((fq->state == qman_fq_state_oos) ||
+	    (fq->state == qman_fq_state_retired) ||
+		(fq->state == qman_fq_state_parked))
+		return -EINVAL;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue an ALTER_FQXON or ALTER_FQXOFF management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_parked) ||
+			(fq->state == qman_fq_state_oos) ||
+			(fq->state == qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = fq->fqid;
+	mcc->alterfq.count = 0;
+	myverb = xon ? QM_MCC_VERB_ALTER_FQXON : QM_MCC_VERB_ALTER_FQXOFF;
+
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	*fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(fqd);
+	return 0;
+}
+
+int qman_query_fq_has_pkts(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	int ret = 0;
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		ret = !!mcr->queryfq_np.frm_cnt;
+	return ret;
+}
+
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		*np = mcr->queryfq_np;
+		np->fqd_link = be24_to_cpu(np->fqd_link);
+		np->odp_seq = be16_to_cpu(np->odp_seq);
+		np->orp_nesn = be16_to_cpu(np->orp_nesn);
+		np->orp_ea_hseq  = be16_to_cpu(np->orp_ea_hseq);
+		np->orp_ea_tseq  = be16_to_cpu(np->orp_ea_tseq);
+		np->orp_ea_hptr = be24_to_cpu(np->orp_ea_hptr);
+		np->orp_ea_tptr = be24_to_cpu(np->orp_ea_tptr);
+		np->pfdr_hptr = be24_to_cpu(np->pfdr_hptr);
+		np->pfdr_tptr = be24_to_cpu(np->pfdr_tptr);
+		np->ics_surp = be16_to_cpu(np->ics_surp);
+		np->byte_cnt = be32_to_cpu(np->byte_cnt);
+		np->frm_cnt = be24_to_cpu(np->frm_cnt);
+		np->ra1_sfdr = be16_to_cpu(np->ra1_sfdr);
+		np->ra2_sfdr = be16_to_cpu(np->ra2_sfdr);
+		np->od1_sfdr = be16_to_cpu(np->od1_sfdr);
+		np->od2_sfdr = be16_to_cpu(np->od2_sfdr);
+		np->od3_sfdr = be16_to_cpu(np->od3_sfdr);
+	}
+	if (res == QM_MCR_RESULT_ERR_FQID)
+		return -ERANGE;
+	else if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	return 0;
+}
+
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res, myverb;
+
+	myverb = (query_dedicated) ? QM_MCR_VERB_QUERYWQ_DEDICATED :
+				 QM_MCR_VERB_QUERYWQ;
+	mcc = qm_mc_start(&p->p);
+	mcc->querywq.channel.id = cpu_to_be16(wq->channel.id);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		int i, array_len;
+
+		wq->channel.id = be16_to_cpu(mcr->querywq.channel.id);
+		array_len = ARRAY_SIZE(mcr->querywq.wq_len);
+		for (i = 0; i < array_len; i++)
+			wq->wq_len[i] = be32_to_cpu(mcr->querywq.wq_len[i]);
+	}
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERYWQ failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->cgrtestwrite.cgid = cgr->cgrid;
+	mcc->cgrtestwrite.i_bcnt_hi = (u8)(i_bcnt >> 32);
+	mcc->cgrtestwrite.i_bcnt_lo = (u32)i_bcnt;
+	qm_mc_commit(&p->p, QM_MCC_VERB_CGRTESTWRITE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_CGRTESTWRITE);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*result = mcr->cgrtestwrite;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("CGR TEST WRITE failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->querycgr.cgid = cgr->cgrid;
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCGR);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYCGR);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*cgrd = mcr->querycgr;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CGR failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	cgrd->cgr.wr_parm_g.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_g.word);
+	cgrd->cgr.wr_parm_y.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_y.word);
+	cgrd->cgr.wr_parm_r.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_r.word);
+	cgrd->cgr.cscn_targ =  be32_to_cpu(cgrd->cgr.cscn_targ);
+	cgrd->cgr.__cs_thres = be16_to_cpu(cgrd->cgr.__cs_thres);
+	for (i = 0; i < ARRAY_SIZE(cgrd->cscn_targ_swp); i++)
+		cgrd->cscn_targ_swp[i] =
+			be32_to_cpu(cgrd->cscn_targ_swp[i]);
+	return 0;
+}
+
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion)
+{
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	qm_mc_start(&p->p);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			QM_MCC_VERB_QUERYCONGESTION);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*congestion = mcr->querycongestion;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CONGESTION failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	for (i = 0; i < ARRAY_SIZE(congestion->state.state); i++)
+		congestion->state.state[i] =
+			be32_to_cpu(congestion->state.state[i]);
+	return 0;
+}
+
+int qman_set_vdq(struct qman_fq *fq, u16 num)
+{
+	struct qman_portal *p = get_affine_portal();
+	uint32_t vdqcr;
+	int ret = -EBUSY;
+
+	vdqcr = QM_VDQCR_EXACT;
+	vdqcr |= QM_VDQCR_NUMFRAMES_SET(num);
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired)) {
+		ret = -EINVAL;
+		goto out;
+	}
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+		ret = -EBUSY;
+		goto out;
+	}
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (!ret)
+		qm_dqrr_vdqcr_set(&p->p, vdqcr);
+
+out:
+	return ret;
+}
+
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
+			  u32 vdqcr)
+{
+	struct qman_portal *p;
+	int ret = -EBUSY;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired))
+		return -EINVAL;
+	if (vdqcr & QM_VDQCR_FQID_MASK)
+		return -EINVAL;
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+		return -EBUSY;
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	p = get_affine_portal();
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (ret)
+		return ret;
+
+	/* VDQCR is set */
+	qm_dqrr_vdqcr_set(&p->p, vdqcr);
+	return 0;
+}
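+
+/*
+ * Illustrative call (a sketch): drain up to 3 frames from a retired or
+ * parked FQ, composing the VDQCR word from the helpers in fsl_qman.h:
+ *
+ *	ret = qman_volatile_dequeue(fq, 0,
+ *		QM_VDQCR_NUMFRAMES_SET(3) | QM_VDQCR_EXACT);
+ */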
+
+static noinline void update_eqcr_ci(struct qman_portal *p, u8 avail)
+{
+	if (avail)
+		qm_eqcr_cce_prefetch(&p->p);
+	else
+		qm_eqcr_cce_update(&p->p);
+}
+
+int qman_eqcr_is_empty(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	u8 avail;
+
+	update_eqcr_ci(p, 0);
+	avail = qm_eqcr_get_fill(&p->p);
+	return (avail == 0);
+}
+
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine)
+{
+	if (affine) {
+		struct qman_portal *p = get_affine_portal();
+
+		p->cb_dc_ern = handler;
+	} else {
+		cb_dc_ern = handler;
+	}
+}
+
+static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
+					struct qman_fq *fq,
+					const struct qm_fd *fd,
+					u32 flags)
+{
+	struct qm_eqcr_entry *eq;
+	u8 avail;
+
+	if (p->use_eqcr_ci_stashing) {
+		/*
+		 * The stashing case is easy, only update if we need to in
+		 * order to try and liberate ring entries.
+		 */
+		eq = qm_eqcr_start_stash(&p->p);
+	} else {
+		/*
+		 * The non-stashing case is harder, need to prefetch ahead of
+		 * time.
+		 */
+		avail = qm_eqcr_get_avail(&p->p);
+		if (avail < 2)
+			update_eqcr_ci(p, avail);
+		eq = qm_eqcr_start_no_stash(&p->p);
+	}
+
+	if (unlikely(!eq))
+		return NULL;
+
+	if (flags & QMAN_ENQUEUE_FLAG_DCA)
+		eq->dca = QM_EQCR_DCA_ENABLE |
+			((flags & QMAN_ENQUEUE_FLAG_DCA_PARK) ?
+					QM_EQCR_DCA_PARK : 0) |
+			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
+	eq->fqid = cpu_to_be32(fq->fqid);
+	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+	eq->fd = *fd;
+	cpu_to_hw_fd(&eq->fd);
+	return eq;
+}
+
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_CMD_ENQUEUE |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+	/* Factor the below out, it's used from qman_enqueue_orp() too */
+	return 0;
+}
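+
+/*
+ * Illustrative single-frame enqueue (a sketch): -EBUSY means the EQCR is
+ * currently full, so callers typically retry:
+ *
+ *	while (qman_enqueue(fq, &fd, 0) == -EBUSY)
+ *		cpu_relax();
+ */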
+
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_portal *portal = &p->p;
+
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+
+	u8 i, diff, old_ci, sent = 0;
+
+	/* Update the available entries if no entry is free */
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return 0;
+	}
+
+	/* try to send as many frames as possible */
+	while (eqcr->available && frames_to_send--) {
+		eq->fqid = cpu_to_be32(fq->fqid);
+		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+		eq->fd.opaque_addr = fd->opaque_addr;
+		eq->fd.addr = cpu_to_be40(fd->addr);
+		eq->fd.status = cpu_to_be32(fd->status);
+		eq->fd.opaque = cpu_to_be32(fd->opaque);
+
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		eqcr->available--;
+		sent++;
+		fd++;
+	}
+	lwsync();
+
+	/* In order for the flushes below to complete faster, the verb bytes
+	 * for all entries are written in a separate pass.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		eq->__dont_write_directly__verb =
+			QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
+		prev_eq = eq;
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		if (unlikely((prev_eq + 1) != eq))
+			eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+	}
+
+	/* We need to flush all the lines but without load/store operations
+	 * between them.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		dcbf(eq);
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+	}
+	/* Update cursor for the next call */
+	eqcr->cursor = eq;
+	return sent;
+}
+
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum)
+{
+	struct qman_portal *p  = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Process ORP-specifics here */
+	if (flags & QMAN_ENQUEUE_FLAG_NLIS) {
+		orp_seqnum |= QM_EQCR_SEQNUM_NLIS;
+	} else {
+		orp_seqnum &= ~QM_EQCR_SEQNUM_NLIS;
+		if (flags & QMAN_ENQUEUE_FLAG_NESN)
+			orp_seqnum |= QM_EQCR_SEQNUM_NESN;
+		else
+			/* No need to check for QMAN_ENQUEUE_FLAG_HOLE */
+			orp_seqnum &= ~QM_EQCR_SEQNUM_NESN;
+	}
+	eq->seqnum = cpu_to_be16(orp_seqnum);
+	eq->orp = cpu_to_be32(orp->fqid);
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_ORP |
+		((flags & (QMAN_ENQUEUE_FLAG_HOLE | QMAN_ENQUEUE_FLAG_NESN)) ?
+				0 : QM_EQCR_VERB_CMD_ENQUEUE) |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+
+	return 0;
+}
+
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+	u8 verb = QM_MCC_VERB_MODIFYCGR;
+
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initcgr = *opts;
+	mcc->initcgr.we_mask = cpu_to_be16(mcc->initcgr.we_mask);
+	mcc->initcgr.cgr.wr_parm_g.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_g.word);
+	mcc->initcgr.cgr.wr_parm_y.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_y.word);
+	mcc->initcgr.cgr.wr_parm_r.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_r.word);
+	mcc->initcgr.cgr.cscn_targ =  cpu_to_be32(mcc->initcgr.cgr.cscn_targ);
+	mcc->initcgr.cgr.__cs_thres = cpu_to_be16(mcc->initcgr.cgr.__cs_thres);
+
+	mcc->initcgr.cgid = cgr->cgrid;
+	if (flags & QMAN_CGR_FLAG_USE_INIT)
+		verb = QM_MCC_VERB_INITCGR;
+	qm_mc_commit(&p->p, verb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == verb);
+	res = mcr->result;
+	return (res == QM_MCR_RESULT_OK) ? 0 : -EIO;
+}
+
+#define TARG_MASK(n) (0x80000000 >> ((n)->config->channel - \
+					QM_CHANNEL_SWPORTAL0))
+#define TARG_DCP_MASK(n) (0x80000000 >> (10 + (n)))
+#define PORTAL_IDX(n) ((n)->config->channel - QM_CHANNEL_SWPORTAL0)
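+/* e.g. for a portal on channel QM_CHANNEL_SWPORTAL0 + 3, PORTAL_IDX is 3
+ * and TARG_MASK is 0x80000000 >> 3 == 0x10000000.
+ */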
+
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret;
+	struct qman_portal *p;
+
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	p = get_affine_portal();
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	cgr->chan = p->config->channel;
+	spin_lock(&p->cgr_lock);
+
+	/* if no opts specified, just add it to the list */
+	if (!opts)
+		goto add_list;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		goto release_lock;
+	local_opts = *opts;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+			QM_CGR_TARG_UDP_CTRL_WRITE_BIT | PORTAL_IDX(p);
+	else
+		/* Overwrite TARG */
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+							TARG_MASK(p);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (flags & QMAN_CGR_FLAG_USE_INIT)
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT, &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		goto release_lock;
+add_list:
+	list_add(&cgr->node, &p->cgr_cbs);
+
+	/* Determine if the newly added object requires its callback to run */
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret) {
+		/* we can't go back, so proceed and return success, but scream
+		 * and wail to the log file.
+		 */
+		pr_crit("CGR HW state partially modified\n");
+		ret = 0;
+		goto release_lock;
+	}
+	if (cgr->cb && cgr_state.cgr.cscn_en && qman_cgrs_get(&p->cgrs[1],
+							      cgr->cgrid))
+		cgr->cb(p, cgr, 1);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+	return ret;
+}
+
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcc_initcgr local_opts;
+	struct qm_mcr_querycgr cgr_state;
+	int ret;
+
+	if ((qman_ip_rev & 0xFF00) < QMAN_REV30) {
+		pr_warn("QMan version doesn't support CSCN => DCP portal\n");
+		return -EINVAL;
+	}
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		return ret;
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	if (opts)
+		local_opts = *opts;
+
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+				QM_CGR_TARG_UDP_CTRL_WRITE_BIT |
+				QM_CGR_TARG_UDP_CTRL_DCP | dcp_portal;
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+					TARG_DCP_MASK(dcp_portal);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT,
+				      &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+
+	return ret;
+}
+
+int qman_delete_cgr(struct qman_cgr *cgr)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret = 0;
+	struct qman_cgr *i;
+	struct qman_portal *p = get_affine_portal();
+
+	if (cgr->chan != p->config->channel) {
+		pr_crit("Attempting to delete cgr from different portal than"
+			" it was create: create 0x%x, delete 0x%x\n",
+			cgr->chan, p->config->channel);
+		ret = -EINVAL;
+		goto put_portal;
+	}
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	spin_lock(&p->cgr_lock);
+	list_del(&cgr->node);
+	/*
+	 * If there are no other CGR objects for this CGRID in the list,
+	 * update CSCN_TARG accordingly
+	 */
+	list_for_each_entry(i, &p->cgr_cbs, node)
+		if ((i->cgrid == cgr->cgrid) && i->cb)
+			goto release_lock;
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)  {
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+		goto release_lock;
+	}
+	/* Overwrite TARG */
+	local_opts.we_mask = QM_CGR_WE_CSCN_TARG;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl = PORTAL_IDX(p);
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ &
+							 ~(TARG_MASK(p));
+	ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+put_portal:
+	return ret;
+}
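+
+/*
+ * Illustrative CGR setup and teardown (a sketch; "my_cscn_cb" stands for a
+ * caller-supplied congestion-state-change callback). Note that
+ * qman_delete_cgr() must run on the portal the CGR was created on, as the
+ * check above enforces:
+ *
+ *	struct qman_cgr cgr = { .cgrid = id, .cb = my_cscn_cb };
+ *
+ *	qman_create_cgr(&cgr, QMAN_CGR_FLAG_USE_INIT, &opts);
+ *	...
+ *	qman_delete_cgr(&cgr);
+ */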
+
+int qman_shutdown_fq(u32 fqid)
+{
+	struct qman_portal *p;
+	struct qm_portal *low_p;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	u8 state;
+	int orl_empty, fq_empty, drain = 0;
+	u32 result;
+	u32 channel, wq;
+	u16 dest_wq;
+
+	p = get_affine_portal();
+	low_p = &p->p;
+
+	/* Determine the state of the FQID */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	state = mcr->queryfq_np.state & QM_MCR_NP_STATE_MASK;
+	if (state == QM_MCR_NP_STATE_OOS)
+		return 0; /* Already OOS, no need to do any more checks */
+
+	/* Query which channel the FQ is using */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+
+	/* Need to store these since the MCR gets reused */
+	dest_wq = be16_to_cpu(mcr->queryfq.fqd.dest_wq);
+	wq = dest_wq & 0x7;
+	channel = dest_wq >> 3;
+
+	switch (state) {
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+	case QM_MCR_NP_STATE_PARKED:
+		orl_empty = 0;
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_RETIRE);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_RETIRE);
+		result = mcr->result; /* Make a copy as we reuse MCR below */
+
+		if (result == QM_MCR_RESULT_PENDING) {
+			/* Need to wait for the FQRN in the message ring, which
+			 * will only occur once the FQ has been drained. In
+			 * order for the FQ to drain, the portal needs to be
+			 * set to dequeue from the channel the FQ is scheduled
+			 * on.
+			 */
+			const struct qm_mr_entry *msg;
+			const struct qm_dqrr_entry *dqrr = NULL;
+			int found_fqrn = 0;
+			__maybe_unused u16 dequeue_wq = 0;
+
+			/* Flag that we need to drain FQ */
+			drain = 1;
+
+			if (channel >= qm_channel_pool1 &&
+			    channel < (u16)(qm_channel_pool1 + 15)) {
+				/* Pool channel, enable the bit in the portal */
+				dequeue_wq = (channel -
+					      qm_channel_pool1 + 1) << 4 | wq;
+			} else if (channel < qm_channel_pool1) {
+				/* Dedicated channel */
+				dequeue_wq = wq;
+			} else {
+				pr_info("Cannot recover FQ 0x%x,"
+					" it is scheduled on channel 0x%x",
+					fqid, channel);
+				return -EBUSY;
+			}
+			/* Set the sdqcr to drain this channel */
+			if (channel < qm_channel_pool1)
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+					  QM_SDQCR_CHANNELS_DEDICATED);
+			else
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+						  QM_SDQCR_CHANNELS_POOL_CONV
+						  (channel));
+			while (!found_fqrn) {
+				/* Keep draining DQRR while checking the MR*/
+				qm_dqrr_pvb_update(low_p);
+				dqrr = qm_dqrr_current(low_p);
+				while (dqrr) {
+					qm_dqrr_cdc_consume_1ptr(
+						low_p, dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+				/* Process message ring too */
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+				while (msg) {
+					if ((msg->verb &
+					     QM_MR_VERB_TYPE_MASK)
+					    == QM_MR_VERB_FQRN)
+						found_fqrn = 1;
+					qm_mr_next(low_p);
+					qm_mr_cci_consume_to_current(low_p);
+					qm_mr_pvb_update(low_p);
+					msg = qm_mr_current(low_p);
+				}
+				cpu_relax();
+			}
+		}
+		if (result != QM_MCR_RESULT_OK &&
+		    result !=  QM_MCR_RESULT_PENDING) {
+			/* error */
+			pr_err("qman_retire_fq failed on FQ 0x%x,"
+			       " result=0x%x\n", fqid, result);
+			return -1;
+		}
+		if (!(mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)) {
+			/* ORL had no entries, no need to wait until the
+			 * ERNs come in.
+			 */
+			orl_empty = 1;
+		}
+		/* Retirement succeeded, check to see if FQ needs
+		 * to be drained.
+		 */
+		if (drain || mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY) {
+			/* FQ is Not Empty, drain using volatile DQ commands */
+			fq_empty = 0;
+			do {
+				const struct qm_dqrr_entry *dqrr = NULL;
+				u32 vdqcr = fqid | QM_VDQCR_NUMFRAMES_SET(3);
+
+				qm_dqrr_vdqcr_set(low_p, vdqcr);
+
+				/* Wait for a dequeue to occur */
+				while (dqrr == NULL) {
+					qm_dqrr_pvb_update(low_p);
+					dqrr = qm_dqrr_current(low_p);
+					if (!dqrr)
+						cpu_relax();
+				}
+				/* Process the dequeues, making sure to
+				 * empty the ring completely.
+				 */
+				while (dqrr) {
+					if (dqrr->fqid == fqid &&
+					    dqrr->stat & QM_DQRR_STAT_FQ_EMPTY)
+						fq_empty = 1;
+					qm_dqrr_cdc_consume_1ptr(low_p,
+								 dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+			} while (fq_empty == 0);
+		}
+		qm_dqrr_sdqcr_set(low_p, 0);
+
+		/* Wait for the ORL to have been completely drained */
+		while (orl_empty == 0) {
+			const struct qm_mr_entry *msg;
+
+			qm_mr_pvb_update(low_p);
+			msg = qm_mr_current(low_p);
+			while (msg) {
+				if ((msg->verb & QM_MR_VERB_TYPE_MASK) ==
+				    QM_MR_VERB_FQRL)
+					orl_empty = 1;
+				qm_mr_next(low_p);
+				qm_mr_cci_consume_to_current(low_p);
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+			}
+			cpu_relax();
+		}
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result != QM_MCR_RESULT_OK) {
+			pr_err(
+			"OOS after drain Failed on FQID 0x%x, result 0x%x\n",
+			       fqid, mcr->result);
+			return -1;
+		}
+		return 0;
+
+	case QM_MCR_NP_STATE_RETIRED:
+		/* Send OOS Command */
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result) {
+			pr_err("OOS Failed on FQID 0x%x\n", fqid);
+			return -1;
+		}
+		return 0;
+
+	}
+	return -1;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h
new file mode 100644
index 0000000..ee78d31
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.h
@@ -0,0 +1,888 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman_priv.h"
+
+/***************************/
+/* Portal register assists */
+/***************************/
+#define QM_REG_EQCR_PI_CINH	0x3000
+#define QM_REG_EQCR_CI_CINH	0x3040
+#define QM_REG_EQCR_ITR		0x3080
+#define QM_REG_DQRR_PI_CINH	0x3100
+#define QM_REG_DQRR_CI_CINH	0x3140
+#define QM_REG_DQRR_ITR		0x3180
+#define QM_REG_DQRR_DCAP	0x31C0
+#define QM_REG_DQRR_SDQCR	0x3200
+#define QM_REG_DQRR_VDQCR	0x3240
+#define QM_REG_DQRR_PDQCR	0x3280
+#define QM_REG_MR_PI_CINH	0x3300
+#define QM_REG_MR_CI_CINH	0x3340
+#define QM_REG_MR_ITR		0x3380
+#define QM_REG_CFG		0x3500
+#define QM_REG_ISR		0x3600
+#define QM_REG_IIR              0x36C0
+#define QM_REG_ITPR		0x3740
+
+/* Cache-enabled register offsets */
+#define QM_CL_EQCR		0x0000
+#define QM_CL_DQRR		0x1000
+#define QM_CL_MR		0x2000
+#define QM_CL_EQCR_PI_CENA	0x3000
+#define QM_CL_EQCR_CI_CENA	0x3040
+#define QM_CL_DQRR_PI_CENA	0x3100
+#define QM_CL_DQRR_CI_CENA	0x3140
+#define QM_CL_MR_PI_CENA	0x3300
+#define QM_CL_MR_CI_CENA	0x3340
+#define QM_CL_CR		0x3800
+#define QM_CL_RR0		0x3900
+#define QM_CL_RR1		0x3940
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * would simply degrade performance. Hence the use of the __raw_*()
+ * interfaces, which simply ensure that the compiler treats the portal
+ * registers as volatile (i.e. non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __qm_in(qm, o)		be32_to_cpu(__raw_readl((qm)->ci  + (o)))
+#define __qm_out(qm, o, val)	__raw_writel((cpu_to_be32(val)), \
+					     (qm)->ci + (o))
+#define qm_in(reg)		__qm_in(&portal->addr, QM_REG_##reg)
+#define qm_out(reg, val)	__qm_out(&portal->addr, QM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __qm_cl_touch_ro(qm, o) dcbt_ro((qm)->ce + (o))
+#define __qm_cl_touch_rw(qm, o) dcbt_rw((qm)->ce + (o))
+#define __qm_cl_in(qm, o)	be32_to_cpu(__raw_readl((qm)->ce + (o)))
+#define __qm_cl_out(qm, o, val) \
+	do { \
+		u32 *__tmpclout = (qm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __qm_cl_invalidate(qm, o) dccivac((qm)->ce + (o))
+#define qm_cl_touch_ro(reg) __qm_cl_touch_ro(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_touch_rw(reg) __qm_cl_touch_rw(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_in(reg)	    __qm_cl_in(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_out(reg, val) __qm_cl_out(&portal->addr, QM_CL_##reg##_CENA, val)
+#define qm_cl_invalidate(reg)\
+	__qm_cl_invalidate(&portal->addr, QM_CL_##reg##_CENA)
+
+/* Cache-enabled ring access */
+#define qm_cl(base, idx)	((void *)(base) + ((idx) << 6))
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 qm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
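+
+/* e.g. with ringsize 8, first 6 and last 2, the helper returns
+ * 8 + 2 - 6 = 4, i.e. the entries at indices 6, 7, 0 and 1.
+ */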
+
+/* Portal modes.
+ *   Enum types:
+ *     pmode == production mode
+ *     cmode == consumption mode
+ *     dmode == h/w dequeue mode
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate:
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ *     dc == DCA (Discrete Consumption Acknowledgment), DQRR-only
+ *   As for "enum qm_dqrr_dmode", it should be self-explanatory.
+ */
+enum qm_eqcr_pmode {		/* matches QCSP_CFG::EPM */
+	qm_eqcr_pci = 0,	/* PI index, cache-inhibited */
+	qm_eqcr_pce = 1,	/* PI index, cache-enabled */
+	qm_eqcr_pvb = 2		/* valid-bit */
+};
+
+enum qm_dqrr_dmode {		/* matches QCSP_CFG::DP */
+	qm_dqrr_dpush = 0,	/* SDQCR  + VDQCR */
+	qm_dqrr_dpull = 1	/* PDQCR */
+};
+
+enum qm_dqrr_pmode {		/* s/w-only */
+	qm_dqrr_pci,		/* reads DQRR_PI_CINH */
+	qm_dqrr_pce,		/* reads DQRR_PI_CENA */
+	qm_dqrr_pvb		/* reads valid-bit */
+};
+
+enum qm_dqrr_cmode {		/* matches QCSP_CFG::DCM */
+	qm_dqrr_cci = 0,	/* CI index, cache-inhibited */
+	qm_dqrr_cce = 1,	/* CI index, cache-enabled */
+	qm_dqrr_cdc = 2		/* Discrete Consumption Acknowledgment */
+};
+
+enum qm_mr_pmode {		/* s/w-only */
+	qm_mr_pci,		/* reads MR_PI_CINH */
+	qm_mr_pce,		/* reads MR_PI_CENA */
+	qm_mr_pvb		/* reads valid-bit */
+};
+
+enum qm_mr_cmode {		/* matches QCSP_CFG::MM */
+	qm_mr_cci = 0,		/* CI index, cache-inhibited */
+	qm_mr_cce = 1		/* CI index, cache-enabled */
+};
+
+/* ------------------------- */
+/* --- Portal structures --- */
+
+#define QM_EQCR_SIZE		8
+#define QM_DQRR_SIZE		16
+#define QM_MR_SIZE		8
+
+struct qm_eqcr {
+	struct qm_eqcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	u32 busy;
+	enum qm_eqcr_pmode pmode;
+#endif
+};
+
+struct qm_dqrr {
+	const struct qm_dqrr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum qm_dqrr_dmode dmode;
+	enum qm_dqrr_pmode pmode;
+	enum qm_dqrr_cmode cmode;
+#endif
+};
+
+struct qm_mr {
+	const struct qm_mr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum qm_mr_pmode pmode;
+	enum qm_mr_cmode cmode;
+#endif
+};
+
+struct qm_mc {
+	struct qm_mc_command *cr;
+	struct qm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum {
+		/* Can be _mc_start()ed */
+		qman_mc_idle,
+		/* Can be _mc_commit()ed or _mc_abort()ed */
+		qman_mc_user,
+		/* Can only be _mc_retry()ed */
+		qman_mc_hw
+	} state;
+#endif
+};
+
+#define QM_PORTAL_ALIGNMENT ____cacheline_aligned
+
+struct qm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct qm_portal {
+	struct qm_addr addr;
+	struct qm_eqcr eqcr;
+	struct qm_dqrr dqrr;
+	struct qm_mr mr;
+	struct qm_mc mc;
+} QM_PORTAL_ALIGNMENT;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define EQCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_EQCR_SIZE << 6)))
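+/* e.g. with QM_EQCR_SIZE 8 and 64-byte entries, the ring spans 512 bytes,
+ * so stepping one past the last entry sets bit 9 (the "carry"), which
+ * EQCR_CARRYCLEAR() clears to wrap back to entry 0.
+ */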
+
+extern dma_addr_t rte_mem_virt2phy(const void *addr);
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 EQCR_PTR2IDX(struct qm_eqcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_EQCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void EQCR_INC(struct qm_eqcr *eqcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates fast
+	 * code with essentially no branching overheads. We increment to the
+	 * next EQCR pointer and handle overflow and 'vbit'.
+	 */
+	struct qm_eqcr_entry *partial = eqcr->cursor + 1;
+
+	eqcr->cursor = EQCR_CARRYCLEAR(partial);
+	if (partial != eqcr->cursor)
+		eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_no_stash(struct qm_portal
+								 *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available)
+		return NULL;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 1;
+#endif
+
+	return eqcr->cursor;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_stash(struct qm_portal
+								*portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return NULL;
+	}
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 1;
+#endif
+	return eqcr->cursor;
+}
+
+static inline void qm_eqcr_abort(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_pend_and_next(
+					struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+	DPAA_ASSERT(eqcr->pmode != qm_eqcr_pvb);
+	if (eqcr->available == 1)
+		return NULL;
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcr->cursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	return eqcr->cursor;
+}
+
+#define EQCR_COMMIT_CHECKS(eqcr) \
+do { \
+	DPAA_ASSERT(eqcr->busy); \
+	DPAA_ASSERT(eqcr->cursor->orp == (eqcr->cursor->orp & 0x00ffffff)); \
+	DPAA_ASSERT(eqcr->cursor->fqid == (eqcr->cursor->fqid & 0x00ffffff)); \
+} while (0)
+
+static inline void qm_eqcr_pci_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pci);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	hwsync();
+	qm_out(EQCR_PI_CINH, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	qm_cl_invalidate(EQCR_PI);
+	qm_cl_touch_rw(EQCR_PI);
+}
+
+static inline void qm_eqcr_pce_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	lwsync();
+	qm_cl_out(EQCR_PI, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pvb_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eqcursor;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pvb);
+	lwsync();
+	eqcursor = eqcr->cursor;
+	eqcursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline u8 qm_eqcr_cci_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline void qm_eqcr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	qm_cl_touch_ro(EQCR_CI);
+}
+
+static inline u8 qm_eqcr_cce_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline u8 qm_eqcr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->ithresh;
+}
+
+static inline void qm_eqcr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	eqcr->ithresh = ithresh;
+	qm_out(EQCR_ITR, ithresh);
+}
+
+static inline u8 qm_eqcr_get_avail(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->available;
+}
+
+static inline u8 qm_eqcr_get_fill(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return QM_EQCR_SIZE - 1 - eqcr->available;
+}
+
+#define DQRR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_DQRR_SIZE << 6)))
+
+static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);
+}
+
+static inline const struct qm_dqrr_entry *DQRR_INC(
+						const struct qm_dqrr_entry *e)
+{
+	return DQRR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_dqrr_set_maxfill(struct qm_portal *portal, u8 mf)
+{
+	qm_out(CFG, (qm_in(CFG) & 0xff0fffff) |
+		((mf & (QM_DQRR_SIZE - 1)) << 20));
+}
+
+static inline const struct qm_dqrr_entry *qm_dqrr_current(
+						struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	if (!dqrr->fill)
+		return NULL;
+	return dqrr->cursor;
+}
+
+static inline u8 qm_dqrr_cursor(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return DQRR_PTR2IDX(dqrr->cursor);
+}
+
+static inline u8 qm_dqrr_next(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->fill);
+	dqrr->cursor = DQRR_INC(dqrr->cursor);
+	return --dqrr->fill;
+}
+
+static inline u8 qm_dqrr_pci_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pci);
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	qm_cl_invalidate(DQRR_PI);
+	qm_cl_touch_ro(DQRR_PI);
+}
+
+static inline u8 qm_dqrr_pce_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	dqrr->pi = qm_cl_in(DQRR_PI) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	const struct qm_dqrr_entry *res = qm_cl(dqrr->ring, dqrr->pi);
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
+		dqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1);
+		if (!dqrr->pi)
+			dqrr->vbit ^= QM_DQRR_VERB_VBIT;
+		dqrr->fill++;
+	}
+}
+
+static inline void qm_dqrr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_rw(DQRR_CI);
+}
+
+static inline void qm_dqrr_cce_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cdc_consume_1(struct qm_portal *portal, u8 idx,
+					 int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |	/* S */
+		((park ? 1 : 0) << 6) |	/* PK */
+		idx);			/* DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_1ptr(struct qm_portal *portal,
+					    const struct qm_dqrr_entry *dq,
+					int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 idx = DQRR_PTR2IDX(dq);
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* DQRR_DCAP::S */
+		((park ? 1 : 0) << 6) |		/* DQRR_DCAP::PK */
+		idx);				/* DQRR_DCAP::DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_n(struct qm_portal *portal, u16 bitmask)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (1 << 8) |		/* DQRR_DCAP::S */
+		((u32)bitmask << 16));		/* DQRR_DCAP::DCAP_CI */
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+}
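+
+/* e.g. qm_dqrr_cdc_consume_n(portal, 0x0003) acknowledges the DQRR entries
+ * at ring indices 0 and 1 in one shot (S=1 selects the bitmask form of the
+ * DCAP command).
+ */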
+
+static inline u8 qm_dqrr_cdc_cci(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+}
+
+static inline void qm_dqrr_cdc_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_ro(DQRR_CI);
+}
+
+static inline u8 qm_dqrr_cdc_cce(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_cl_in(DQRR_CI) & (QM_DQRR_SIZE - 1);
+}
+
+static inline u8 qm_dqrr_get_ci(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	return dqrr->ci;
+}
+
+static inline void qm_dqrr_park(struct qm_portal *portal, u8 idx)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		(idx & (QM_DQRR_SIZE - 1)));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_park_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		DQRR_PTR2IDX(dqrr->cursor));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_sdqcr_set(struct qm_portal *portal, u32 sdqcr)
+{
+	qm_out(DQRR_SDQCR, sdqcr);
+}
+
+static inline u32 qm_dqrr_sdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_SDQCR);
+}
+
+static inline void qm_dqrr_vdqcr_set(struct qm_portal *portal, u32 vdqcr)
+{
+	qm_out(DQRR_VDQCR, vdqcr);
+}
+
+static inline u32 qm_dqrr_vdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_VDQCR);
+}
+
+static inline u8 qm_dqrr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return dqrr->ithresh;
+}
+
+static inline void qm_dqrr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(DQRR_ITR, ithresh);
+}
+
+static inline u8 qm_dqrr_get_maxfill(struct qm_portal *portal)
+{
+	return (qm_in(CFG) & 0x00f00000) >> 20;
+}
+
+/* -------------- */
+/* --- MR API --- */
+
+#define MR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_MR_SIZE << 6)))
+
+static inline u8 MR_PTR2IDX(const struct qm_mr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_MR_SIZE - 1);
+}
+
+static inline const struct qm_mr_entry *MR_INC(const struct qm_mr_entry *e)
+{
+	return MR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_mr_finish(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (mr->ci != MR_PTR2IDX(mr->cursor))
+		pr_crit("Ignoring completed MR entries\n");
+}
+
+static inline const struct qm_mr_entry *qm_mr_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (!mr->fill)
+		return NULL;
+	return mr->cursor;
+}
+
+static inline u8 qm_mr_next(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->fill);
+	mr->cursor = MR_INC(mr->cursor);
+	return --mr->fill;
+}
+
+static inline void qm_mr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = (mr->ci + num) & (QM_MR_SIZE - 1);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = MR_PTR2IDX(mr->cursor);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(MR_ITR, ithresh);
+}
+
+/* ------------------------------ */
+/* --- Management command API --- */
+static inline int qm_mc_init(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + QM_CL_CR;
+	mc->rr = portal->addr.ce + QM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			QM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? QM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_idle;
+#endif
+	return 0;
+}
+
+static inline void qm_mc_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (mc->state != qman_mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct qm_mc_command *qm_mc_start(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void qm_mc_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_hw;
+#endif
+}
+
+static inline struct qm_mc_result *qm_mc_result(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= QM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_idle;
+#endif
+	return rr;
+}
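+
+/* Illustrative MC command sequence (the pattern used throughout qman.c):
+ *
+ *	mcc = qm_mc_start(&p->p);
+ *	mcc->queryfq.fqid = cpu_to_be32(fqid);
+ *	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+ *	while (!(mcr = qm_mc_result(&p->p)))
+ *		cpu_relax();
+ */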
+
+/* Portal interrupt register API */
+static inline void qm_isr_set_iperiod(struct qm_portal *portal, u16 iperiod)
+{
+	qm_out(ITPR, iperiod);
+}
+
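+/* The cache-inhibited ISR-block registers are spaced 64 bytes apart on
+ * ARM64 portals (cf. QM_REG_ISR at 0x3600 vs QM_REG_IIR at 0x36C0 above),
+ * hence the n << 6 stride; other targets use the legacy n << 2 word stride.
+ */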
+static inline u32 __qm_isr_read(struct qm_portal *portal, enum qm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 6));
+#else
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __qm_isr_write(struct qm_portal *portal, enum qm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 6), val);
+#else
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 2), val);
+#endif
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 80dde20..a7faf17 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -66,6 +66,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_qman_portal_init(uint32_t index, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct qman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -116,6 +117,14 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 	pcfg.node = NULL;
 	pcfg.irq = fd;
 
+	portal = qman_create_affine_portal(&pcfg, NULL);
+	if (!portal) {
+		pr_err("Qman portal initialisation failed (%d)\n",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	irq_map.type = dpaa_portal_qman;
 	irq_map.portal_cinh = map.addr.cinh;
 	process_portal_irq_map(fd, &irq_map);
@@ -124,10 +133,13 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 
 static int fsl_qman_portal_finish(void)
 {
+	__maybe_unused const struct qm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = qman_destroy_affine_portal();
+	BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index e9826c2..4ae2ea5 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -44,10 +44,6 @@
 #include "dpaa_sys.h"
 #include <fsl_qman.h>
 
-#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
-#error "_ARM64 requires _FSL_QMAN_FQ_LOOKUP"
-#endif
-
 /* Congestion Groups */
 /*
  * This wrapper represents a bit-array for the state of the 256 QMan congestion
@@ -201,13 +197,6 @@ void qm_set_liodns(struct qm_portal_config *pcfg);
 int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
 		       struct qm_mcr_cgrtestwrite *result);
 
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
-/* If the fq object pointer is greater than the size of context_b field,
- * than a lookup table is required.
- */
-int qman_setup_fq_lookup_table(size_t num_entries);
-#endif
-
 /*   QMan s/w corenet portal, low-level i/face	 */
 
 /*
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 740ee25..9735e1d 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -46,15 +46,6 @@ extern "C" {
 
 #include <dpaa_rbtree.h>
 
-/* FQ lookups (turn this on for 64bit user-space) */
-#if (__WORDSIZE == 64)
-#define CONFIG_FSL_QMAN_FQ_LOOKUP
-/* if FQ lookups are supported, this controls the number of initialised,
- * s/w-consumed FQs that can be supported at any one time.
- */
-#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
-#endif
-
 /* Last updated for v00.800 of the BG */
 
 /* Hardware constants */
@@ -1254,9 +1245,6 @@ struct qman_fq {
 	enum qman_fq_state state;
 	int cgr_groupid;
 	struct rb_node node;
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
-	u32 key;
-#endif
 };
 
 /*
@@ -1275,6 +1263,761 @@ struct qman_cgr {
 	struct list_head node;
 };
 
+/* Flags to qman_create_fq() */
+#define QMAN_FQ_FLAG_NO_ENQUEUE      0x00000001 /* can't enqueue */
+#define QMAN_FQ_FLAG_NO_MODIFY       0x00000002 /* can only enqueue */
+#define QMAN_FQ_FLAG_TO_DCPORTAL     0x00000004 /* consumed by CAAM/PME/Fman */
+#define QMAN_FQ_FLAG_LOCKED          0x00000008 /* multi-core locking */
+#define QMAN_FQ_FLAG_AS_IS           0x00000010 /* query h/w state */
+#define QMAN_FQ_FLAG_DYNAMIC_FQID    0x00000020 /* (de)allocate fqid */
+
+/* Flags to qman_destroy_fq() */
+#define QMAN_FQ_DESTROY_PARKED       0x00000001 /* FQ can be parked or OOS */
+
+/* Flags from qman_fq_state() */
+#define QMAN_FQ_STATE_CHANGING       0x80000000 /* 'state' is changing */
+#define QMAN_FQ_STATE_NE             0x40000000 /* retired FQ isn't empty */
+#define QMAN_FQ_STATE_ORL            0x20000000 /* retired FQ has ORL */
+#define QMAN_FQ_STATE_BLOCKOOS       0xe0000000 /* if any are set, no OOS */
+#define QMAN_FQ_STATE_CGR_EN         0x10000000 /* CGR enabled */
+#define QMAN_FQ_STATE_VDQCR          0x08000000 /* being volatile dequeued */
+
+/* Flags to qman_init_fq() */
+#define QMAN_INITFQ_FLAG_SCHED       0x00000001 /* schedule rather than park */
+#define QMAN_INITFQ_FLAG_LOCAL       0x00000004 /* set dest portal */
+
+/* Flags to qman_enqueue(). NB, the strange numbering is to align with hardware,
+ * bit-wise. (NB: the PME API is sensitive to these precise numberings too, so
+ * any change here should be audited in PME.)
+ */
+#define QMAN_ENQUEUE_FLAG_WATCH_CGR  0x00080000 /* watch congestion state */
+#define QMAN_ENQUEUE_FLAG_DCA        0x00008000 /* perform enqueue-DCA */
+#define QMAN_ENQUEUE_FLAG_DCA_PARK   0x00004000 /* If DCA, requests park */
+#define QMAN_ENQUEUE_FLAG_DCA_PTR(p)		/* If DCA, p is DQRR entry */ \
+		(((u32)(p) << 2) & 0x00000f00)
+#define QMAN_ENQUEUE_FLAG_C_GREEN    0x00000000 /* choose one C_*** flag */
+#define QMAN_ENQUEUE_FLAG_C_YELLOW   0x00000008
+#define QMAN_ENQUEUE_FLAG_C_RED      0x00000010
+#define QMAN_ENQUEUE_FLAG_C_OVERRIDE 0x00000018
+/* For the ORP-specific qman_enqueue_orp() variant;
+ * - this flag indicates "Not Last In Sequence", ie. all but the final fragment
+ *   of a frame.
+ */
+#define QMAN_ENQUEUE_FLAG_NLIS       0x01000000
+/* - this flag performs no enqueue but fills in an ORP sequence number that
+ *   would otherwise block it (eg. if a frame has been dropped).
+ */
+#define QMAN_ENQUEUE_FLAG_HOLE       0x02000000
+/* - this flag performs no enqueue but advances NESN to the given sequence
+ *   number.
+ */
+#define QMAN_ENQUEUE_FLAG_NESN       0x04000000
+
+/* Flags to qman_modify_cgr() */
+#define QMAN_CGR_FLAG_USE_INIT       0x00000001
+#define QMAN_CGR_MODE_FRAME          0x00000001
+
+/**
+ * qman_get_portal_index - get portal configuration index
+ */
+int qman_get_portal_index(void);
+
+/**
+ * qman_affine_channel - return the channel ID of a portal
+ * @cpu: the cpu whose affine portal is the subject of the query
+ *
+ * If @cpu is -1, the affine portal for the current CPU will be used. It is a
+ * bug to call this function for any value of @cpu (other than -1) that is not a
+ * member of the cpu mask.
+ */
+u16 qman_affine_channel(int cpu);
+
+/**
+ * qman_set_vdq - Issue a volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @num: Number of Frames requested for volatile dequeue
+ *
+ * This function will issue a volatile dequeue command to the QMAN.
+ */
+int qman_set_vdq(struct qman_fq *fq, u16 num);
+
+/**
+ * qman_dequeue - Get the DQRR entry after volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ *
+ * This function returns DQRR entries made available by a volatile dequeue
+ * command. It returns NULL once no further packets are available on the
+ * DQRR.
+ */
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
+
+/**
+ * qman_dqrr_consume - Consume the DQRR entry after volatile dequeue
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @dq: DQRR entry to consume. This is the one which is provided by the
+ *    qman_dequeue() call above.
+ *
+ * This will consume the DQRR entry and make it available for the next
+ * volatile dequeue.
+ */
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq);
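+
+/*
+ * The three calls above are used together; a minimal sketch, where
+ * process_fd() stands in for a hypothetical consumer:
+ *
+ *	struct qm_dqrr_entry *dq;
+ *
+ *	qman_set_vdq(fq, 4);
+ *	while ((dq = qman_dequeue(fq)) != NULL) {
+ *		process_fd(&dq->fd);
+ *		qman_dqrr_consume(fq, dq);
+ *	}
+ */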
+
+/**
+ * qman_poll_dqrr - process DQRR (fast-path) entries
+ * @limit: the maximum number of DQRR entries to process
+ *
+ * Use of this function requires that DQRR processing not be interrupt-driven.
+ * Ie. the value returned by qman_irqsource_get() should not include
+ * QM_PIRQ_DQRI. If the current CPU is sharing a portal hosted on another CPU,
+ * this function will return -EINVAL, otherwise the return value is >=0 and
+ * represents the number of DQRR entries processed.
+ */
+int qman_poll_dqrr(unsigned int limit);
+
+/**
+ * qman_poll
+ *
+ * Dispatcher logic on a cpu can use this to trigger any maintenance of the
+ * affine portal. There are two classes of portal processing in question;
+ * fast-path (which involves demuxing dequeue ring (DQRR) entries and tracking
+ * enqueue ring (EQCR) consumption), and slow-path (which involves EQCR
+ * thresholds, congestion state changes, etc). This function does whatever
+ * processing is not triggered by interrupts.
+ *
+ * Note, if DQRR and some slow-path processing are poll-driven (rather than
+ * interrupt-driven) then this function uses a heuristic to determine how often
+ * to run slow-path processing - as slow-path processing introduces at least a
+ * minimum latency each time it is run, whereas fast-path (DQRR) processing is
+ * close to zero-cost if there is no work to be done.
+ */
+void qman_poll(void);
+
+/**
+ * qman_stop_dequeues - Stop h/w dequeuing to the s/w portal
+ *
+ * Disables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_stop_dequeues(void);
+
+/**
+ * qman_start_dequeues - (Re)start h/w dequeuing to the s/w portal
+ *
+ * Enables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_start_dequeues(void);
+
+/**
+ * qman_static_dequeue_add - Add pool channels to the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Adds a set of pool channels to the portal's static dequeue command register
+ * (SDQCR). The requested pools are limited to those the portal has dequeue
+ * access to.
+ */
+void qman_static_dequeue_add(u32 pools);
+
+/**
+ * qman_static_dequeue_del - Remove pool channels from the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Removes a set of pool channels from the portal's static dequeue command
+ * register (SDQCR). The requested pools are limited to those the portal has
+ * dequeue access to.
+ */
+void qman_static_dequeue_del(u32 pools);
+
+/**
+ * qman_static_dequeue_get - return the portal's current SDQCR
+ *
+ * Returns the portal's current static dequeue command register (SDQCR). The
+ * entire register is returned, so if only the currently-enabled pool channels
+ * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
+ */
+u32 qman_static_dequeue_get(void);
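+
+/*
+ * For example, to extract only the currently-enabled pool channels from the
+ * returned register value (a sketch):
+ *
+ *	u32 pools = qman_static_dequeue_get() & QM_SDQCR_CHANNELS_POOL_MASK;
+ */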
+
+/**
+ * qman_dca - Perform a Discrete Consumption Acknowledgment
+ * @dq: the DQRR entry to be consumed
+ * @park_request: indicates whether the held-active FQ should be parked
+ *
+ * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had
+ * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this
+ * does not take a 'portal' argument but implies the core affine portal from the
+ * cpu that is currently executing the function. For reasons of locking, this
+ * function must be called from the same CPU as that which processed the DQRR
+ * entry in the first place.
+ */
+void qman_dca(struct qm_dqrr_entry *dq, int park_request);
+
+/**
+ * qman_eqcr_is_empty - Determine if portal's EQCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * enqueues for the local portal have been processed by Qman but can't use the
+ * QMAN_ENQUEUE_FLAG_WAIT_SYNC flag to do this from the final qman_enqueue().
+ * The function forces tracking of EQCR consumption (which normally doesn't
+ * happen until enqueue processing needs to find space to put new enqueue
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int qman_eqcr_is_empty(void);
+
+/**
+ * qman_set_dc_ern - Set the handler for DCP enqueue rejection notifications
+ * @handler: callback for processing DCP ERNs
+ * @affine: whether this handler is specific to the locally affine portal
+ *
+ * If a hardware block's interface to Qman (ie. its direct-connect portal, or
+ * DCP) is configured not to receive enqueue rejections, then any enqueues
+ * through that DCP that are rejected will be sent to a given software portal.
+ * If @affine is non-zero, then this handler will only be used for DCP ERNs
+ * received on the portal affine to the current CPU. If multiple CPUs share a
+ * portal and they all call this function, they will be setting the handler for
+ * the same portal! If @affine is zero, then this handler will be global to all
+ * portals handled by this instance of the driver. Only those portals that do
+ * not have their own affine handler will use the global handler.
+ */
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
+
+	/* FQ management */
+	/* ------------- */
+/**
+ * qman_create_fq - Allocates a FQ
+ * @fqid: the index of the FQD to encapsulate, must be "Out of Service"
+ * @flags: bit-mask of QMAN_FQ_FLAG_*** options
+ * @fq: memory for storing the 'fq', with callbacks filled in
+ *
+ * Creates a frame queue object for the given @fqid, unless the
+ * QMAN_FQ_FLAG_DYNAMIC_FQID flag is set in @flags, in which case a FQID is
+ * dynamically allocated (or the function fails if none are available). Once
+ * created, the caller should not touch the memory at 'fq' except as extended to
+ * adjacent memory for user-defined fields (see the definition of "struct
+ * qman_fq" for more info). NO_MODIFY is only intended for enqueuing to
+ * pre-existing frame-queues that aren't to be otherwise interfered with, it
+ * prevents all other modifications to the frame queue. The TO_DCPORTAL flag
+ * causes the driver to honour any contextB modifications requested in the
+ * qm_init_fq() API, as this indicates the frame queue will be consumed by a
+ * direct-connect portal (PME, CAAM, or Fman). When frame queues are consumed by
+ * software portals, the contextB field is controlled by the driver and can't be
+ * modified by the caller. If the AS_IS flag is specified, management commands
+ * will be used on portal @p to query state for frame queue @fqid and construct
+ * a frame queue object based on that, rather than assuming/requiring that it be
+ * Out of Service.
+ */
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
+
+/**
+ * qman_destroy_fq - Deallocates a FQ
+ * @fq: the frame queue object to release
+ * @flags: bit-mask of QMAN_FQ_FREE_*** options
+ *
+ * The memory for this frame queue object ('fq' provided in qman_create_fq()) is
+ * not deallocated but the caller regains ownership, to do with as desired. The
+ * FQ must be in the 'out-of-service' state unless the QMAN_FQ_FREE_PARKED flag
+ * is specified, in which case it may also be in the 'parked' state.
+ */
+void qman_destroy_fq(struct qman_fq *fq, u32 flags);
+
+/**
+ * qman_fq_fqid - Queries the frame queue ID of a FQ object
+ * @fq: the frame queue object to query
+ */
+u32 qman_fq_fqid(struct qman_fq *fq);
+
+/**
+ * qman_fq_state - Queries the state of a FQ object
+ * @fq: the frame queue object to query
+ * @state: pointer to state enum to return the FQ scheduling state
+ * @flags: pointer to state flags to receive QMAN_FQ_STATE_*** bitmask
+ *
+ * Queries the state of the FQ object, without performing any h/w commands.
+ * This captures the state, as seen by the driver, at the time the function
+ * executes.
+ */
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
+
+/**
+ * qman_init_fq - Initialises FQ fields, leaves the FQ "parked" or "scheduled"
+ * @fq: the frame queue object to modify, must be 'parked' or new.
+ * @flags: bit-mask of QMAN_INITFQ_FLAG_*** options
+ * @opts: the FQ-modification settings, as defined in the low-level API
+ *
+ * The @opts parameter comes from the low-level portal API. Select
+ * QMAN_INITFQ_FLAG_SCHED in @flags to cause the frame queue to be scheduled
+ * rather than parked. NB, @opts can be NULL.
+ *
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver;
+ * 1. the 'count' and 'fqid' fields are always ignored (this operation only
+ * affects one frame queue: @fq).
+ * 2. the QM_INITFQ_WE_CONTEXTB option of the 'we_mask' field and the associated
+ * 'fqd' structure's 'context_b' field are sometimes overwritten;
+ *   - if @fq was not created with QMAN_FQ_FLAG_TO_DCPORTAL, then context_b is
+ *     initialised to a value used by the driver for demux.
+ *   - if context_b is initialised for demux, so is context_a in case stashing
+ *     is requested (see item 4).
+ * (So caller control of context_b is only possible for TO_DCPORTAL frame queue
+ * objects.)
+ * 3. if @flags contains QMAN_INITFQ_FLAG_LOCAL, the 'fqd' structure's
+ * 'dest::channel' field will be overwritten to match the portal used to issue
+ * the command. If the WE_DESTWQ write-enable bit had already been set by the
+ * caller, the channel workqueue will be left as-is, otherwise the write-enable
+ * bit is set and the workqueue is set to a default of 4. If the "LOCAL" flag
+ * isn't set, the destination channel/workqueue fields and the write-enable bit
+ * are left as-is.
+ * 4. if the driver overwrites context_a/b for demux, then if
+ * QM_INITFQ_WE_CONTEXTA is set, the driver will only overwrite
+ * context_a.address fields and will leave the stashing fields provided by the
+ * user alone, otherwise it will zero out the context_a.stashing fields.
+ */
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
+
+/**
+ * qman_schedule_fq - Schedules a FQ
+ * @fq: the frame queue object to schedule, must be 'parked'
+ *
+ * Schedules the frame queue, which must be Parked, which takes it to
+ * Tentatively-Scheduled or Truly-Scheduled depending on its fill-level.
+ */
+int qman_schedule_fq(struct qman_fq *fq);
+
+/**
+ * qman_retire_fq - Retires a FQ
+ * @fq: the frame queue object to retire
+ * @flags: FQ flags (as per qman_fq_state) if retirement completes immediately
+ *
+ * Retires the frame queue. This returns zero if it succeeds immediately, +1 if
+ * the retirement was started asynchronously, otherwise it returns negative for
+ * failure. When this function returns zero, @flags is set to indicate whether
+ * the retired FQ is empty and/or whether it has any ORL fragments (to show up
+ * as ERNs). Otherwise the corresponding flags will be known when a subsequent
+ * FQRN message shows up on the portal's message ring.
+ *
+ * NB, if the retirement is asynchronous (the FQ was in the Truly Scheduled or
+ * Active state), the completion will be via the message ring as a FQRN - but
+ * the corresponding callback may occur before this function returns!! Ie. the
+ * caller should be prepared to accept the callback as the function is called,
+ * not only once it has returned.
+ */
+int qman_retire_fq(struct qman_fq *fq, u32 *flags);
+
+/**
+ * qman_oos_fq - Puts a FQ "out of service"
+ * @fq: the frame queue object to be put out-of-service, must be 'retired'
+ *
+ * The frame queue must be retired and empty, and if any order restoration list
+ * was released as ERNs at the time of retirement, they must all be consumed.
+ */
+int qman_oos_fq(struct qman_fq *fq);
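+
+/*
+ * Taken together, the calls above form the usual FQ lifecycle; a sketch
+ * with error handling elided and a hypothetical dqrr callback:
+ *
+ *	struct qman_fq fq = { .cb.dqrr = my_dqrr_cb };
+ *	u32 flags;
+ *
+ *	qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, &fq);
+ *	qman_init_fq(&fq, QMAN_INITFQ_FLAG_SCHED, NULL);
+ *	...
+ *	qman_retire_fq(&fq, &flags);
+ *	qman_oos_fq(&fq);
+ *	qman_destroy_fq(&fq, 0);
+ */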
+
+/**
+ * qman_fq_flow_control - Set the XON/XOFF state of a FQ
+ * @fq: the frame queue object whose XON/XOFF state is to be set; must not be
+ * in the 'oos', 'retired' or 'parked' state
+ * @xon: boolean to set fq in XON or XOFF state
+ *
+ * The frame queue should be in the Tentatively Scheduled or Truly Scheduled
+ * state, otherwise the IFSI interrupt will be asserted.
+ */
+int qman_fq_flow_control(struct qman_fq *fq, int xon);
+
+/**
+ * qman_query_fq - Queries FQD fields (via h/w query command)
+ * @fq: the frame queue object to be queried
+ * @fqd: storage for the queried FQD fields
+ */
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd);
+
+/**
+ * qman_query_fq_has_pkts - Queries non-programmable FQD fields and returns 1
+ * if packets are present on the frame queue, 0 otherwise
+ * @fq: the frame queue object to be queried
+ */
+int qman_query_fq_has_pkts(struct qman_fq *fq);
+
+/**
+ * qman_query_fq_np - Queries non-programmable FQD fields
+ * @fq: the frame queue object to be queried
+ * @np: storage for the queried FQD fields
+ */
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
+
+/**
+ * qman_query_wq - Queries work queue lengths
+ * @query_dedicated: If non-zero, query the length of WQs in the channel
+ *		dedicated to this software portal. Otherwise, query the length
+ *		of WQs in the channel specified in @wq.
+ * @wq: storage for the queried WQ lengths. Also specifies the channel to
+ *	query if @query_dedicated is zero.
+ */
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
+
+/**
+ * qman_volatile_dequeue - Issue a volatile dequeue command
+ * @fq: the frame queue object to dequeue from
+ * @flags: a bit-mask of QMAN_VOLATILE_FLAG_*** options
+ * @vdqcr: bit mask of QM_VDQCR_*** options, as per qm_dqrr_vdqcr_set()
+ *
+ * Attempts to lock access to the portal's VDQCR volatile dequeue functionality.
+ * The function will block and sleep if QMAN_VOLATILE_FLAG_WAIT is specified and
+ * the VDQCR is already in use, otherwise returns non-zero for failure. If
+ * QMAN_VOLATILE_FLAG_FINISH is specified, the function will only return once
+ * the VDQCR command has finished executing (ie. once the callback for the last
+ * DQRR entry resulting from the VDQCR command has been called). If not using
+ * the FINISH flag, completion can be determined either by detecting the
+ * presence of the QM_DQRR_STAT_UNSCHEDULED and QM_DQRR_STAT_DQCR_EXPIRED bits
+ * in the "stat" field of the "struct qm_dqrr_entry" passed to the FQ's dequeue
+ * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
+ * "flags" retrieved from qman_fq_state().
+ */
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
+
+/**
+ * qman_enqueue - Enqueue a frame to a frame queue
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ *
+ * Fills an entry in the EQCR of portal @qm to enqueue the frame described by
+ * @fd. The descriptor details are copied from @fd to the EQCR entry, the 'pid'
+ * field is ignored. The return value is non-zero on error, such as ring full
+ * (and FLAG_WAIT not specified), congestion avoidance (FLAG_WATCH_CGR
+ * specified), etc. If the ring is full and FLAG_WAIT is specified, this
+ * function will block. If FLAG_INTERRUPT is set, the EQCI bit of the portal
+ * interrupt will assert when Qman consumes the EQCR entry (subject to "status
+ * disable", "enable", and "inhibit" registers). If FLAG_DCA is set, Qman will
+ * perform an implied "discrete consumption acknowledgment" on the dequeue
+ * ring's (DQRR) entry, at the ring index specified by the FLAG_DCA_IDX(x)
+ * macro. (As an alternative to issuing explicit DCA actions on DQRR entries,
+ * this implicit DCA can delay the release of a "held active" frame queue
+ * corresponding to a DQRR entry until Qman consumes the EQCR entry - providing
+ * order-preservation semantics in packet-forwarding scenarios.) If FLAG_DCA is
+ * set, then FLAG_DCA_PARK can also be set to imply that the DQRR consumption
+ * acknowledgment should "park request" the "held active" frame queue. Ie.
+ * when the portal eventually releases that frame queue, it will be left in the
+ * Parked state rather than Tentatively Scheduled or Truly Scheduled. If the
+ * portal is watching congestion groups, the QMAN_ENQUEUE_FLAG_WATCH_CGR flag
+ * is requested, and the FQ is a member of a congestion group, then this
+ * function returns -EAGAIN if the congestion group is currently congested.
+ * Note, this does not eliminate ERNs, as the async interface means we can be
+ * sending enqueue commands to an un-congested FQ that becomes congested before
+ * the enqueue commands are processed, but it does minimise needless thrashing
+ * of an already busy hardware resource by throttling many of the to-be-dropped
+ * enqueues "at the source".
+ */
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
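+
+/*
+ * A minimal transmit path over qman_enqueue(); a sketch that simply
+ * busy-retries while the EQCR is full rather than using wait flags:
+ *
+ *	while (qman_enqueue(fq, &fd, 0) != 0)
+ *		;
+ */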
+
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send);
+
+typedef int (*qman_cb_precommit) (void *arg);
+
+/**
+ * qman_enqueue_orp - Enqueue a frame to a frame queue using an ORP
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ * @orp: the frame queue object used as an order restoration point.
+ * @orp_seqnum: the sequence number of this frame in the order restoration path
+ *
+ * Similar to qman_enqueue(), but with the addition of an Order Restoration
+ * Point (@orp) and corresponding sequence number (@orp_seqnum) for this
+ * enqueue operation to employ order restoration. Each frame queue object acts
+ * as an Order Definition Point (ODP) by providing each frame dequeued from it
+ * with an incrementing sequence number; this value is generally ignored unless
+ * that sequence of dequeued frames will need order restoration later. Each
+ * frame queue object also encapsulates an Order Restoration Point (ORP), which
+ * is a re-assembly context for re-ordering frames relative to their sequence
+ * numbers as they are enqueued. The ORP does not have to be within the frame
+ * queue that receives the enqueued frame; in fact it is usually the frame
+ * queue from which the frames were originally dequeued. For the purposes of
+ * order restoration, multiple frames (or "fragments") can be enqueued for a
+ * single sequence number by setting the QMAN_ENQUEUE_FLAG_NLIS flag for all
+ * enqueues except the final fragment of a given sequence number. Ordering
+ * between sequence numbers is guaranteed, even if fragments of different
+ * sequence numbers are interlaced with one another. Fragments of the same
+ * sequence number will retain the order in which they are enqueued. If no
+ * enqueue is to be performed, QMAN_ENQUEUE_FLAG_HOLE indicates that the given
+ * sequence number is to be "skipped" by the ORP logic (eg. if a frame has been
+ * dropped from a sequence), or QMAN_ENQUEUE_FLAG_NESN indicates that the given
+ * sequence number should become the ORP's "Next Expected Sequence Number".
+ *
+ * Side note: a frame queue object can be used purely as an ORP, without
+ * carrying any frames at all. Care should be taken not to deallocate a frame
+ * queue object that is being actively used as an ORP, as a future allocation
+ * of the frame queue object may start using the internal ORP before the
+ * previous use has finished.
+ */
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum);
+
+/**
+ * qman_alloc_fqid_range - Allocate a contiguous range of FQIDs
+ * @result: is set by the API to the base FQID of the allocated range
+ * @count: the number of FQIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count FQIDs
+ *
+ * Returns the number of frame queues allocated, or a negative error code. If
+ * @partial is non-zero, the allocation request may return a smaller range of
+ * FQs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_fqid(u32 *result)
+{
+	int ret = qman_alloc_fqid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_fqid_range - Release the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to deallocate
+ * @count: the number of FQIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of FQIDs
+ * that it can subsequently allocate from.
+ */
+void qman_release_fqid_range(u32 fqid, unsigned int count);
+static inline void qman_release_fqid(u32 fqid)
+{
+	qman_release_fqid_range(fqid, 1);
+}
+
+void qman_seed_fqid_range(u32 fqid, unsigned int count);
+
+int qman_shutdown_fq(u32 fqid);
+
+/**
+ * qman_reserve_fqid_range - Reserve the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to reserve
+ * @count: the number of FQIDs in the range
+ */
+int qman_reserve_fqid_range(u32 fqid, unsigned int count);
+static inline int qman_reserve_fqid(u32 fqid)
+{
+	return qman_reserve_fqid_range(fqid, 1);
+}
+
+/* Pool-channel management */
+/**
+ * qman_alloc_pool_range - Allocate a contiguous range of pool-channel IDs
+ * @result: is set by the API to the base pool-channel ID of the allocated range
+ * @count: the number of pool-channel IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of pool-channel IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_pool(u32 *result)
+{
+	int ret = qman_alloc_pool_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_pool_range - Release the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to deallocate
+ * @count: the number of pool-channel IDs in the range
+ */
+void qman_release_pool_range(u32 id, unsigned int count);
+static inline void qman_release_pool(u32 id)
+{
+	qman_release_pool_range(id, 1);
+}
+
+/**
+ * qman_reserve_pool_range - Reserve the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to reserve
+ * @count: the number of pool-channel IDs in the range
+ */
+int qman_reserve_pool_range(u32 id, unsigned int count);
+static inline int qman_reserve_pool(u32 id)
+{
+	return qman_reserve_pool_range(id, 1);
+}
+
+void qman_seed_pool_range(u32 id, unsigned int count);
+
+	/* CGR management */
+	/* -------------- */
+/**
+ * qman_create_cgr - Register a congestion group object
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: optional state of CGR settings
+ *
+ * Registers this object to receive congestion entry/exit callbacks on the
+ * portal affine to the cpu on which this API is executed. If @opts is
+ * NULL then only the callback (cgr->cb) function is registered. If @flags
+ * contains QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset
+ * any unspecified parameters) will be used rather than a modify hw command
+ * (which only modifies the specified parameters).
+ */
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
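+
+/*
+ * Sketch of a typical registration with initialised congestion-state
+ * change notifications (the callback, CGR id and the exact we_mask/cgr
+ * field names are illustrative):
+ *
+ *	struct qman_cgr cgr = { .cgrid = my_cgrid, .cb = my_cgr_cb };
+ *	struct qm_mcc_initcgr opts;
+ *
+ *	memset(&opts, 0, sizeof(opts));
+ *	opts.we_mask = QM_CGR_WE_CSCN_EN;
+ *	opts.cgr.cscn_en = QM_CGR_EN;
+ *	qman_create_cgr(&cgr, QMAN_CGR_FLAG_USE_INIT, &opts);
+ */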
+
+/**
+ * qman_create_cgr_to_dcp - Register a congestion group object to DCP portal
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @dcp_portal: the DCP portal to which the cgr object is registered.
+ * @opts: optional state of CGR settings
+ */
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_delete_cgr - Deregisters a congestion group object
+ * @cgr: the 'cgr' object to deregister
+ *
+ * "Unplugs" this CGR object from the portal affine to the cpu on which this API
+ * is executed. This must be executed on the same affine portal on which it was
+ * created.
+ */
+int qman_delete_cgr(struct qman_cgr *cgr);
+
+/**
+ * qman_modify_cgr - Modify CGR fields
+ * @cgr: the 'cgr' object to modify
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: the CGR-modification settings
+ *
+ * The @opts parameter comes from the low-level portal API, and can be NULL.
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver, in particular the 'cgrid' field is ignored (this operation
+ * only affects the given CGR object). If @flags contains
+ * QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset any
+ * unspecified parameters) will be used rather than a modify hw command (which
+ * only modifies the specified parameters).
+ */
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_query_cgr - Queries CGR fields
+ * @cgr: the 'cgr' object to query
+ * @result: storage for the queried congestion group record
+ */
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *result);
+
+/**
+ * qman_query_congestion - Queries the state of all congestion groups
+ * @congestion: storage for the queried state of all congestion groups
+ */
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
+
+/**
+ * qman_alloc_cgrid_range - Allocate a contiguous range of CGR IDs
+ * @result: is set by the API to the base CGR ID of the allocated range
+ * @count: the number of CGR IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of CGR IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_cgrid(u32 *result)
+{
+	int ret = qman_alloc_cgrid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_cgrid_range - Release the specified range of CGR IDs
+ * @id: the base CGR ID of the range to deallocate
+ * @count: the number of CGR IDs in the range
+ */
+void qman_release_cgrid_range(u32 id, unsigned int count);
+static inline void qman_release_cgrid(u32 id)
+{
+	qman_release_cgrid_range(id, 1);
+}
+
+/**
+ * qman_reserve_cgrid_range - Reserve the specified range of CGR IDs
+ * @id: the base CGR ID of the range to reserve
+ * @count: the number of CGR IDs in the range
+ */
+int qman_reserve_cgrid_range(u32 id, unsigned int count);
+static inline int qman_reserve_cgrid(u32 id)
+{
+	return qman_reserve_cgrid_range(id, 1);
+}
+
+void qman_seed_cgrid_range(u32 id, unsigned int count);
+
+	/* Helpers */
+	/* ------- */
+/**
+ * qman_poll_fq_for_init - Check if an FQ has been initialised from OOS
+ * @fqid: the FQID that will be initialised by other s/w
+ *
+ * In many situations, a FQID is provided for communication between s/w
+ * entities, and whilst the consumer is responsible for initialising and
+ * scheduling the FQ, the producer(s) generally create a wrapper FQ object
+ * using
+ *     qman_create_fq(..., QMAN_FQ_FLAG_NO_MODIFY, ...);
+ * and only call qman_enqueue() on it (no FQ initialisation, scheduling, etc).
+ * However, data cannot be enqueued to the FQ until it is initialised out of
+ * the OOS state - this function polls for that condition. It is particularly
+ * useful for users of IPC functions - each endpoint's Rx FQ is the other
+ * endpoint's Tx FQ, so each side can initialise and schedule their Rx FQ object
+ * and then use this API on the (NO_MODIFY) Tx FQ object in order to
+ * synchronise. The function returns zero for success, +1 if the FQ is still in
+ * the OOS state, or negative if there was an error.
+ */
+static inline int qman_poll_fq_for_init(struct qman_fq *fq)
+{
+	struct qm_mcr_queryfq_np np;
+	int err;
+
+	err = qman_query_fq_np(fq, &np);
+	if (err)
+		return err;
+	if ((np.state & QM_MCR_NP_STATE_MASK) == QM_MCR_NP_STATE_OOS)
+		return 1;
+	return 0;
+}
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define cpu_to_hw_sg(x) (x)
+#define hw_sg_to_cpu(x) (x)
+#else
+#define cpu_to_hw_sg(x)  __cpu_to_hw_sg(x)
+#define hw_sg_to_cpu(x)  __hw_sg_to_cpu(x)
+
+static inline void __cpu_to_hw_sg(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = cpu_to_be64(sgentry->opaque);
+	sgentry->val = cpu_to_be32(sgentry->val);
+	sgentry->val_off = cpu_to_be16(sgentry->val_off);
+}
+
+static inline void __hw_sg_to_cpu(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = be64_to_cpu(sgentry->opaque);
+	sgentry->val = be32_to_cpu(sgentry->val);
+	sgentry->val_off = be16_to_cpu(sgentry->val_off);
+}
+#endif
 
 #ifdef __cplusplus
 }
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index b0d953f..a4897b0 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -42,6 +42,7 @@
 #define __FSL_USD_H
 
 #include <compat.h>
+#include <fsl_qman.h>
 
 #ifdef __cplusplus
 extern "C" {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 12/40] bus/dpaa: add BMAN driver core
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (10 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 13/40] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
                     ` (29 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Buffer Manager (BMan) is a hardware buffer pool management block that
allows software and accelerators on the datapath to acquire and release
buffers in order to build frames.

This patch adds the core routines.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman_driver.c | 311 +++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_priv.h   | 125 ++++++++++
 drivers/bus/dpaa/include/fsl_bman.h       | 375 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   5 +
 5 files changed, 817 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index ad68828..24dfa13 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -71,6 +71,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
 	base/qbman/dpaa_alloc.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
new file mode 100644
index 0000000..fb3c50e
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -0,0 +1,311 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_branch_prediction.h>
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "bman_priv.h"
+#include <sys/ioctl.h>
+
+/*
+ * Global variables for the max portal/pool number this BMan version supports
+ */
+u16 bman_ip_rev;
+u16 bman_pool_max;
+void *bman_ccsr_map;
+
+/*****************/
+/* Portal driver */
+/*****************/
+
+static __thread int fd = -1;
+static __thread struct bm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_bman
+};
+
+static int fsl_bman_portal_init(uint32_t idx, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!");
+		return -EINVAL;
+	}
+	/* Allocate and map a bman portal */
+	map.index = idx;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+	pcfg.is_shared = is_shared;
+	pcfg.index = map.index;
+	bman_depletion_fill(&pcfg.mask);
+
+	fd = open(BMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("BMan irq init failed");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+	/* Use the IRQ FD as a unique IRQ number */
+	pcfg.irq = fd;
+
+	/* Set the IRQ number */
+	irq_map.type = dpaa_portal_bman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_bman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int bman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_bman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int bman_thread_finish(void)
+{
+	return fsl_bman_portal_finish();
+}
+
+void bman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int bman_init_ccsr(const struct device_node *node)
+{
+	static int ccsr_map_fd;
+	uint64_t phys_addr;
+	const uint32_t *bman_addr;
+	uint64_t regs_size;
+
+	bman_addr = of_get_address(node, 0, &regs_size, NULL);
+	if (!bman_addr) {
+		pr_err("of_get_address cannot return BMan address");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(node, bman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open(BMAN_CCSR_MAP, O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Can not open /dev/mem for BMan CCSR map");
+		return ccsr_map_fd;
+	}
+
+	bman_ccsr_map = mmap(NULL, regs_size, PROT_READ |
+			     PROT_WRITE, MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (bman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map BMan CCSR base Bman: "
+		       "0x%x Phys: 0x%lx size 0x%lx",
+		       *bman_addr, phys_addr, regs_size);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int bman_global_init(void)
+{
+	const struct device_node *dt_node;
+	static int done;
+
+	if (done)
+		return -EBUSY;
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,bman-portal");
+	if (!dt_node) {
+		pr_err("No bman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,bman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,bman-portal-1.0.0")) {
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.0.8")) {
+		bman_ip_rev = BMAN_REV20;
+		bman_pool_max = 8;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.3")) {
+		bman_ip_rev = BMAN_REV21;
+		bman_pool_max = 64;
+	} else {
+		pr_warn("unknown BMan version in portal node,default "
+			"to rev1.0");
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	}
+
+	if (!bman_ip_rev) {
+		pr_err("Unknown bman portal version\n");
+		return -ENODEV;
+	}
+	{
+		const struct device_node *dn = of_find_compatible_node(NULL,
+							NULL, "fsl,bman");
+		if (!dn)
+			pr_err("No bman device node available");
+		else if (bman_init_ccsr(dn))
+			pr_err("BMan CCSR map failed.");
+	}
+
+	done = 1;
+	return 0;
+}
+
+#define BMAN_POOL_CONTENT(n) (0x0600 + ((n) * 0x04))
+u32 bm_pool_free_buffers(u32 bpid)
+{
+	return in_be32(bman_ccsr_map + BMAN_POOL_CONTENT(bpid));
+}
+
+static u32 __generate_thresh(u32 val, int roundup)
+{
+	u32 e = 0;      /* exponent; 'val' holds the coefficient */
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	DPAA_ASSERT(e < 0x10);
+	return (val | (e << 8));
+}
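+
+/*
+ * Example: __generate_thresh(4096, 0) halves the value five times down to
+ * the 8-bit mantissa 128 and returns (128 | (5 << 8)) == 0x580, which the
+ * hardware decodes as 128 * 2^5 = 4096.
+ */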
+
+#define POOL_SWDET(n)       (0x0000 + ((n) * 0x04))
+#define POOL_HWDET(n)       (0x0100 + ((n) * 0x04))
+#define POOL_SWDXT(n)       (0x0200 + ((n) * 0x04))
+#define POOL_HWDXT(n)       (0x0300 + ((n) * 0x04))
+int bm_pool_set(u32 bpid, const u32 *thresholds)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	out_be32(bman_ccsr_map + POOL_SWDET(bpid),
+		 __generate_thresh(thresholds[0], 0));
+	out_be32(bman_ccsr_map + POOL_SWDXT(bpid),
+		 __generate_thresh(thresholds[1], 1));
+	out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+		 __generate_thresh(thresholds[2], 0));
+	out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+		 __generate_thresh(thresholds[3], 1));
+	return 0;
+}
+
+#define BMAN_LOW_DEFAULT_THRESH		0x40
+#define BMAN_HIGH_DEFAULT_THRESH		0x80
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	if (low_thresh && high_thresh) {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(low_thresh, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(high_thresh, 1));
+	} else {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(BMAN_LOW_DEFAULT_THRESH, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(BMAN_HIGH_DEFAULT_THRESH, 1));
+	}
+	return 0;
+}
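+
+/*
+ * Sketch: program software depletion entry/exit at 0x40/0x80 and hardware
+ * entry/exit at 0x20/0x40; the array order matches bm_pool_set() above:
+ *
+ *	const u32 thresh[4] = { 0x40, 0x80, 0x20, 0x40 };
+ *
+ *	bm_pool_set(bpid, thresh);
+ */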
diff --git a/drivers/bus/dpaa/base/qbman/bman_priv.h b/drivers/bus/dpaa/base/qbman/bman_priv.h
new file mode 100644
index 0000000..07d9cec
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_priv.h
@@ -0,0 +1,125 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_PRIV_H
+#define __BMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_bman.h>
+
+/* Revision info (for errata and feature handling) */
+#define BMAN_REV10 0x0100
+#define BMAN_REV20 0x0200
+#define BMAN_REV21 0x0201
+
+#define BMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+#define BMAN_CCSR_MAP "/dev/mem"
+
+/* This mask contains all the "irqsource" bits visible to API users */
+#define BM_PIRQ_VISIBLE	(BM_PIRQ_RCRI | BM_PIRQ_BSCN)
+
+/* These are bm_<reg>_<verb>(). So for example, bm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define bm_isr_status_read(bm)		__bm_isr_read(bm, bm_isr_status)
+#define bm_isr_status_clear(bm, m)	__bm_isr_write(bm, bm_isr_status, m)
+#define bm_isr_enable_read(bm)		__bm_isr_read(bm, bm_isr_enable)
+#define bm_isr_enable_write(bm, v)	__bm_isr_write(bm, bm_isr_enable, v)
+#define bm_isr_disable_read(bm)		__bm_isr_read(bm, bm_isr_disable)
+#define bm_isr_disable_write(bm, v)	__bm_isr_write(bm, bm_isr_disable, v)
+#define bm_isr_inhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 1)
+#define bm_isr_uninhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 0)
+
+/*
+ * Global variables for the max portal/pool number this BMan version supports
+ */
+extern u16 bman_pool_max;
+
+/* used by CCSR and portal interrupt code */
+enum bm_isr_reg {
+	bm_isr_status = 0,
+	bm_isr_enable = 1,
+	bm_isr_disable = 2,
+	bm_isr_inhibit = 3
+};
+
+struct bm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* This is used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.).
+	 */
+	int is_shared;
+	/* These are the buffer pool IDs that may be used via this portal. */
+	struct bman_depletion mask;
+
+};
+
+int bman_init_ccsr(const struct device_node *node);
+
+struct bman_portal *bman_create_affine_portal(
+			const struct bm_portal_config *config);
+const struct bm_portal_config *bman_destroy_affine_portal(void);
+
+/* Set depletion thresholds associated with a buffer pool. Requires that the
+ * operating system have access to Bman CCSR (ie. compiled in support and
+ * run-time access courtesy of the device-tree).
+ */
+int bm_pool_set(u32 bpid, const u32 *thresholds);
+
+/* Read the free buffer count for a given buffer pool */
+u32 bm_pool_free_buffers(u32 bpid);
+
+#endif /* __BMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
new file mode 100644
index 0000000..383106b
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -0,0 +1,375 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_BMAN_H
+#define __FSL_BMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* This wrapper represents a bit-array for the depletion state of the 64 Bman
+ * buffer pools.
+ */
+struct bman_depletion {
+	u32 state[2];
+};
+
+static inline void bman_depletion_init(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = 0;
+}
+
+static inline void bman_depletion_fill(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = ~0;
+}
+
+/* --- Bman data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct bm_rcr_entry;	/* RCR (Release Command Ring) entries */
+struct bm_mc_command;	/* MC (Management Command) command */
+struct bm_mc_result;	/* MC result */
+
+/* Code-reduction, define a wrapper for 48-bit buffers. In cases where a buffer
+ * pool id specific to this buffer is needed (BM_RCR_VERB_CMD_BPID_MULTI,
+ * BM_MCC_VERB_ACQUIRE), the 'bpid' field is used.
+ */
+struct bm_buffer {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1;
+			u8 bpid;
+			u16 hi; /* High 16-bits of 48-bit address */
+			u32 lo; /* Low 32-bits of 48-bit address */
+#else
+			u32 lo;
+			u16 hi;
+			u8 bpid;
+			u8 __reserved;
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:16;
+			u64 addr:48;
+#else
+			u64 addr:48;
+			u64 __notaddress:16;
+#endif
+		};
+		u64 opaque;
+	};
+} __attribute__((aligned(8)));
+static inline u64 bm_buffer_get64(const struct bm_buffer *buf)
+{
+	return buf->addr;
+}
+
+static inline dma_addr_t bm_buf_addr(const struct bm_buffer *buf)
+{
+	return (dma_addr_t)buf->addr;
+}
+
+#define bm_buffer_set64(buf, v) \
+	do { \
+		struct bm_buffer *__buf931 = (buf); \
+		__buf931->hi = upper_32_bits(v); \
+		__buf931->lo = lower_32_bits(v); \
+	} while (0)
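+
+/* Usage sketch (illustrative): only bits 47:0 of 'v' survive a round trip,
+ * since 'hi' is 16 bits wide and keeps bits 47:32 of upper_32_bits(v):
+ *
+ *	struct bm_buffer buf;
+ *	bm_buffer_set64(&buf, phys);
+ *	// bm_buf_addr(&buf) == (phys & 0xffffffffffffULL)
+ */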
+
+/* See 1.5.3.5.4: "Release Command" */
+struct bm_rcr_entry {
+	union {
+		struct {
+			u8 __dont_write_directly__verb;
+			u8 bpid; /* used with BM_RCR_VERB_CMD_BPID_SINGLE */
+			u8 __reserved1[62];
+		};
+		struct bm_buffer bufs[8];
+	};
+} __packed;
+#define BM_RCR_VERB_VBIT		0x80
+#define BM_RCR_VERB_CMD_MASK		0x70	/* one of the two values below */
+#define BM_RCR_VERB_CMD_BPID_SINGLE	0x20
+#define BM_RCR_VERB_CMD_BPID_MULTI	0x30
+#define BM_RCR_VERB_BUFCOUNT_MASK	0x0f	/* values 1..8 */
+
+/* See 1.5.3.1: "Acquire Command" */
+/* See 1.5.3.2: "Query Command" */
+struct bm_mcc_acquire {
+	u8 bpid;
+	u8 __reserved1[62];
+} __packed;
+struct bm_mcc_query {
+	u8 __reserved2[63];
+} __packed;
+struct bm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct bm_mcc_acquire acquire;
+		struct bm_mcc_query query;
+	};
+} __packed;
+#define BM_MCC_VERB_VBIT		0x80
+#define BM_MCC_VERB_CMD_MASK		0x70	/* one of the two commands below */
+#define BM_MCC_VERB_CMD_ACQUIRE		0x10
+#define BM_MCC_VERB_CMD_QUERY		0x40
+#define BM_MCC_VERB_ACQUIRE_BUFCOUNT	0x0f	/* values 1..8 go here */
+
+/* See 1.5.3.3: "Acquire Response" */
+/* See 1.5.3.4: "Query Response" */
+struct bm_pool_state {
+	u8 __reserved1[32];
+	/* "availability state" and "depletion state" */
+	struct {
+		u8 __reserved1[8];
+		/* Access using bman_depletion_***() */
+		struct bman_depletion state;
+	} as, ds;
+};
+
+struct bm_mc_result {
+	union {
+		struct {
+			u8 verb;
+			u8 __reserved1[63];
+		};
+		union {
+			struct {
+				u8 __reserved1;
+				u8 bpid;
+				u8 __reserved2[62];
+			};
+			struct bm_buffer bufs[8];
+		} acquire;
+		struct bm_pool_state query;
+	};
+} __packed;
+#define BM_MCR_VERB_VBIT		0x80
+#define BM_MCR_VERB_CMD_MASK		BM_MCC_VERB_CMD_MASK
+#define BM_MCR_VERB_CMD_ACQUIRE		BM_MCC_VERB_CMD_ACQUIRE
+#define BM_MCR_VERB_CMD_QUERY		BM_MCC_VERB_CMD_QUERY
+#define BM_MCR_VERB_CMD_ERR_INVALID	0x60
+#define BM_MCR_VERB_CMD_ERR_ECC		0x70
+#define BM_MCR_VERB_ACQUIRE_BUFCOUNT	BM_MCC_VERB_ACQUIRE_BUFCOUNT /* 0..8 */
+
+/* Portal and Buffer Pools */
+/* Represents a managed portal */
+struct bman_portal;
+
+/* This object type represents Bman buffer pools. */
+struct bman_pool;
+
+/* This struct specifies parameters for a bman_pool object. */
+struct bman_pool_params {
+	/* index of the buffer pool to encapsulate (0-63), ignored if
+	 * BMAN_POOL_FLAG_DYNAMIC_BPID is set.
+	 */
+	u32 bpid;
+	/* bit-mask of BMAN_POOL_FLAG_*** options */
+	u32 flags;
+	/* depletion-entry/exit thresholds, if BMAN_POOL_FLAG_THRESH is set. NB:
+	 * this is only allowed if BMAN_POOL_FLAG_DYNAMIC_BPID is used *and*
+	 * when run in the control plane (which controls Bman CCSR). This array
+	 * matches the definition of bm_pool_set().
+	 */
+	u32 thresholds[4];
+};
+
+/* Flags to bman_new_pool() */
+#define BMAN_POOL_FLAG_NO_RELEASE    0x00000001 /* can't release to pool */
+#define BMAN_POOL_FLAG_ONLY_RELEASE  0x00000002 /* can only release to pool */
+#define BMAN_POOL_FLAG_DYNAMIC_BPID  0x00000008 /* (de)allocate bpid */
+#define BMAN_POOL_FLAG_THRESH        0x00000010 /* set depletion thresholds */
+
+/* Flags to bman_release() */
+#define BMAN_RELEASE_FLAG_NOW        0x00000008 /* issue immediate release */
+
+
+/**
+ * bman_get_portal_index - get portal configuration index
+ */
+int bman_get_portal_index(void);
+
+/**
+ * bman_rcr_is_empty - Determine if portal's RCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * releases for the local portal have been processed by Bman but can't use the
+ * BMAN_RELEASE_FLAG_WAIT_SYNC flag to do this from the final bman_release().
+ * The function forces tracking of RCR consumption (which normally doesn't
+ * happen until release processing needs to find space to put new release
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int bman_rcr_is_empty(void);
+
+/**
+ * bman_alloc_bpid_range - Allocate a contiguous range of BPIDs
+ * @result: is set by the API to the base BPID of the allocated range
+ * @count: the number of BPIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count BPIDs
+ *
+ * Returns the number of buffer pools allocated, or a negative error code. If
+ * @partial is non-zero, the allocation request may return a smaller range of
+ * BPs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be @count or negative.
+ */
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int bman_alloc_bpid(u32 *result)
+{
+	int ret = bman_alloc_bpid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * bman_release_bpid_range - Release the specified range of buffer pool IDs
+ * @bpid: the base BPID of the range to deallocate
+ * @count: the number of BPIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of BPIDs
+ * that it can subsequently allocate from.
+ */
+void bman_release_bpid_range(u32 bpid, unsigned int count);
+static inline void bman_release_bpid(u32 bpid)
+{
+	bman_release_bpid_range(bpid, 1);
+}
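+
+/* Typical pairing (illustrative): allocate a dynamic BPID, use the pool,
+ * then return the id to the allocator:
+ *
+ *	u32 bpid;
+ *	if (!bman_alloc_bpid(&bpid)) {
+ *		... use buffer pool 'bpid' ...
+ *		bman_release_bpid(bpid);
+ *	}
+ */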
+
+int bman_reserve_bpid_range(u32 bpid, unsigned int count);
+static inline int bman_reserve_bpid(u32 bpid)
+{
+	return bman_reserve_bpid_range(bpid, 1);
+}
+
+void bman_seed_bpid_range(u32 bpid, unsigned int count);
+
+int bman_shutdown_pool(u32 bpid);
+
+/**
+ * bman_new_pool - Allocates a Buffer Pool object
+ * @params: parameters specifying the buffer pool ID and behaviour
+ *
+ * Creates a pool object for the given @params. NB, the fields from @params
+ * are copied into the new pool object, so the structure provided by the
+ * caller can be released or reused after the function returns.
+ */
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
+
+/**
+ * bman_free_pool - Deallocates a Buffer Pool object
+ * @pool: the pool object to release
+ */
+void bman_free_pool(struct bman_pool *pool);
+
+/**
+ * bman_get_params - Returns a pool object's parameters.
+ * @pool: the pool object
+ *
+ * The returned pointer refers to state within the pool object so must not be
+ * modified and can no longer be read once the pool object is destroyed.
+ */
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
+
+/**
+ * bman_release - Release buffer(s) to the buffer pool
+ * @pool: the buffer pool object to release to
+ * @bufs: an array of buffers to release
+ * @num: the number of buffers in @bufs (1-8)
+ * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
+ *
+ */
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
+/**
+ * bman_acquire - Acquire buffer(s) from a buffer pool
+ * @pool: the buffer pool object to acquire from
+ * @bufs: array for storing the acquired buffers
+ * @num: the number of buffers desired (@bufs is at least this big)
+ *
+ * Issues an "Acquire" command via the portal's management command interface.
+ * The return value will be the number of buffers obtained from the pool, or a
+ * negative error code if a h/w error or pool starvation was encountered.
+ */
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
+/**
+ * bman_query_pools - Query all buffer pool states
+ * @state: storage for the queried availability and depletion states
+ */
+int bman_query_pools(struct bm_pool_state *state);
+
+/**
+ * bman_query_free_buffers - Query how many free buffers are in a buffer pool
+ * @pool: the buffer pool object to query
+ *
+ * Return the number of free buffers
+ */
+u32 bman_query_free_buffers(struct bman_pool *pool);
+
+/**
+ * bman_update_pool_thresholds - Change the buffer pool's depletion thresholds
+ * @pool: the buffer pool object to which the thresholds will be set
+ * @thresholds: the new thresholds
+ */
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds);
+
+/**
+ * bm_pool_set_hw_threshold - Change the buffer pool's thresholds
+ * @bpid: the buffer pool ID
+ * @low_thresh: low threshold
+ * @high_thresh: high threshold
+ */
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_BMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index a4897b0..a3243af 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -50,7 +50,9 @@ extern "C" {
 
 /* Thread-entry/exit hooks; */
 int qman_thread_init(void);
+int bman_thread_init(void);
 int qman_thread_finish(void);
+int bman_thread_finish(void);
 
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
@@ -92,9 +94,12 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal);
  * into another blocking read/select/poll.
  */
 void qman_thread_irq(void);
+void bman_thread_irq(void);
 
 /* Global setup */
 int qman_global_init(void);
+int bman_global_init(void);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 13/40] bus/dpaa: add support for FMAN frame queue lookup
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (11 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
                     ` (28 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal
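
On 64-bit systems, a 'struct qman_fq' pointer does not fit in the 32-bit
context_b field of the frame queue descriptor (or in the 32-bit ERN tag),
so the FQ object can no longer be recovered directly from DQRR/MR
entries. Add a lookup table, enabled through CONFIG_FSL_QMAN_FQ_LOOKUP
on 64-bit user-space, which maps a 32-bit key to each FQ object; the key
is programmed into context_b/tag when the FQ is created and translated
back to the object pointer on both the fast and slow paths.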

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/qbman/qman.c        | 99 ++++++++++++++++++++++++++++++-
 drivers/bus/dpaa/base/qbman/qman_driver.c |  7 ++-
 drivers/bus/dpaa/base/qbman/qman_priv.h   | 11 ++++
 drivers/bus/dpaa/include/fsl_qman.h       | 12 ++++
 4 files changed, 126 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 829e671..f2bfcc2 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -176,6 +176,65 @@ static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
 	return fqtree_find(&p->retire_table, fqid);
 }
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+static void **qman_fq_lookup_table;
+static size_t qman_fq_lookup_table_size;
+
+int qman_setup_fq_lookup_table(size_t num_entries)
+{
+	/* Allocate 1 more entry since the first entry is not used */
+	num_entries++;
+	qman_fq_lookup_table = vmalloc((num_entries * sizeof(void *)));
+	if (!qman_fq_lookup_table) {
+		pr_err("QMan: Could not allocate fq lookup table\n");
+		return -ENOMEM;
+	}
+	memset(qman_fq_lookup_table, 0, num_entries * sizeof(void *));
+	qman_fq_lookup_table_size = num_entries;
+	pr_info("QMan: Allocated lookup table at %p, entry count %lu\n",
+		qman_fq_lookup_table,
+			(unsigned long)qman_fq_lookup_table_size);
+	return 0;
+}
+
+/* global structure that maintains fq object mapping */
+static DEFINE_SPINLOCK(fq_hash_table_lock);
+
+static int find_empty_fq_table_entry(u32 *entry, struct qman_fq *fq)
+{
+	u32 i;
+
+	spin_lock(&fq_hash_table_lock);
+	/* Can't use index zero because this has special meaning
+	 * in context_b field.
+	 */
+	for (i = 1; i < qman_fq_lookup_table_size; i++) {
+		if (qman_fq_lookup_table[i] == NULL) {
+			*entry = i;
+			qman_fq_lookup_table[i] = fq;
+			spin_unlock(&fq_hash_table_lock);
+			return 0;
+		}
+	}
+	spin_unlock(&fq_hash_table_lock);
+	return -ENOMEM;
+}
+
+static void clear_fq_table_entry(u32 entry)
+{
+	spin_lock(&fq_hash_table_lock);
+	BUG_ON(entry >= qman_fq_lookup_table_size);
+	qman_fq_lookup_table[entry] = NULL;
+	spin_unlock(&fq_hash_table_lock);
+}
+
+static inline struct qman_fq *get_fq_table_entry(u32 entry)
+{
+	BUG_ON(entry >= qman_fq_lookup_table_size);
+	return qman_fq_lookup_table[entry];
+}
+#endif
+
 static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
 {
 	/* Byteswap the FQD to HW format */
@@ -766,8 +825,13 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 				break;
 			case QM_MR_VERB_FQPN:
 				/* Parked */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+				fq = get_fq_table_entry(
+					be32_to_cpu(msg->fq.contextB));
+#else
 				fq = (void *)(uintptr_t)
 					be32_to_cpu(msg->fq.contextB);
+#endif
 				fq_state_change(p, fq, msg, verb);
 				if (fq->cb.fqs)
 					fq->cb.fqs(p, fq, &swapped_msg);
@@ -792,7 +856,11 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 			}
 		} else {
 			/* Its a software ERN */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(be32_to_cpu(msg->ern.tag));
+#else
 			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+#endif
 			fq->cb.ern(p, fq, &swapped_msg);
 		}
 		num++;
@@ -907,7 +975,11 @@ static inline unsigned int __poll_portal_fast(struct qman_portal *p,
 				clear_vdqcr(p, fq);
 		} else {
 			/* SDQCR: context_b points to the FQ */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(dq->contextB);
+#else
 			fq = (void *)(uintptr_t)dq->contextB;
+#endif
 			/* Now let the callback do its stuff */
 			res = fq->cb.dqrr(p, fq, dq);
 			/*
@@ -1119,7 +1191,12 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
 	fq->flags = flags;
 	fq->state = qman_fq_state_oos;
 	fq->cgr_groupid = 0;
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	if (unlikely(find_empty_fq_table_entry(&fq->key, fq))) {
+		pr_info("Find empty table entry failed\n");
+		return -ENOMEM;
+	}
+#endif
 	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
 		return 0;
 	/* Everything else is AS_IS support */
@@ -1193,7 +1270,9 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
 	case qman_fq_state_oos:
 		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
 			qman_release_fqid(fq->fqid);
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		clear_fq_table_entry(fq->key);
+#endif
 		return;
 	default:
 		break;
@@ -1258,7 +1337,11 @@ int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
 		dma_addr_t phys_fq;
 
 		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		mcc->initfq.fqd.context_b = fq->key;
+#else
 		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+#endif
 		/*
 		 *  and the physical address - NB, if the user wasn't trying to
 		 * set CONTEXTA, clear the stashing settings.
@@ -1419,7 +1502,11 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
 			msg.verb = QM_MR_VERB_FQRNI;
 			msg.fq.fqs = mcr->alterfq.fqs;
 			msg.fq.fqid = fq->fqid;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			msg.fq.contextB = fq->key;
+#else
 			msg.fq.contextB = (u32)(uintptr_t)fq;
+#endif
 			fq->cb.fqs(p, fq, &msg);
 		}
 	} else if (res == QM_MCR_RESULT_PENDING) {
@@ -1861,7 +1948,11 @@ static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
 					QM_EQCR_DCA_PARK : 0) |
 			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
 	eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	eq->tag = cpu_to_be32(fq->key);
+#else
 	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 	eq->fd = *fd;
 	cpu_to_hw_fd(&eq->fd);
 	return eq;
@@ -1907,7 +1998,11 @@ int qman_enqueue_multi(struct qman_fq *fq,
 	/* try to send as many frames as possible */
 	while (eqcr->available && frames_to_send--) {
 		eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		eq->tag = cpu_to_be32(fq->key);
+#else
 		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 		eq->fd.opaque_addr = fd->opaque_addr;
 		eq->fd.addr = cpu_to_be40(fd->addr);
 		eq->fd.status = cpu_to_be32(fd->status);
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index a7faf17..5c535dd 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -279,5 +279,10 @@ int qman_global_init(void)
 	else
 		qman_clk = be32_to_cpu(*clk);
 
-	return ret;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	ret = qman_setup_fq_lookup_table(CONFIG_FSL_QMAN_FQ_LOOKUP_MAX);
+	if (ret)
+		return ret;
+#endif
+	return 0;
 }
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index 4ae2ea5..e9826c2 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -44,6 +44,10 @@
 #include "dpaa_sys.h"
 #include <fsl_qman.h>
 
+#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
+#error "_ARM64 requires _FSL_QMAN_FQ_LOOKUP"
+#endif
+
 /* Congestion Groups */
 /*
  * This wrapper represents a bit-array for the state of the 256 QMan congestion
@@ -197,6 +201,13 @@ void qm_set_liodns(struct qm_portal_config *pcfg);
 int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
 		       struct qm_mcr_cgrtestwrite *result);
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+/* If the fq object pointer is wider than the context_b field (as on 64-bit
+ * systems), then a lookup table is required.
+ */
+int qman_setup_fq_lookup_table(size_t num_entries);
+#endif
+
 /*   QMan s/w corenet portal, low-level i/face	 */
 
 /*
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 9735e1d..f66cb93 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -46,6 +46,15 @@ extern "C" {
 
 #include <dpaa_rbtree.h>
 
+/* FQ lookups (turn this on for 64-bit user-space) */
+#if (__WORDSIZE == 64)
+#define CONFIG_FSL_QMAN_FQ_LOOKUP
+/* if FQ lookups are supported, this controls the number of initialised,
+ * s/w-consumed FQs that can be supported at any one time.
+ */
+#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
+#endif
+
 /* Last updated for v00.800 of the BG */
 
 /* Hardware constants */
@@ -1245,6 +1254,9 @@ struct qman_fq {
 	enum qman_fq_state state;
 	int cgr_groupid;
 	struct rb_node node;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	u32 key;
+#endif
 };
 
 /*
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 14/40] bus/dpaa: add BMan hardware interfaces
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (12 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 13/40] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
                     ` (27 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal
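
Add the BMan portal driver: the low-level portal register map and the
RCR/management-command accessors (bman.h), the portal-affine pool API -
bman_new_pool(), bman_acquire(), bman_release() and friends - in bman.c,
portal create/destroy wiring in bman_driver.c, and the BPID range
allocator entry points in dpaa_alloc.c.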

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman.c        | 394 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman.h        | 550 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_driver.c |  12 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |  16 +
 5 files changed, 973 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 24dfa13..6d0c5ee 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -71,6 +71,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman.c \
 	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman.c b/drivers/bus/dpaa/base/qbman/bman.c
new file mode 100644
index 0000000..a0bea62
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.c
@@ -0,0 +1,394 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "bman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define RCR_THRESH	2	/* reread h/w CI when running out of space */
+#define IRQNAME		"BMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "BMan portal %d" */
+
+struct bman_portal {
+	struct bm_portal p;
+	/* 2-element array. pools[0] is mask, pools[1] is snapshot. */
+	struct bman_depletion *pools;
+	int thresh_set;
+	unsigned long irq_sources;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct bm_portal_config *config;
+	char irqname[MAX_IRQNAME];
+};
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static DEFINE_PER_CPU(struct bman_portal, bman_affine_portal);
+
+static inline struct bman_portal *get_affine_portal(void)
+{
+	return &get_cpu_var(bman_affine_portal);
+}
+
+/*
+ * This object type refers to a pool; it isn't *the* pool. There may be
+ * more than one such object per BMan buffer pool, eg. if different users of
+ * the pool are operating via different portals.
+ */
+struct bman_pool {
+	struct bman_pool_params params;
+	/* Used for hash-table admin when using depletion notifications. */
+	struct bman_portal *portal;
+	struct bman_pool *next;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	atomic_t in_use;
+#endif
+};
+
+static inline
+struct bman_portal *bman_create_portal(struct bman_portal *portal,
+				       const struct bm_portal_config *c)
+{
+	struct bm_portal *p;
+	const struct bman_depletion *pools = &c->mask;
+	int ret;
+	u8 bpid = 0;
+
+	p = &portal->p;
+	/*
+	 * prep the low-level portal struct with the mapped addresses from the
+	 * config, everything that follows depends on it and "config" is more
+	 * for (de)reference...
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	if (bm_rcr_init(p, bm_rcr_pvb, bm_rcr_cce)) {
+		pr_err("Bman RCR initialisation failed\n");
+		return NULL;
+	}
+	if (bm_mc_init(p)) {
+		pr_err("Bman MC initialisation failed\n");
+		goto fail_mc;
+	}
+	portal->pools = kmalloc(2 * sizeof(*pools), GFP_KERNEL);
+	if (!portal->pools)
+		goto fail_pools;
+	portal->pools[0] = *pools;
+	bman_depletion_init(portal->pools + 1);
+	while (bpid < bman_pool_max) {
+		/*
+		 * Default to all BPIDs disabled, we enable as required at
+		 * run-time.
+		 */
+		bm_isr_bscn_mask(p, bpid, 0);
+		bpid++;
+	}
+	portal->slowpoll = 0;
+	/* Write-to-clear any stale interrupt status bits */
+	bm_isr_disable_write(p, 0xffffffff);
+	portal->irq_sources = 0;
+	bm_isr_enable_write(p, portal->irq_sources);
+	bm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, NULL, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need RCR to be empty before continuing */
+	ret = bm_rcr_get_fill(p);
+	if (ret) {
+		pr_err("Bman RCR unclean\n");
+		goto fail_rcr_empty;
+	}
+	/* Success */
+	portal->config = c;
+
+	bm_isr_disable_write(p, 0);
+	bm_isr_uninhibit(p);
+	return portal;
+fail_rcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->pools);
+fail_pools:
+	bm_mc_finish(p);
+fail_mc:
+	bm_rcr_finish(p);
+	return NULL;
+}
+
+struct bman_portal *
+bman_create_affine_portal(const struct bm_portal_config *c)
+{
+	struct bman_portal *portal = get_affine_portal();
+
+	/* This function is called from a context that is already affine to a
+	 * CPU; in other words, it is not migratable to other CPUs.
+	 */
+	portal = bman_create_portal(portal, c);
+	if (portal) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		spin_unlock(&affine_mask_lock);
+	}
+	return portal;
+}
+
+static inline
+void bman_destroy_portal(struct bman_portal *bm)
+{
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bm_rcr_cce_update(&bm->p);
+	bm_rcr_cce_update(&bm->p);
+
+	free_irq(pcfg->irq, bm);
+
+	kfree(bm->pools);
+	bm_mc_finish(&bm->p);
+	bm_rcr_finish(&bm->p);
+	bm->config = NULL;
+}
+
+const struct
+bm_portal_config *bman_destroy_affine_portal(void)
+{
+	struct bman_portal *bm = get_affine_portal();
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bman_destroy_portal(bm);
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(pcfg->cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int
+bman_get_portal_index(void)
+{
+	struct bman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+static const u32 zero_thresholds[4] = {0, 0, 0, 0};
+
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params)
+{
+	struct bman_pool *pool = NULL;
+	u32 bpid;
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID) {
+		int ret = bman_alloc_bpid(&bpid);
+
+		if (ret)
+			return NULL;
+	} else {
+		if (params->bpid >= bman_pool_max)
+			return NULL;
+		bpid = params->bpid;
+	}
+	if (params->flags & BMAN_POOL_FLAG_THRESH) {
+		int ret = bm_pool_set(bpid, params->thresholds);
+
+		if (ret)
+			goto err;
+	}
+
+	pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		goto err;
+	pool->params = *params;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	atomic_set(&pool->in_use, 1);
+#endif
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		pool->params.bpid = bpid;
+
+	return pool;
+err:
+	if (params->flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(bpid, zero_thresholds);
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(bpid);
+	kfree(pool);
+
+	return NULL;
+}
+
+void bman_free_pool(struct bman_pool *pool)
+{
+	if (pool->params.flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(pool->params.bpid, zero_thresholds);
+	if (pool->params.flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(pool->params.bpid);
+	kfree(pool);
+}
+
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool)
+{
+	return &pool->params;
+}
+
+static void update_rcr_ci(struct bman_portal *p, int avail)
+{
+	if (avail)
+		bm_rcr_cce_prefetch(&p->p);
+	else
+		bm_rcr_cce_update(&p->p);
+}
+
+#define BMAN_BUF_MASK 0x0000fffffffffffful
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p;
+	struct bm_rcr_entry *r;
+	u32 i = num - 1;
+	u8 avail;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_NO_RELEASE)
+		return -EINVAL;
+#endif
+
+	p = get_affine_portal();
+	avail = bm_rcr_get_avail(&p->p);
+	if (avail < 2)
+		update_rcr_ci(p, avail);
+	r = bm_rcr_start(&p->p);
+	if (unlikely(!r))
+		return -EBUSY;
+
+	/*
+	 * Write the first entry by hand: its upper bytes overlap the verb
+	 * (valid-bit) and BPID fields, so copying the caller's value there
+	 * verbatim can trigger badness with the valid-bit. The remaining
+	 * entries can be copied as-is.
+	 */
+	r->bufs[0].opaque =
+		cpu_to_be64(((u64)pool->params.bpid << 48) |
+			    (bufs[0].opaque & BMAN_BUF_MASK));
+	if (i) {
+		for (i = 1; i < num; i++)
+			r->bufs[i].opaque =
+				cpu_to_be64(bufs[i].opaque & BMAN_BUF_MASK);
+	}
+
+	bm_rcr_pvb_commit(&p->p, BM_RCR_VERB_CMD_BPID_SINGLE |
+			  (num & BM_RCR_VERB_BUFCOUNT_MASK));
+
+	return 0;
+}
+
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_command *mcc;
+	struct bm_mc_result *mcr;
+	int ret, i;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_ONLY_RELEASE)
+		return -EINVAL;
+#endif
+
+	mcc = bm_mc_start(&p->p);
+	mcc->acquire.bpid = pool->params.bpid;
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_ACQUIRE |
+			(num & BM_MCC_VERB_ACQUIRE_BUFCOUNT));
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	ret = mcr->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT;
+	if (bufs) {
+		for (i = 0; i < num; i++)
+			bufs[i].opaque =
+				be64_to_cpu(mcr->acquire.bufs[i].opaque);
+	}
+	if (ret != num)
+		ret = -ENOMEM;
+	return ret;
+}
+
+int bman_query_pools(struct bm_pool_state *state)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_result *mcr;
+
+	bm_mc_start(&p->p);
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_QUERY);
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & BM_MCR_VERB_CMD_MASK) ==
+		    BM_MCR_VERB_CMD_QUERY);
+	*state = mcr->query;
+	state->as.state.state[0] = be32_to_cpu(state->as.state.state[0]);
+	state->as.state.state[1] = be32_to_cpu(state->as.state.state[1]);
+	state->ds.state.state[0] = be32_to_cpu(state->ds.state.state[0]);
+	state->ds.state.state[1] = be32_to_cpu(state->ds.state.state[1]);
+	return 0;
+}
+
+u32 bman_query_free_buffers(struct bman_pool *pool)
+{
+	return bm_pool_free_buffers(pool->params.bpid);
+}
+
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds)
+{
+	u32 bpid;
+
+	bpid = bman_get_params(pool)->bpid;
+
+	return bm_pool_set(bpid, thresholds);
+}
+
+int bman_shutdown_pool(u32 bpid)
+{
+	struct bman_portal *p = get_affine_portal();
+	return bm_shutdown_pool(&p->p, bpid);
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman.h b/drivers/bus/dpaa/base/qbman/bman.h
new file mode 100644
index 0000000..2af30e9
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.h
@@ -0,0 +1,550 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_H
+#define __BMAN_H
+
+#include "bman_priv.h"
+
+/* Cache-inhibited register offsets */
+#define BM_REG_RCR_PI_CINH	0x3000
+#define BM_REG_RCR_CI_CINH	0x3100
+#define BM_REG_RCR_ITR		0x3200
+#define BM_REG_CFG		0x3300
+#define BM_REG_SCN(n)		(0x3400 + ((n) << 6))
+#define BM_REG_ISR		0x3e00
+#define BM_REG_IIR              0x3ec0
+
+/* Cache-enabled register offsets */
+#define BM_CL_CR		0x0000
+#define BM_CL_RR0		0x0100
+#define BM_CL_RR1		0x0140
+#define BM_CL_RCR		0x1000
+#define BM_CL_RCR_PI_CENA	0x3000
+#define BM_CL_RCR_CI_CENA	0x3100
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrade performance. Hence the use of the __raw_*() interfaces, which
+ * simply ensure that the compiler treats the portal registers as volatile (i.e.
+ * non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __bm_in(bm, o)		be32_to_cpu(__raw_readl((bm)->ci + (o)))
+#define __bm_out(bm, o, val)    __raw_writel(cpu_to_be32(val), \
+					     (bm)->ci + (o))
+#define bm_in(reg)		__bm_in(&portal->addr, BM_REG_##reg)
+#define bm_out(reg, val)	__bm_out(&portal->addr, BM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __bm_cl_touch_ro(bm, o) dcbt_ro((bm)->ce + (o))
+#define __bm_cl_touch_rw(bm, o) dcbt_rw((bm)->ce + (o))
+#define __bm_cl_in(bm, o)	be32_to_cpu(__raw_readl((bm)->ce + (o)))
+#define __bm_cl_out(bm, o, val) \
+	do { \
+		u32 *__tmpclout = (bm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __bm_cl_invalidate(bm, o) dccivac((bm)->ce + (o))
+#define bm_cl_touch_ro(reg) __bm_cl_touch_ro(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_touch_rw(reg) __bm_cl_touch_rw(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_in(reg)	    __bm_cl_in(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_out(reg, val) __bm_cl_out(&portal->addr, BM_CL_##reg##_CENA, val)
+#define bm_cl_invalidate(reg)\
+	__bm_cl_invalidate(&portal->addr, BM_CL_##reg##_CENA)
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 bm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
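+
+/* Example: with ringsize == 8, first == 6 and last == 2 the span wraps, so
+ * bm_cyc_diff() returns 8 + 2 - 6 == 4.
+ */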
+
+/* Portal modes.
+ *   Enum types;
+ *     pmode == production mode
+ *     cmode == consumption mode,
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate;
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ */
+enum bm_rcr_pmode {		/* matches BCSP_CFG::RPM */
+	bm_rcr_pci = 0,		/* PI index, cache-inhibited */
+	bm_rcr_pce = 1,		/* PI index, cache-enabled */
+	bm_rcr_pvb = 2		/* valid-bit */
+};
+
+enum bm_rcr_cmode {		/* s/w-only */
+	bm_rcr_cci,		/* CI index, cache-inhibited */
+	bm_rcr_cce		/* CI index, cache-enabled */
+};
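+
+/* Note: this driver initialises the RCR with bm_rcr_pvb production and
+ * bm_rcr_cce consumption; see the bm_rcr_init() call in
+ * bman_create_portal() (bman.c).
+ */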
+
+/* --- Portal structures --- */
+
+#define BM_RCR_SIZE		8
+
+struct bm_rcr {
+	struct bm_rcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	u32 busy;
+	enum bm_rcr_pmode pmode;
+	enum bm_rcr_cmode cmode;
+#endif
+};
+
+struct bm_mc {
+	struct bm_mc_command *cr;
+	struct bm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum {
+		/* Can only be _mc_start()ed */
+		mc_idle,
+		/* Can only be _mc_commit()ed or _mc_abort()ed */
+		mc_user,
+		/* Can only be _mc_retry()ed */
+		mc_hw
+	} state;
+#endif
+};
+
+struct bm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct bm_portal {
+	struct bm_addr addr;
+	struct bm_rcr rcr;
+	struct bm_mc mc;
+	struct bm_portal_config config;
+} ____cacheline_aligned;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define RCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(BM_RCR_SIZE << 6)))
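+
+/* Each RCR entry is 64 bytes, so the 8-entry ring spans 512 bytes; the
+ * "carry bit" cleared above is (BM_RCR_SIZE << 6) == 0x200, the bit a
+ * cursor increment sets when it steps off the end of the ring.
+ */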
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 RCR_PTR2IDX(struct bm_rcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (BM_RCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void RCR_INC(struct bm_rcr *rcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates
+	 * fast code with essentially no branching overheads. We increment to
+	 * the next RCR pointer and handle overflow and 'vbit'.
+	 */
+	struct bm_rcr_entry *partial = rcr->cursor + 1;
+
+	rcr->cursor = RCR_CARRYCLEAR(partial);
+	if (partial != rcr->cursor)
+		rcr->vbit ^= BM_RCR_VERB_VBIT;
+}
+
+static inline int bm_rcr_init(struct bm_portal *portal, enum bm_rcr_pmode pmode,
+			      __maybe_unused enum bm_rcr_cmode cmode)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct bm_rcr *rcr = &portal->rcr;
+	u32 cfg;
+	u8 pi;
+
+	rcr->ring = portal->addr.ce + BM_CL_RCR;
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	rcr->cursor = rcr->ring + pi;
+	rcr->vbit = (bm_in(RCR_PI_CINH) & BM_RCR_SIZE) ?  BM_RCR_VERB_VBIT : 0;
+	rcr->available = BM_RCR_SIZE - 1
+		- bm_cyc_diff(BM_RCR_SIZE, rcr->ci, pi);
+	rcr->ithresh = bm_in(RCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+	rcr->pmode = pmode;
+	rcr->cmode = cmode;
+#endif
+	cfg = (bm_in(CFG) & 0xffffffe0) | (pmode & 0x3); /* BCSP_CFG::RPM */
+	bm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void bm_rcr_finish(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	u8 ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	DPAA_ASSERT(!rcr->busy);
+	if (pi != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("loosing uncommitted RCR entries\n");
+	if (ci != rcr->ci)
+		pr_crit("missing existing RCR completions\n");
+	if (rcr->ci != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("RCR destroyed unquiesced\n");
+}
+
+static inline struct bm_rcr_entry *bm_rcr_start(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(!rcr->busy);
+	if (!rcr->available)
+		return NULL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 1;
+#endif
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline struct bm_rcr_entry *bm_rcr_pend_and_next(
+					struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode != bm_rcr_pvb);
+	if (rcr->available == 1)
+		return NULL;
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcr->cursor);
+	RCR_INC(rcr);
+	rcr->available--;
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_pci_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pci);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	hwsync();
+	bm_out(RCR_PI_CINH, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	bm_cl_invalidate(RCR_PI);
+	bm_cl_touch_rw(RCR_PI);
+}
+
+static inline void bm_rcr_pce_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	lwsync();
+	bm_cl_out(RCR_PI, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pvb_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	struct bm_rcr_entry *rcursor;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pvb);
+	lwsync();
+	rcursor = rcr->cursor;
+	rcursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcursor);
+	RCR_INC(rcr);
+	rcr->available--;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline u8 bm_rcr_cci_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cci);
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline void bm_rcr_cce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	bm_cl_touch_ro(RCR_CI);
+}
+
+static inline u8 bm_rcr_cce_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	rcr->ci = bm_cl_in(RCR_CI) & (BM_RCR_SIZE - 1);
+	bm_cl_invalidate(RCR_CI);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline u8 bm_rcr_get_ithresh(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->ithresh;
+}
+
+static inline void bm_rcr_set_ithresh(struct bm_portal *portal, u8 ithresh)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	rcr->ithresh = ithresh;
+	bm_out(RCR_ITR, ithresh);
+}
+
+static inline u8 bm_rcr_get_avail(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->available;
+}
+
+static inline u8 bm_rcr_get_fill(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return BM_RCR_SIZE - 1 - rcr->available;
+}
+
+/* --- Management command API --- */
+
+static inline int bm_mc_init(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + BM_CL_CR;
+	mc->rr = portal->addr.ce + BM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			BM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? BM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+	return 0;
+}
+
+static inline void bm_mc_finish(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (mc->state != mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct bm_mc_command *bm_mc_start(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void bm_mc_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_user);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+}
+
+static inline void bm_mc_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_hw;
+#endif
+}
+
+static inline struct bm_mc_result *bm_mc_result(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= BM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+	return rr;
+}
+
+#define SCN_REG(bpid) BM_REG_SCN((bpid) / 32)
+#define SCN_BIT(bpid) (0x80000000 >> (bpid & 31))
+static inline void bm_isr_bscn_mask(struct bm_portal *portal, u8 bpid,
+				    int enable)
+{
+	u32 val;
+
+	DPAA_ASSERT(bpid < bman_pool_max);
+	/* BM_REG_SCN(0) covers bpid 0..31, BM_REG_SCN(1) covers bpid 32..63 */
+	val = __bm_in(&portal->addr, SCN_REG(bpid));
+	if (enable)
+		val |= SCN_BIT(bpid);
+	else
+		val &= ~SCN_BIT(bpid);
+	__bm_out(&portal->addr, SCN_REG(bpid), val);
+}
+
+static inline u32 __bm_isr_read(struct bm_portal *portal, enum bm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 6));
+#else
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __bm_isr_write(struct bm_portal *portal, enum bm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 6), val);
+#else
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 2), val);
+#endif
+}
+
+/* Buffer Pool Cleanup */
+static inline int bm_shutdown_pool(struct bm_portal *p, u32 bpid)
+{
+	struct bm_mc_command *bm_cmd;
+	struct bm_mc_result *bm_res;
+
+	int aq_count = 0;
+	bool stop = false;
+
+	while (!stop) {
+		/* Acquire buffers until empty */
+		bm_cmd = bm_mc_start(p);
+		bm_cmd->acquire.bpid = bpid;
+		bm_mc_commit(p, BM_MCC_VERB_CMD_ACQUIRE |  1);
+		while (!(bm_res = bm_mc_result(p)))
+			cpu_relax();
+		if (!(bm_res->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT)) {
+			/* Pool is empty */
+			stop = true;
+		} else {
+			++aq_count;
+		}
+	}
+	return 0;
+}
+
+#endif /* __BMAN_H */
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
index fb3c50e..28f2cf2 100644
--- a/drivers/bus/dpaa/base/qbman/bman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -65,6 +65,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct bman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -111,6 +112,14 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 	/* Use the IRQ FD as a unique IRQ number */
 	pcfg.irq = fd;
 
+	portal = bman_create_affine_portal(&pcfg);
+	if (!portal) {
+		pr_err("Bman portal initialisation failed (%d)",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	/* Set the IRQ number */
 	irq_map.type = dpaa_portal_bman;
 	irq_map.portal_cinh = map.addr.cinh;
@@ -120,10 +129,13 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 
 static int fsl_bman_portal_finish(void)
 {
+	__maybe_unused const struct bm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = bman_destroy_affine_portal();
+	BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
index 690576a..35dba7f 100644
--- a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -41,6 +41,22 @@
 #include "dpaa_sys.h"
 #include <process.h>
 #include <fsl_qman.h>
+#include <fsl_bman.h>
+
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_bpid, result, count, align, partial);
+}
+
+void bman_release_bpid_range(u32 bpid, u32 count)
+{
+	process_release(dpaa_id_bpid, bpid, count);
+}
+
+int bman_reserve_bpid_range(u32 bpid, u32 count)
+{
+	return process_reserve(dpaa_id_bpid, bpid, count);
+}
 
 int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
 {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 15/40] bus/dpaa: add fman flow control threshold setting
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (13 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
                     ` (26 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal
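
Add fman_if_get_fc_threshold() and fman_if_set_fc_threshold() to read
and program buffer-pool-depletion based flow control on a FMan
interface. Setting the threshold enables depletion-triggered pause
frames (FMAN_ENABLE_BPOOL_DEPLETION in fmbm_mpd) and programs the
low/high watermarks of the backing BMan pool via
bm_pool_set_hw_threshold().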

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman_hw.c | 28 ++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_fman.h  |  7 +++++++
 2 files changed, 35 insertions(+)

diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 77908ec..7618fc1 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -37,6 +37,7 @@
  */
 #include <fsl_fman.h>
 #include <fsl_fman_crc64.h>
+#include <fsl_bman.h>
 
 /* Instantiate the global variable that the inline CRC64 implementation (in
  * <fsl_fman.h>) depends on.
@@ -437,6 +438,33 @@ fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
 }
 
 int
+fman_if_get_fc_threshold(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	return in_be32(fmbm_mpd);
+}
+
+int
+fman_if_set_fc_threshold(struct fman_if *fm_if, u32 high_water,
+			 u32 low_water, u32 bpid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	out_be32(fmbm_mpd, FMAN_ENABLE_BPOOL_DEPLETION);
+	return bm_pool_set_hw_threshold(bpid, low_water, high_water);
+
+}
+
+int
 fman_if_get_fc_quanta(struct fman_if *fm_if)
 {
 	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 0aff22c..b94bc56 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -120,6 +120,13 @@ void fman_if_loopback_disable(struct fman_if *);
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
+/* Get Flow Control threshold parameters on specific interface */
+int fman_if_get_fc_threshold(struct fman_if *fm_if);
+
+/* Enable and Set Flow Control threshold parameters on specific interface */
+int fman_if_set_fc_threshold(struct fman_if *fm_if,
+			u32 high_water, u32 low_water, u32 bpid);
+
 /* Get Flow Control pause quanta on specific interface */
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (14 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
                     ` (25 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Now that the QBMan (QMan, BMan) and FMan drivers are available, this
patch integrates them with the DPAA Bus driver, which uses them to scan
for devices and to invoke the probe callbacks registered by PMDs.
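
A minimal usage sketch (illustrative only; dpaa_example_lcore_fn is a
hypothetical caller, not part of this patch): each I/O thread opens its
portal once before issuing QMan/BMan operations:

    static int dpaa_example_lcore_fn(void *arg __rte_unused)
    {
            /* Affine a QMan + BMan portal to this lcore (idempotent). */
            if (rte_dpaa_portal_init(NULL))
                    return -1;
            /* ... qman_*() / bman_*() fast-path calls go here ... */
            return 0;
    }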

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c               | 239 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  39 +++++
 drivers/bus/dpaa/rte_dpaa_bus.h           |   6 +
 3 files changed, 284 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index c530c83..2e16a09 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -64,9 +64,21 @@
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 int dpaa_logtype_bus;
 
 struct rte_dpaa_bus rte_dpaa_bus;
+struct netcfg_info *dpaa_netcfg;
+
+/* pthread key holding each thread's portal data; created during bus scan */
+pthread_key_t dpaa_portal_key;
+
+RTE_DEFINE_PER_LCORE(bool, _dpaa_io);
 
 static inline void
 dpaa_add_to_device_list(struct rte_dpaa_device *dev)
@@ -79,11 +91,238 @@ dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
 {
 	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
 }
+
+static void dpaa_clean_device_list(void);
+
+static int
+dpaa_create_device_list(void)
+{
+	int i;
+	int ret;
+	struct rte_dpaa_device *dev;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+
+	/* Creating Ethernet Devices */
+	for (i = 0; i < dpaa_netcfg->num_ethports; i++) {
+		dev = rte_zmalloc(NULL, sizeof(struct rte_dpaa_device),
+				  RTE_CACHE_LINE_SIZE);
+		if (!dev) {
+			DPAA_BUS_LOG(ERR, "Failed to allocate ETH devices");
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+
+		cfg = &dpaa_netcfg->port_cfg[i];
+		fman_intf = cfg->fman_if;
+
+		/* Device identifiers */
+		dev->id.fman_id = fman_intf->fman_idx + 1;
+		dev->id.mac_id = fman_intf->mac_idx;
+		dev->id.device_type = FSL_DPAA_ETH;
+		dev->id.dev_id = i;
+
+		/* Create device name */
+		memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		snprintf(dev->name, RTE_ETH_NAME_MAX_LEN, "fm%d-mac%d",
+			 fman_intf->fman_idx + 1, fman_intf->mac_idx);
+		DPAA_BUS_LOG(DEBUG, "Device added: %s", dev->name);
+		dev->device.name = dev->name;
+
+		dpaa_add_to_device_list(dev);
+	}
+
+	rte_dpaa_bus.device_count = i;
+
+	return 0;
+
+cleanup:
+	dpaa_clean_device_list();
+	return ret;
+}
+
+static void
+dpaa_clean_device_list(void)
+{
+	struct rte_dpaa_device *dev = NULL;
+	struct rte_dpaa_device *tdev = NULL;
+
+	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+		rte_free(dev);
+		dev = NULL;
+	}
+}
+
+/** XXX move this function into a separate file */
+static int
+_dpaa_portal_init(void *arg)
+{
+	cpu_set_t cpuset;
+	pthread_t id;
+	uint32_t cpu = rte_lcore_id();
+	int ret;
+	struct dpaa_portal *dpaa_io_portal;
+
+	BUS_INIT_FUNC_TRACE();
+
+	if ((uintptr_t)arg == 1 || cpu == LCORE_ID_ANY)
+		cpu = rte_get_master_lcore();
+	else if (cpu >= RTE_MAX_LCORE)
+		/* the core id is not supported */
+		return -1;
+
+	/* Set CPU affinity for this thread */
+	CPU_ZERO(&cpuset);
+	CPU_SET(cpu, &cpuset);
+	id = pthread_self();
+	ret = pthread_setaffinity_np(id, sizeof(cpu_set_t), &cpuset);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "pthread_setaffinity_np failed on "
+			"core :%d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	/* Initialise bman thread portals */
+	ret = bman_thread_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "bman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "BMAN thread initialized");
+
+	/* Initialise qman thread portals */
+	ret = qman_thread_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "bman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		bman_thread_finish();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
+
+	dpaa_io_portal = rte_malloc(NULL, sizeof(struct dpaa_portal),
+				    RTE_CACHE_LINE_SIZE);
+	if (!dpaa_io_portal) {
+		DPAA_BUS_LOG(ERR, "Unable to allocate memory");
+		bman_thread_finish();
+		qman_thread_finish();
+		return -ENOMEM;
+	}
+
+	dpaa_io_portal->qman_idx = qman_get_portal_index();
+	dpaa_io_portal->bman_idx = bman_get_portal_index();
+	dpaa_io_portal->tid = syscall(SYS_gettid);
+
+	ret = pthread_setspecific(dpaa_portal_key, (void *)dpaa_io_portal);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "pthread_setspecific failed on "
+			    "core %d with ret: %d", cpu, ret);
+		dpaa_portal_finish(NULL);
+
+		return ret;
+	}
+
+	RTE_PER_LCORE(_dpaa_io) = true;
+
+	DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
+
+	return 0;
+}
+
+/*
+ * rte_dpaa_portal_init - Wrapper over _dpaa_portal_init with a thread-level
+ * check, so that a portal is initialized at most once per thread.
+ */
+int
+rte_dpaa_portal_init(void *arg)
+{
+	if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
+		return _dpaa_portal_init(arg);
+
+	return 0;
+}
+
+void
+dpaa_portal_finish(void *arg)
+{
+	struct dpaa_portal *dpaa_io_portal = (struct dpaa_portal *)arg;
+
+	if (!dpaa_io_portal) {
+		DPAA_BUS_LOG(DEBUG, "Portal already cleaned");
+		return;
+	}
+
+	bman_thread_finish();
+	qman_thread_finish();
+
+	pthread_setspecific(dpaa_portal_key, NULL);
+
+	rte_free(dpaa_io_portal);
+	dpaa_io_portal = NULL;
+
+	RTE_PER_LCORE(_dpaa_io) = false;
+}
+
 static int
 rte_dpaa_bus_scan(void)
 {
+	int ret;
+
 	BUS_INIT_FUNC_TRACE();
 
+	/* Load the device-tree driver */
+	ret = of_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "of_init failed with ret: %d", ret);
+		return -1;
+	}
+
+	/* Get the interface configurations from device-tree */
+	dpaa_netcfg = netcfg_acquire();
+	if (!dpaa_netcfg) {
+		DPAA_BUS_LOG(ERR, "netcfg_acquire failed");
+		return -EINVAL;
+	}
+
+	if (!dpaa_netcfg->num_ethports) {
+		DPAA_BUS_LOG(INFO, "no network interfaces available");
+		/* This is not an error */
+		return 0;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "Bus: Address of netcfg=%p, Ethports=%d",
+		     dpaa_netcfg, dpaa_netcfg->num_ethports);
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dump_netcfg(dpaa_netcfg);
+#endif
+
+	DPAA_BUS_LOG(DEBUG, "Number of devices = %d\n",
+		     dpaa_netcfg->num_ethports);
+	ret = dpaa_create_device_list();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to create device list. (%d)", ret);
+		return ret;
+	}
+
+	/* Create the key, supplying a destructor that is invoked when a
+	 * portal-affined thread exits.
+	 */
+	ret = pthread_key_create(&dpaa_portal_key, dpaa_portal_finish);
+	if (ret) {
+		DPAA_BUS_LOG(DEBUG, "Unable to create pthread key. (%d)", ret);
+		dpaa_clean_device_list();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "dpaa_portal_key=%u, ret=%d\n",
+		    (unsigned int)dpaa_portal_key, ret);
+
 	return 0;
 }
 
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 8c1ea65..3d4dc88 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,7 +1,46 @@
 DPDK_17.08 {
 	global:
 
+	bman_acquire;
+	bman_free_pool;
+	bman_get_params;
+	bman_new_pool;
+	bman_release;
+	dpaa_netcfg;
+	fman_ccsr_map_fd;
+	fman_dealloc_bufs_mask_hi;
+	fman_dealloc_bufs_mask_lo;
+	fman_if_disable_rx;
+	fman_if_enable_rx;
+	fman_if_discard_rx_errors;
+	fman_if_get_fc_threshold;
+	fman_if_get_fc_quanta;
+	fman_if_promiscuous_disable;
+	fman_if_promiscuous_enable;
+	fman_if_reset_mcast_filter_table;
+	fman_if_set_bp;
+	fman_if_set_fc_threshold;
+	fman_if_set_fc_quanta;
+	fman_if_set_fdoff;
+	fman_if_set_ic_params;
+	fman_if_set_maxfrm;
+	fman_if_set_mcast_filter_table;
+	fman_if_stats_get;
+	fman_if_stats_reset;
+	fm_mac_add_exact_match_mac_addr;
+	fm_mac_rem_exact_match_mac_addr;
+	netcfg_acquire;
+	netcfg_release;
+	qman_create_fq;
+	qman_dequeue;
+	qman_dqrr_consume;
+	qman_enqueue_multi;
+	qman_init_fq;
+	qman_set_vdq;
+	qman_reserve_fqid_range;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
+	rte_dpaa_mem_ptov;
+	rte_dpaa_portal_init;
 
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index d1de6d3..5c795f9 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -36,6 +36,12 @@
 #include <rte_bus.h>
 #include <rte_mempool.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 #define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
 
 #define DEV_TO_DPAA_DEVICE(ptr)	\
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 17/40] doc: add NXP DPAA PMD documentation
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (15 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
                     ` (24 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                       |   2 +
 doc/guides/nics/dpaa.rst          | 367 ++++++++++++++++++++++++++++++++++++++
 doc/guides/nics/features/dpaa.ini |   8 +
 doc/guides/nics/index.rst         |   1 +
 4 files changed, 378 insertions(+)
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index 620d57a..839423b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -392,6 +392,8 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: doc/guides/nics/dpaa.rst
+F: doc/guides/nics/features/dpaa.ini
 
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
new file mode 100644
index 0000000..9ea1c69
--- /dev/null
+++ b/doc/guides/nics/dpaa.rst
@@ -0,0 +1,367 @@
+..  BSD LICENSE
+    Copyright 2017 NXP.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+DPAA Poll Mode Driver
+=====================
+
+The DPAA NIC PMD (**librte_pmd_dpaa**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP DPAA** SoC family.
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+NXP DPAA (Data Path Acceleration Architecture - Gen 1)
+------------------------------------------------------
+
+This section provides an overview of the NXP DPAA architecture
+and how it is integrated into the DPDK.
+
+Contents summary
+
+- DPAA overview
+- DPAA driver architecture overview
+
+.. _dpaa_overview:
+
+DPAA Overview
+~~~~~~~~~~~~~
+
+Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.
+
+The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
+components on specific QorIQ series multicore processors. This architecture
+provides the infrastructure to support simplified sharing of networking
+interfaces and accelerators by multiple CPU cores, and the accelerators
+themselves.
+
+DPAA includes:
+
+- Cores
+- Network and packet I/O
+- Hardware offload accelerators
+- Infrastructure required to facilitate flow of packets between the components above
+
+Infrastructure components are:
+
+- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
+  It allows CPUs and other accelerators connected to the SoC datapath to
+  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
+  data exchange among CPUs and datapath accelerators.
+- The Buffer Manager (BMan) is a hardware buffer pool management block that
+  allows software and accelerators on the datapath to acquire and release
+  buffers in order to build frames.
+
+Hardware accelerators are:
+
+- SEC - Cryptographic accelerator
+- PME - Pattern matching engine
+
+The Network and packet I/O component:
+
+- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
+  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
+  distribution and policing. Each frame can be parsed and classified, and the
+  results may be attached to the frame. This metadata can be used to select
+  the particular QMan queue to which the packet is forwarded.
+
+
+DPAA DPDK - Poll Mode Driver Overview
+-------------------------------------
+
+This section provides an overview of the drivers for DPAA:
+
+* Bus driver and associated "DPAA infrastructure" drivers
+* Functional object drivers (such as Ethernet).
+
+A brief description of each driver is provided in the layout below, as well
+as in the following sections.
+
+.. code-block:: console
+
+                                       +------------+
+                                       | DPDK DPAA  |
+                                       |    PMD     |
+                                       +-----+------+
+                                             |
+                                       +-----+------+       +---------------+
+                                       :  Ethernet  :.......| DPDK DPAA     |
+                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
+                   .                   +---+---+----+       |  (BMAN)       |
+                  .                        ^   |            +-----+---------+
+                 .                         |   |<enqueue,         .
+                .                          |   | dequeue>         .
+               .                           |   |                  .
+              .                        +---+---V----+             .
+             .      . . . . . . . . . .: Portal drv :             .
+            .      .                   :            :             .
+           .      .                    +-----+------+             .
+          .      .                     :   QMAN     :             .
+         .      .                      :  Driver    :             .
+    +----+------+-------+              +-----+------+             .
+    |   DPDK DPAA Bus   |                    |                    .
+    |   driver          |....................|.....................
+    |   /bus/dpaa       |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|========================
+                                            PHY
+    =========================================|========================
+
+In the above representation, solid lines represent components which interface
+with DPDK RTE Framework and dotted lines represent DPAA internal components.
+
+DPAA Bus driver
+~~~~~~~~~~~~~~~
+
+The DPAA bus driver is a ``rte_bus`` driver which scans the platform-like
+bus for DPAA devices.
+Key functions include:
+
+- Scanning and parsing the various objects and adding them to their respective
+  device list.
+- Performing probe for available drivers against each scanned device
+- Creating necessary ethernet instance before passing control to the PMD
+
+DPAA NIC Driver (PMD)
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA PMD is a traditional DPDK PMD which provides the necessary
+interface between the RTE framework and DPAA internal components/drivers.
+
+- Once devices have been identified by DPAA Bus, each device is associated
+  with the PMD
+- The PMD is responsible for implementing the necessary glue layer between
+  the RTE APIs and the lower level QMan and FMan blocks.
+  The Ethernet driver is bound to a FMAN port and implements the interfaces
+  needed to connect the DPAA network interface to the network stack.
+  Each FMAN Port corresponds to a DPDK network interface.
+
+
+Features
+^^^^^^^^
+
+The features of the DPAA PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+
+DPAA Mempool Driver
+~~~~~~~~~~~~~~~~~~~
+
+DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
+Manager.
+
+- Using the standard RTE mempool operations API, the mempool driver
+  interfaces with RTE to service mempool creation, deletion, buffer
+  allocation and deallocation requests; see the example below.
+- Each FMAN instance has a BMan pool attached to it during initialization.
+  Each Tx frame can be automatically released by hardware, if allocated from
+  this pool.
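+
+For example, an application can explicitly select this driver for a given
+mempool (a minimal sketch, assuming ``mp`` was created with
+``rte_mempool_create_empty`` and not yet populated):
+
+.. code-block:: c
+
+   /* Bind the mempool to the DPAA (BMan) handler before populating it */
+   rte_mempool_set_ops_byname(mp, "dpaa", NULL);
+   rte_mempool_populate_default(mp);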
+
+
+Supported DPAA SoCs
+-------------------
+
+- LS1043A/LS1023A
+- LS1046A/LS1026A
+
+Prerequisites
+-------------
+
+The following prerequisites apply when executing the DPAA PMD on a DPAA
+compatible board:
+
+1. **ARM 64 Tool Chain**
+
+   For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
+
+2. **Linux Kernel**
+
+   It can be obtained from `NXP's Github hosting <https://github.com/qoriq-open-source/linux>`_.
+
+3. **Root file system**
+
+   Any *aarch64* supporting filesystem can be used. For example,
+   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
+   from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+4. **FMC Tool**
+
+   Before any DPDK application can be executed, the Frame Manager
+   Configuration Tool (FMC) needs to be executed to configure the queues.
+   This includes the queue state, RSS and other policies.
+   This tool can be obtained from `NXP (Freescale) Public Git Repository <http://git.freescale.com/git/cgit.cgi/ppc/sdk/fmc.git>`_.
+   This tool needs configuration files which are available in the
+   :ref:`DPDK Extra Scripts <extra_scripts>`, described below.
+
+Alternatively, the DPAA PMD can also be executed using images provided as
+part of the NXP SDK. The SDK includes all the above prerequisites necessary
+to bring up a DPAA board.
+
+The following dependencies are not part of DPDK and must be installed
+separately:
+
+- **NXP Linux SDK**
+
+  The NXP Linux software development kit (SDK) includes support for the
+  family of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+
+.. _extra_scripts:
+
+- **DPDK Extra Scripts**
+
+  DPAA based resources can be configured easily with the help of ready-made
+  scripts provided in the DPDK Extras repository.
+
+  `DPDK Extras Scripts <https://github.com/qoriq-open-source/dpdk-extras>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- Supported architectures:  **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
+  to setup the basic DPDK environment.
+
+.. note::
+
+   Some parts of the dpaa bus code (the qbman and fman library routines)
+   are dual licensed (BSD & GPLv2).
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``n``)
+
+  Toggle compilation of the ``librte_bus_dpaa`` driver. It is enabled by
+  default only in the defconfig_arm64-dpaa-* config.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``n``)
+
+  Toggle compilation of the ``librte_pmd_dpaa`` driver. It is enabled by
+  default only in the defconfig_arm64-dpaa-* config.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT`` (default ``n``)
+
+  Toggle display of initialization related messages.
+
+- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (default ``dpaa``)
+
+  This is not a DPAA specific configuration - it is a generic RTE config.
+  For optimal performance and hardware utilization, it is expected that the
+  DPAA mempool driver is used for mempools. For that, this configuration
+  needs to be enabled (set to ``dpaa``).
+
+Environment Variables
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA drivers use the following environment variables to configure
+their state during application initialization:
+
+- ``DPAA_NUM_RX_QUEUES`` (default 1)
+
+  This defines the number of Rx queues configured for an application, per
+  port. On Rx, the hardware distributes incoming packets across this many
+  queues. If the application is configured to use fewer queues than
+  configured here, packet loss may result (because of the distribution).
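+
+  For example, to configure four Rx queues per port before starting an
+  application (illustrative):
+
+  .. code-block:: console
+
+     export DPAA_NUM_RX_QUEUES=4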
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+#. Running testpmd:
+
+   Follow instructions available in the document
+   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+   to run testpmd.
+
+   Example output:
+
+   .. code-block:: console
+
+      ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
+        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
+
+      .....
+      EAL: Registered [pci] bus.
+      EAL: Registered [dpaa] bus.
+      EAL: Detected 4 lcore(s)
+      .....
+      EAL: dpaa: Bus scan completed
+      .....
+      Configuring Port 0 (socket 0)
+      Port 0: 00:00:00:00:00:01
+      Configuring Port 1 (socket 0)
+      Port 1: 00:00:00:00:00:02
+      .....
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Port 1 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+Limitations
+-----------
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+DPAA drivers for DPDK can only work on NXP SoCs, as listed in
+``Supported DPAA SoCs``.
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA SoC family supports a maximum frame size of 10240 bytes (jumbo
+frames). This value is fixed and cannot be changed. So, even when the
+``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a
+value lower than 10240, frames up to 10240 bytes can still reach the host
+interface.
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
new file mode 100644
index 0000000..9e8befc
--- /dev/null
+++ b/doc/guides/nics/features/dpaa.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'dpaa' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+ARMv8                = Y
+Usage doc            = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 240d082..6fc8eaf 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -42,6 +42,7 @@ Network Interface Controller Drivers
     bnx2x
     bnxt
     cxgbe
+    dpaa
     dpaa2
     e1000em
     ena
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 18/40] bus/dpaa: add DPAA mempool logging macros
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (16 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 19/40] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
                     ` (23 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c      |  5 +++++
 drivers/bus/dpaa/rte_dpaa_logs.h | 28 ++++++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 2e16a09..417d0d7 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -71,6 +71,7 @@
 #include <netcfg.h>
 
 int dpaa_logtype_bus;
+int dpaa_logtype_mempool;
 
 struct rte_dpaa_bus rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
@@ -423,4 +424,8 @@ dpaa_init_log(void)
 	dpaa_logtype_bus = rte_log_register("bus.dpaa");
 	if (dpaa_logtype_bus >= 0)
 		rte_log_set_level(dpaa_logtype_bus, RTE_LOG_NOTICE);
+
+	dpaa_logtype_mempool = rte_log_register("mempool.dpaa");
+	if (dpaa_logtype_mempool >= 0)
+		rte_log_set_level(dpaa_logtype_mempool, RTE_LOG_NOTICE);
 }
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index 54eda23..18e586e 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -37,6 +37,7 @@
 #include <rte_log.h>
 
 extern int dpaa_logtype_bus;
+extern int dpaa_logtype_mempool;
 
 #define DPAA_BUS_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -61,4 +62,31 @@ extern int dpaa_logtype_bus;
 #define DPAA_BUS_ERR(fmt, args...) \
 	DPAA_BUS_LOG(ERR, fmt, ## args)
 
+/* Mempool related logs */
+
+#define DPAA_MEMPOOL_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_mempool, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define MEMPOOL_INIT_FUNC_TRACE() DPAA_MEMPOOL_LOG(DEBUG, " >>")
+
+/* DEBUG and WARN are conditional to compiled configuration */
+#ifdef RTE_LIBRTE_DPAA_MEMPOOL_DEBUG
+#define DPAA_MEMPOOL_DEBUG(fmt, args...) \
+	DPAA_MEMPOOL_LOG(DEBUG, fmt, ## args)
+
+#define DPAA_MEMPOOL_WARN(fmt, args...) \
+	DPAA_MEMPOOL_LOG(WARN, fmt, ## args)
+#else /* RTE_LIBRTE_DPAA_MEMPOOL_DEBUG */
+#define DPAA_MEMPOOL_DEBUG(fmt, args...) do { } while (0)
+#define DPAA_MEMPOOL_WARN(fmt, args...)  do { } while (0)
+#endif /* RTE_LIBRTE_DPAA_MEMPOOL_DEBUG */
+
+/* ERR and INFO are unconditional */
+#define DPAA_MEMPOOL_ERR(fmt, args...) \
+	DPAA_MEMPOOL_LOG(ERR, fmt, ## args)
+
+#define DPAA_MEMPOOL_INFO(fmt, args...) \
+	DPAA_MEMPOOL_LOG(INFO, fmt, ## args)
+
 #endif /* _DPAA_LOGS_H_ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 19/40] mempool/dpaa: add support for NXP DPAA Mempool
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (17 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 20/40] drivers: enable compilation of DPAA Mempool driver Shreyansh Jain
                     ` (22 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This mempool driver works with the DPAA BMan hardware block. This block
manages data buffers in memory and provides an efficient interface to
other hardware and software components for buffer requests.

This patch adds support for BMan. Compilation is enabled in subsequent
patches.
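
For illustration, the driver's use of the BMan APIs (introduced earlier
in this series) follows roughly the sketch below; phys_addr is a
placeholder for a buffer's physical address:

    struct bman_pool_params params = {
            .flags = BMAN_POOL_FLAG_DYNAMIC_BPID,
    };
    struct bman_pool *bp = bman_new_pool(&params);
    struct bm_buffer buf;

    /* release (free) one buffer, identified by its physical address */
    bm_buffer_set64(&buf, phys_addr);
    bman_release(bp, &buf, 1, 0);

    /* acquire (allocate) it back; returns the number of buffers */
    int ret = bman_acquire(bp, &buf, 1, 0);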

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/mempool/dpaa/Makefile                     |  65 ++++++
 drivers/mempool/dpaa/dpaa_mempool.c               | 264 ++++++++++++++++++++++
 drivers/mempool/dpaa/dpaa_mempool.h               |  78 +++++++
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |   6 +
 4 files changed, 413 insertions(+)
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map

diff --git a/drivers/mempool/dpaa/Makefile b/drivers/mempool/dpaa/Makefile
new file mode 100644
index 0000000..45a1f7b
--- /dev/null
+++ b/drivers/mempool/dpaa/Makefile
@@ -0,0 +1,65 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 NXP. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
+CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
+
+# versioning export map
+EXPORT_MAP := rte_mempool_dpaa_version.map
+
+# Library version
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa_mempool.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
new file mode 100644
index 0000000..3b96cbd
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -0,0 +1,264 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <dpaa_mempool.h>
+
+struct pool_info_entry rte_dpaa_pool_table[DPAA_MAX_BPOOLS];
+
+static void
+dpaa_buf_free(struct pool_info_entry *bp_info, uint64_t addr)
+{
+	struct bm_buffer buf;
+	int ret;
+
+	DPAA_MEMPOOL_DEBUG("Free 0x%lx to bpid: %d", addr, bp_info->bpid);
+
+	bm_buffer_set64(&buf, addr);
+retry:
+	ret = bman_release(bp_info->bp, &buf, 1, 0);
+	if (ret) {
+		DPAA_MEMPOOL_DEBUG("BMAN busy. Retrying...");
+		cpu_spin(CPU_SPIN_BACKOFF_CYCLES);
+		goto retry;
+	}
+}
+
+static int
+dpaa_mbuf_create_pool(struct rte_mempool *mp)
+{
+	struct bman_pool *bp;
+	struct bm_buffer bufs[8];
+	uint8_t bpid;
+	int num_bufs = 0, ret = 0;
+	struct bman_pool_params params = {
+		.flags = BMAN_POOL_FLAG_DYNAMIC_BPID
+	};
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	bp = bman_new_pool(&params);
+	if (!bp) {
+		DPAA_MEMPOOL_ERR("bman_new_pool() failed");
+		return -ENODEV;
+	}
+	bpid = bman_get_params(bp)->bpid;
+
+	/* Drain the pool of anything already in it. */
+	do {
+		/* Acquire is all-or-nothing, so we drain in 8s,
+		 * then in 1s for the remainder.
+		 */
+		if (ret != 1)
+			ret = bman_acquire(bp, bufs, 8, 0);
+		if (ret < 8)
+			ret = bman_acquire(bp, bufs, 1, 0);
+		if (ret > 0)
+			num_bufs += ret;
+	} while (ret > 0);
+	if (num_bufs)
+		DPAA_MEMPOOL_WARN("drained %u bufs from BPID %d",
+				  num_bufs, bpid);
+
+	rte_dpaa_pool_table[bpid].mp = mp;
+	rte_dpaa_pool_table[bpid].bpid = bpid;
+	rte_dpaa_pool_table[bpid].size = mp->elt_size;
+	rte_dpaa_pool_table[bpid].bp = bp;
+	rte_dpaa_pool_table[bpid].meta_data_size =
+		sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp);
+	rte_dpaa_pool_table[bpid].dpaa_ops_index = mp->ops_index;
+	mp->pool_data = (void *)&rte_dpaa_pool_table[bpid];
+
+	DPAA_MEMPOOL_INFO("BMAN pool created for bpid =%d", bpid);
+	return 0;
+}
+
+static void
+dpaa_mbuf_free_pool(struct rte_mempool *mp)
+{
+	struct pool_info_entry *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	bman_free_pool(bp_info->bp);
+	DPAA_MEMPOOL_INFO("BMAN pool freed for bpid =%d", bp_info->bpid);
+}
+
+static int
+dpaa_mbuf_free_bulk(struct rte_mempool *pool,
+		    void *const *obj_table,
+		    unsigned int n)
+{
+	struct pool_info_entry *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+	int ret;
+	unsigned int i = 0;
+
+	DPAA_MEMPOOL_DEBUG(" Request to free %d buffers in bpid = %d",
+			   n, bp_info->bpid);
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
+				 ret);
+		return 0;
+	}
+
+	while (i < n) {
+		dpaa_buf_free(bp_info, (uint64_t)rte_mempool_virt2phy(pool,
+			      obj_table[i]) + bp_info->meta_data_size);
+		i = i + 1;
+	}
+
+	DPAA_MEMPOOL_DEBUG(" freed %d buffers in bpid =%d", n, bp_info->bpid);
+
+	return 0;
+}
+
+static int
+dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
+		     void **obj_table,
+		     unsigned int count)
+{
+	struct rte_mbuf **m = (struct rte_mbuf **)obj_table;
+	struct bm_buffer bufs[DPAA_MBUF_MAX_ACQ_REL];
+	struct pool_info_entry *bp_info;
+	void *bufaddr;
+	int i, ret;
+	unsigned int n = 0;
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+
+	DPAA_MEMPOOL_DEBUG(" Request to alloc %d buffers in bpid = %d",
+		    count, bp_info->bpid);
+
+	if (unlikely(count >= (RTE_MEMPOOL_CACHE_MAX_SIZE * 2))) {
+		DPAA_MEMPOOL_ERR("Unable to allocate requested (%u) buffers",
+				 count);
+		return -1;
+	}
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
+				 ret);
+		return 0;
+	}
+
+	while (n < count) {
+		/* Acquire is all-or-nothing, so we acquire in groups of
+		 * up to DPAA_MBUF_MAX_ACQ_REL (8), then the remainder.
+		 */
+		if ((count - n) > DPAA_MBUF_MAX_ACQ_REL) {
+			ret = bman_acquire(bp_info->bp, bufs,
+					   DPAA_MBUF_MAX_ACQ_REL, 0);
+		} else {
+			ret = bman_acquire(bp_info->bp, bufs, count - n, 0);
+		}
+		/* In case fewer buffers are available in the pool than
+		 * requested, bman_acquire returns 0.
+		 */
+		if (ret <= 0) {
+			DPAA_MEMPOOL_DEBUG("Buffer acquire failed with"
+					   " err code: %d", ret);
+			/* The API expect the exact number of requested
+			 * buffers. Releasing all buffers allocated
+			 */
+			dpaa_mbuf_free_bulk(pool, obj_table, n);
+			return -1;
+		}
+		/* assigning mbuf from the acquired objects */
+		for (i = 0; (i < ret) && bufs[i].addr; i++) {
+			/* TODO-errata - observed that bufs may be null,
+			 * i.e. the first buffer is valid but the remaining
+			 * 6 buffers may be null.
+			 */
+			bufaddr = (void *)rte_dpaa_mem_ptov(bufs[i].addr);
+			m[n] = (struct rte_mbuf *)((char *)bufaddr
+						- bp_info->meta_data_size);
+			rte_mbuf_refcnt_set(m[n], 1);
+			DPAA_MEMPOOL_DEBUG("Acquired %p address %p from BMAN",
+					   (void *)bufaddr, (void *)m[n]);
+			n++;
+		}
+	}
+
+	DPAA_MEMPOOL_DEBUG(" allocated %d buffers from bpid =%d",
+			   n, bp_info->bpid);
+	return 0;
+}
+
+static unsigned int
+dpaa_mbuf_get_count(const struct rte_mempool *mp)
+{
+	struct pool_info_entry *bp_info;
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	return bman_query_free_buffers(bp_info->bp);
+}
+
+
+struct rte_mempool_ops dpaa_mpool_ops = {
+	.name = "dpaa",
+	.alloc = dpaa_mbuf_create_pool,
+	.free = dpaa_mbuf_free_pool,
+	.enqueue = dpaa_mbuf_free_bulk,
+	.dequeue = dpaa_mbuf_alloc_bulk,
+	.get_count = dpaa_mbuf_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
diff --git a/drivers/mempool/dpaa/dpaa_mempool.h b/drivers/mempool/dpaa/dpaa_mempool.h
new file mode 100644
index 0000000..b097667
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_MEMPOOL_H__
+#define __DPAA_MEMPOOL_H__
+
+/* System headers */
+#include <stdio.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <unistd.h>
+
+#include <rte_mempool.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <fsl_usd.h>
+#include <fsl_bman.h>
+
+#define CPU_SPIN_BACKOFF_CYCLES               512
+
+/* total number of bpools on SoC */
+#define DPAA_MAX_BPOOLS	256
+
+/* Maximum release/acquire from BMAN */
+#define DPAA_MBUF_MAX_ACQ_REL  8
+
+struct pool_info_entry {
+	struct rte_mempool *mp;
+	struct bman_pool *bp;
+	uint32_t bpid;
+	uint32_t size;
+	uint32_t meta_data_size;
+	int32_t dpaa_ops_index;
+};
+
+#define DPAA_MEMPOOL_TO_POOL_INFO(__mp) \
+	((struct pool_info_entry *)(__mp)->pool_data)
+
+#define DPAA_MEMPOOL_TO_BPID(__mp) \
+	(((struct pool_info_entry *)(__mp)->pool_data)->bpid)
+
+extern struct pool_info_entry rte_dpaa_pool_table[DPAA_MAX_BPOOLS];
+
+#define DPAA_BPID_TO_POOL_INFO(__bpid) (&rte_dpaa_pool_table[__bpid])
+
+#endif
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
new file mode 100644
index 0000000..5be8f56
--- /dev/null
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -0,0 +1,6 @@
+DPDK_17.08 {
+	global:
+
+	rte_dpaa_pool_table;
+
+};
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 20/40] drivers: enable compilation of DPAA Mempool driver
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (18 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 19/40] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 21/40] maintainers: claim ownership " Shreyansh Jain
                     ` (21 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch enables compilation of the DPAA mempool driver and adds the
necessary configuration to the DPAA specific config file.
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=dpaa is also set to allow
applications to use the DPAA mempool as the default.
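
With this in place, the driver can be built using, for example,
the following commands (illustrative, for the make-based build system):

    make config T=arm64-dpaa-linuxapp-gcc
    make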

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       | 1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 5 +++++
 drivers/mempool/Makefile                 | 2 ++
 3 files changed, 8 insertions(+)

diff --git a/config/common_base b/config/common_base
index 8ea4967..e99b0a7 100644
--- a/config/common_base
+++ b/config/common_base
@@ -304,6 +304,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index cf603f3..b34c203 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -44,3 +44,8 @@ CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
+
+# NXP DPAA Mempool
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index 8fd40e1..595f717 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -33,6 +33,8 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mempool librte_ring
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 21/40] maintainers: claim ownership of DPAA Mempool driver
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (19 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 20/40] drivers: enable compilation of DPAA Mempool driver Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 22/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
                     ` (20 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 839423b..b71f423 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -392,6 +392,7 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: drivers/mempool/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 22/40] bus/dpaa: add DPAA PMD logging macros
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (20 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 21/40] maintainers: claim ownership " Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 23/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
                     ` (19 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c      |  5 +++++
 drivers/bus/dpaa/rte_dpaa_logs.h | 37 +++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 417d0d7..9eccf2a 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -72,6 +72,7 @@
 
 int dpaa_logtype_bus;
 int dpaa_logtype_mempool;
+int dpaa_logtype_pmd;
 
 struct rte_dpaa_bus rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
@@ -428,4 +429,8 @@ dpaa_init_log(void)
 	dpaa_logtype_mempool = rte_log_register("mempool.dpaa");
 	if (dpaa_logtype_mempool >= 0)
 		rte_log_set_level(dpaa_logtype_mempool, RTE_LOG_NOTICE);
+
+	dpaa_logtype_pmd = rte_log_register("pmd.dpaa");
+	if (dpaa_logtype_pmd >= 0)
+		rte_log_set_level(dpaa_logtype_pmd, RTE_LOG_NOTICE);
 }
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index 18e586e..42d8bbe 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -38,6 +38,7 @@
 
 extern int dpaa_logtype_bus;
 extern int dpaa_logtype_mempool;
+extern int dpaa_logtype_pmd;
 
 #define DPAA_BUS_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -89,4 +90,40 @@ extern int dpaa_logtype_mempool;
 #define DPAA_MEMPOOL_INFO(fmt, args...) \
 	DPAA_MEMPOOL_LOG(INFO, fmt, ## args)
 
+/* PMD related logs */
+
+#define DPAA_PMD_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_pmd, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() DPAA_PMD_LOG(DEBUG, " >>")
+
+/* DEBUG and WARN are conditional to compiled configuration */
+#ifdef RTE_LIBRTE_DPAA_PMD_DEBUG
+#define DPAA_PMD_DEBUG(fmt, args...) \
+	DPAA_PMD_LOG(DEBUG, fmt, ## args)
+
+#define DPAA_PMD_WARN(fmt, args...) \
+	DPAA_PMD_LOG(WARN, fmt, ## args)
+#else /* RTE_LIBRTE_DPAA_PMD_DEBUG */
+#define DPAA_PMD_DEBUG(fmt, args...) do { } while (0)
+#define DPAA_PMD_WARN(fmt, args...)  do { } while (0)
+#endif /* RTE_LIBRTE_DPAA_PMD_DEBUG */
+
+/* ERR and INFO are unconditional */
+#define DPAA_PMD_ERR(fmt, args...) \
+	DPAA_PMD_LOG(ERR, fmt, ## args)
+
+#define DPAA_PMD_INFO(fmt, args...) \
+	DPAA_PMD_LOG(INFO, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define DPAA_RX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+#define DPAA_TX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+#define DPAA_DP_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
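+
+/*
+ * Illustrative usage of these macros (no call sites exist yet at this
+ * point in the series; ret/nb_rx are placeholders):
+ *
+ *   DPAA_PMD_ERR("probe failed with ret: %d", ret);
+ *   DPAA_DP_LOG(DEBUG, "received %d frames", nb_rx);
+ */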
+
+
 #endif /* _DPAA_LOGS_H_ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 23/40] net/dpaa: add NXP DPAA PMD driver skeleton
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (21 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 22/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 24/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
                     ` (18 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This adds a PMD skeleton which is probed after the bus device scan. It
currently fails to identify the device.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                               |   1 +
 drivers/net/dpaa/Makefile                 |  64 ++++++++
 drivers/net/dpaa/dpaa_ethdev.c            | 260 ++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h            | 128 +++++++++++++++
 drivers/net/dpaa/rte_pmd_dpaa_version.map |   4 +
 5 files changed, 457 insertions(+)
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index b71f423..dde0a18 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -393,6 +393,7 @@ M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
 F: drivers/mempool/dpaa/
+F: drivers/net/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
new file mode 100644
index 0000000..8fcde26
--- /dev/null
+++ b/drivers/net/dpaa/Makefile
@@ -0,0 +1,64 @@
+#   BSD LICENSE
+#
+#   Copyright 2017 NXP.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_SDK_DPAA=$(RTE_SDK)/drivers/net/dpaa
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+CFLAGS += -I$(RTE_SDK_DPAA)/
+CFLAGS += -I$(RTE_SDK_DPAA)/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
+
+EXPORT_MAP := rte_pmd_dpaa_version.map
+
+LIBABIVER := 1
+
+# Interfaces with DPDK
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
new file mode 100644
index 0000000..40f6765
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -0,0 +1,260 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <dpaa_ethdev.h>
+
+/* Keep track of whether QMAN and BMAN have been globally initialized */
+static int is_global_init;
+
+static int
+dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+
+static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	/* Change tx callback to the real one (NULL for now; the real
+	 * callback is installed by a later patch in this series)
+	 */
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = NULL;
+}
+
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static struct eth_dev_ops dpaa_devops = {
+	.dev_configure		  = dpaa_eth_dev_configure,
+	.dev_start		  = dpaa_eth_dev_start,
+	.dev_stop		  = dpaa_eth_dev_stop,
+	.dev_close		  = dpaa_eth_dev_close,
+};
+
+/* Initialise a network interface */
+static int
+dpaa_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int dev_id;
+	struct rte_dpaa_device *dpaa_device;
+	struct dpaa_if *dpaa_intf;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
+	dev_id = dpaa_device->id.dev_id;
+	dpaa_intf = eth_dev->data->dev_private;
+
+	dpaa_intf->name = dpaa_device->name;
+
+	dpaa_intf->ifid = dev_id;
+
+	eth_dev->dev_ops = &dpaa_devops;
+
+	return 0;
+}
+
+static int
+dpaa_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (!dpaa_intf) {
+		DPAA_PMD_WARN("Already closed or not started");
+		return -1;
+	}
+
+	dpaa_eth_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static int
+rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv,
+			   struct rte_dpaa_device *dpaa_dev)
+{
+	int diag;
+	int ret;
+	struct rte_eth_dev *eth_dev;
+	char ethdev_name[RTE_ETH_NAME_MAX_LEN];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!is_global_init) {
+		/* One time load of Qman/Bman drivers */
+		ret = qman_global_init();
+		if (ret) {
+			DPAA_PMD_ERR("QMAN initialization failed: %d",
+				     ret);
+			return ret;
+		}
+		ret = bman_global_init();
+		if (ret) {
+			DPAA_PMD_ERR("BMAN initialization failed: %d",
+				     ret);
+			return ret;
+		}
+
+		is_global_init = 1;
+	}
+
+	snprintf(ethdev_name, RTE_ETH_NAME_MAX_LEN - 1, "%s", dpaa_dev->name);
+
+	ret = rte_dpaa_portal_init((void *)1);
+	if (ret) {
+		DPAA_PMD_ERR("Unable to initialize portal");
+		return ret;
+	}
+
+	/* In case of secondary process, the device is already configured
+	 * and no further action is required, except portal initialization
+	 * and verifying secondary attachment to port name.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		eth_dev = rte_eth_dev_attach_secondary(ethdev_name);
+		if (!eth_dev)
+			return -ENOMEM;
+		return 0;
+	}
+
+	eth_dev = rte_eth_dev_allocate(ethdev_name);
+	if (eth_dev == NULL)
+		return -ENOMEM;
+
+	eth_dev->data->dev_private = rte_zmalloc(
+					"ethdev private structure",
+					sizeof(struct dpaa_if),
+					RTE_CACHE_LINE_SIZE);
+	if (!eth_dev->data->dev_private) {
+		DPAA_PMD_ERR("Cannot allocate memzone for port data");
+		rte_eth_dev_release_port(eth_dev);
+		return -ENOMEM;
+	}
+
+	eth_dev->device = &dpaa_dev->device;
+	eth_dev->device->driver = &dpaa_drv->driver;
+	dpaa_dev->eth_dev = eth_dev;
+
+	/* Invoke PMD device initialization function */
+	diag = dpaa_dev_init(eth_dev);
+	if (diag == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+	return diag;
+}
+
+static int
+rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
+{
+	struct rte_eth_dev *eth_dev;
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev = dpaa_dev->eth_dev;
+	dpaa_dev_uninit(eth_dev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+
+	return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_pmd = {
+	.drv_type = FSL_DPAA_ETH,
+	.probe = rte_dpaa_probe,
+	.remove = rte_dpaa_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
new file mode 100644
index 0000000..8aeaebf
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -0,0 +1,128 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2014-2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_ETHDEV_H__
+#define __DPAA_ETHDEV_H__
+
+/* System headers */
+#include <stdbool.h>
+#include <rte_ethdev.h>
+
+#include <rte_dpaa_logs.h>
+
+
+#define DPAA_MBUF_HW_ANNOTATION		64
+#define DPAA_FD_PTA_SIZE		64
+
+#if (DPAA_MBUF_HW_ANNOTATION + DPAA_FD_PTA_SIZE) > RTE_PKTMBUF_HEADROOM
+#error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM"
+#endif
+
+/* We will reuse the HEADROOM for annotation in RX */
+#define DPAA_HW_BUF_RESERVE	0
+#define DPAA_PACKET_LAYOUT_ALIGN	64
+
+/* Alignment to use for cpu-local structs to avoid coherency problems. */
+#define MAX_CACHELINE			64
+
+#define DPAA_MIN_RX_BUF_SIZE 512
+#define DPAA_MAX_RX_PKT_LEN  10240
+
+/* RX queue tail drop threshold,
+ * currently set to 32 KB of packets per queue.
+ */
+#define CONG_THRESHOLD_RX_Q  (32 * 1024)
+
+/* Max MAC filters for memac (8), including the primary MAC address */
+#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
+
+/* Maximum number of slots available in the TX ring */
+#define MAX_TX_RING_SLOTS	8
+
+/* PCD frame queues */
+#define DPAA_PCD_FQID_START		0x400
+#define DPAA_PCD_FQID_MULTIPLIER	0x100
+#define DPAA_DEFAULT_NUM_PCD_QUEUES	1
+
+#define DPAA_IF_TX_PRIORITY		3
+#define DPAA_IF_RX_PRIORITY		4
+#define DPAA_IF_DEBUG_PRIORITY		7
+
+#define DPAA_IF_RX_ANNOTATION_STASH	1
+#define DPAA_IF_RX_DATA_STASH		1
+#define DPAA_IF_RX_CONTEXT_STASH		0
+
+/* Each "debug" FQ is represented by one of these */
+#define DPAA_DEBUG_FQ_RX_ERROR   0
+#define DPAA_DEBUG_FQ_TX_ERROR   1
+
+#define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
+		PKT_TX_IP_CKSUM |                \
+		PKT_TX_TCP_CKSUM |               \
+		PKT_TX_UDP_CKSUM)
+
+
+/* DPAA Frame descriptor macros */
+
+#define DPAA_FD_CMD_FCO			0x80000000
+/**< Frame queue Context Override */
+#define DPAA_FD_CMD_RPD			0x40000000
+/**< Read Prepended Data */
+#define DPAA_FD_CMD_UPD			0x20000000
+/**< Update Prepended Data */
+#define DPAA_FD_CMD_DTC			0x10000000
+/**< Do IP/TCP/UDP Checksum */
+#define DPAA_FD_CMD_DCL4C		0x10000000
+/**< Didn't calculate L4 Checksum */
+#define DPAA_FD_CMD_CFQ			0x00ffffff
+/**< Confirmation Frame Queue */
+
+/* Configuration variables exported from DPAA bus */
+extern struct netcfg_info *dpaa_netcfg;
+
+/* Each network interface is represented by one of these */
+struct dpaa_if {
+	int valid;
+	char *name;
+	const struct fm_eth_port_cfg *cfg;
+	struct qman_fq *rx_queues;
+	struct qman_fq *tx_queues;
+	struct qman_fq debug_queues[2];
+	uint16_t nb_rx_queues;
+	uint16_t nb_tx_queues;
+	uint32_t ifid;
+	struct fman_if *fif;
+	struct pool_info_entry *bp_info;
+	struct rte_eth_fc_conf *fc_conf;
+};
+
+#endif
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
new file mode 100644
index 0000000..b6d2840
--- /dev/null
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -0,0 +1,4 @@
+DPDK_17.08 {
+
+	local: *;
+};
-- 
2.7.4


* [PATCH v2 24/40] config: enable NXP DPAA PMD compilation
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (22 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 23/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 25/40] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
                     ` (17 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       |  1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 11 +++++++++++
 drivers/net/Makefile                     |  2 ++
 mk/rte.app.mk                            |  5 +++++
 4 files changed, 19 insertions(+)

diff --git a/config/common_base b/config/common_base
index e99b0a7..41d88cf 100644
--- a/config/common_base
+++ b/config/common_base
@@ -305,6 +305,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
+CONFIG_RTE_LIBRTE_DPAA_PMD=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index b34c203..87c0d26 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -39,6 +39,13 @@ CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
 CONFIG_RTE_LIBRTE_VHOST_NUMA=n
 CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
 
+#
+# Compile Environment Abstraction Layer
+#
+CONFIG_RTE_MAX_LCORE=4
+CONFIG_RTE_MAX_NUMA_NODES=1
+CONFIG_RTE_PKTMBUF_HEADROOM=128
+
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
@@ -49,3 +56,7 @@ CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL_DEBUG=n
 CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
+
+# Compile software NXP DPAA PMD
+CONFIG_RTE_LIBRTE_DPAA_PMD=y
+CONFIG_RTE_LIBRTE_DPAA_PMD_DEBUG=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 35ed813..efd1a34 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -51,6 +51,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DEPDIRS-bonding = $(core-libs) librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
 DEPDIRS-cxgbe = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += e1000
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 4fe22d1..711ebed 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -115,6 +115,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD)      += -lrte_pmd_cxgbe
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_pmd_dpaa
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_pmd_dpaa2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_E1000_PMD)      += -lrte_pmd_e1000
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ENA_PMD)        += -lrte_pmd_ena
@@ -178,6 +179,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_bus_fslmc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_mempool_dpaa2
 endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+endif
+
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
 
 _LDLIBS-y += --no-whole-archive
-- 
2.7.4


* [PATCH v2 25/40] net/dpaa: add support for Tx and Rx queue setup
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (23 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 24/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 26/40] net/dpaa: add support for MTU update Shreyansh Jain
                     ` (16 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
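Note: below is a minimal usage sketch (not part of this patch) showing
how an application exercises these queue ops through the generic ethdev
API. port_id, mp and the descriptor count are illustrative assumptions;
nb_desc, socket_id and the per-queue configs are ignored by this PMD,
and mp is expected to be a BMan-offloaded ("dpaa") mempool.

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Sketch: configure and start a port with one Rx and one Tx queue */
    static int dpaa_port_setup(uint8_t port_id, struct rte_mempool *mp)
    {
            struct rte_eth_conf conf = { 0 };
            int ret;

            ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
            if (ret < 0)
                    return ret;
            ret = rte_eth_rx_queue_setup(port_id, 0, 128, 0, NULL, mp);
            if (ret < 0)
                    return ret;
            ret = rte_eth_tx_queue_setup(port_id, 0, 128, 0, NULL);
            if (ret < 0)
                    return ret;
            return rte_eth_dev_start(port_id);
    }
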
 drivers/net/dpaa/Makefile      |   4 +
 drivers/net/dpaa/dpaa_ethdev.c | 331 ++++++++++++++++++++++++++++++++++++++++-
 drivers/net/dpaa/dpaa_ethdev.h |   6 +
 drivers/net/dpaa/dpaa_rxtx.c   | 313 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h   |  61 ++++++++
 mk/rte.app.mk                  |   1 +
 6 files changed, 713 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h

diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index 8fcde26..06b63fc 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -44,11 +44,13 @@ else
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 endif
+CFLAGS += -Wno-pointer-arith
 
 CFLAGS += -I$(RTE_SDK_DPAA)/
 CFLAGS += -I$(RTE_SDK_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
 
@@ -58,7 +60,9 @@ LIBABIVER := 1
 
 # Interfaces with DPDK
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
 
 LDLIBS += -lrte_bus_dpaa
+LDLIBS += -lrte_mempool_dpaa
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 40f6765..372a4b9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -62,8 +62,15 @@
 
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
+#include <dpaa_mempool.h>
 
 #include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <fsl_fman.h>
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -79,20 +86,104 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
 	PMD_INIT_FUNC_TRACE();
 
 	/* Change tx callback to the real one */
-	dev->tx_pkt_burst = NULL;
+	dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	fman_if_enable_rx(dpaa_intf->fif);
 
 	return 0;
 }
 
 static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = NULL;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_disable_rx(dpaa_intf->fif);
+	dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
+}
+
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+}
+
+static
+int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+			    unsigned int socket_id __rte_unused,
+			    const struct rte_eth_rxconf *rx_conf __rte_unused,
+			    struct rte_mempool *mp)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	DPAA_PMD_INFO("Rx queue setup for queue index: %d", queue_idx);
+
+	if (!dpaa_intf->bp_info || dpaa_intf->bp_info->mp != mp) {
+		struct fman_if_ic_params icp;
+		uint32_t fd_offset;
+		uint32_t bp_size;
+
+		if (!mp->pool_data) {
+			DPAA_PMD_ERR("Not an offloaded buffer pool!");
+			return -1;
+		}
+		dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+		memset(&icp, 0, sizeof(icp));
+		/* Set internal context (IC) parameters to their defaults */
+		icp.iciof = DEFAULT_ICIOF;
+		icp.iceof = DEFAULT_RX_ICEOF;
+		icp.icsz = DEFAULT_ICSZ;
+		fman_if_set_ic_params(dpaa_intf->fif, &icp);
+
+		fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
+		fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+
+		/* Buffer pool size should be equal to the dataroom size */
+		bp_size = rte_pktmbuf_data_room_size(mp);
+		fman_if_set_bp(dpaa_intf->fif, mp->size,
+			       dpaa_intf->bp_info->bpid, bp_size);
+		dpaa_intf->valid = 1;
+		DPAA_PMD_INFO("if =%s - fd_offset = %d offset = %d",
+			    dpaa_intf->name, fd_offset,
+			fman_if_get_fdoff(dpaa_intf->fif));
+	}
+	dev->data->rx_queues[queue_idx] = &dpaa_intf->rx_queues[queue_idx];
+
+	return 0;
+}
+
+static
+void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
 }
 
-static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+static
+int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+		unsigned int socket_id __rte_unused,
+		const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	DPAA_PMD_INFO("Tx queue setup for queue index: %d", queue_idx);
+	dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+	return 0;
+}
+
+static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 }
@@ -102,15 +193,102 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+
+	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
+	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
+	.rx_queue_release	  = dpaa_eth_rx_queue_release,
+	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 };
 
+/* Initialise an Rx FQ */
+static int dpaa_rx_queue_init(struct qman_fq *fq,
+			      uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		DPAA_PMD_ERR("reserve rx fqid %d failed with ret: %d",
+			     fqid, ret);
+		return -EINVAL;
+	}
+
+	DPAA_PMD_DEBUG("creating rx fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create rx fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTA;
+
+	opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
+			   QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_a.stashing.exclusive = 0;
+	opts.fqd.context_a.stashing.annotation_cl = DPAA_IF_RX_ANNOTATION_STASH;
+	opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+	opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+
+	/*Enable tail drop */
+	opts.we_mask = opts.we_mask | QM_INITFQ_WE_TDTHRESH;
+	opts.fqd.fq_ctrl = opts.fqd.fq_ctrl | QM_FQCTRL_TDE;
+	qm_fqd_taildrop_set(&opts.fqd.td, CONG_THRESHOLD_RX_Q, 1);
+
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init rx fqid %d failed with ret: %d", fqid, ret);
+	return ret;
+}
+
+/* Initialise a Tx FQ */
+static int dpaa_tx_queue_init(struct qman_fq *fq,
+			      struct fman_if *fman_intf)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
+			     QMAN_FQ_FLAG_TO_DCPORTAL, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create tx fq failed with ret: %d", ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTB | QM_INITFQ_WE_CONTEXTA;
+	opts.fqd.dest.channel = fman_intf->tx_channel_id;
+	opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_b = 0;
+	/* no tx-confirmation */
+	opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+	opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+	DPAA_PMD_DEBUG("init tx fq %p, fqid %d", fq, fq->fqid);
+	ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init tx fqid %d failed %d", fq->fqid, ret);
+	return ret;
+}
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
 {
+	int num_cores, num_rx_fqs, fqid;
+	int loop, ret = 0;
 	int dev_id;
 	struct rte_dpaa_device *dpaa_device;
 	struct dpaa_if *dpaa_intf;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+	struct fman_if_bpool *bp, *tmp_bp;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -121,19 +299,149 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
 	dev_id = dpaa_device->id.dev_id;
 	dpaa_intf = eth_dev->data->dev_private;
+	cfg = &dpaa_netcfg->port_cfg[dev_id];
+	fman_intf = cfg->fman_if;
 
 	dpaa_intf->name = dpaa_device->name;
 
+	/* save fman_if & cfg in the interface structure */
+	dpaa_intf->fif = fman_intf;
 	dpaa_intf->ifid = dev_id;
+	dpaa_intf->cfg = cfg;
+
+	/* Initialize Rx FQ's */
+	if (getenv("DPAA_NUM_RX_QUEUES"))
+		num_rx_fqs = atoi(getenv("DPAA_NUM_RX_QUEUES"));
+	else
+		num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
+
+	/* Each device cannot have more than DPAA_PCD_FQID_MULTIPLIER RX queues */
+	if (num_rx_fqs <= 0 || num_rx_fqs > DPAA_PCD_FQID_MULTIPLIER) {
+		DPAA_PMD_ERR("Invalid number of RX queues\n");
+		return -EINVAL;
+	}
 
+	dpaa_intf->rx_queues = rte_zmalloc(NULL,
+		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+	if (!dpaa_intf->rx_queues)
+		return -ENOMEM;
+	for (loop = 0; loop < num_rx_fqs; loop++) {
+		fqid = DPAA_PCD_FQID_START + dpaa_intf->ifid *
+			DPAA_PCD_FQID_MULTIPLIER + loop;
+		ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop], fqid);
+		if (ret)
+			return ret;
+		dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_rx_queues = num_rx_fqs;
+
+	/* Initialise Tx FQs. Have as many Tx FQ's as number of cores */
+	num_cores = rte_lcore_count();
+	dpaa_intf->tx_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+		num_cores, MAX_CACHELINE);
+	if (!dpaa_intf->tx_queues)
+		return -ENOMEM;
+
+	for (loop = 0; loop < num_cores; loop++) {
+		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
+					 fman_intf);
+		if (ret)
+			return ret;
+		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_tx_queues = num_cores;
+
+	DPAA_PMD_DEBUG("All frame queues created");
+
+	/* reset bpool list, initialize bpool dynamically */
+	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+
+	/* Populate ethdev structure */
 	eth_dev->dev_ops = &dpaa_devops;
+	eth_dev->data->nb_rx_queues = dpaa_intf->nb_rx_queues;
+	eth_dev->data->nb_tx_queues = dpaa_intf->nb_tx_queues;
+	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+	eth_dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
+		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
+						"store MAC addresses",
+				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
+		return -ENOMEM;
+	}
+
+	/* copy the primary mac address */
+	memcpy(eth_dev->data->mac_addrs[0].addr_bytes,
+		fman_intf->mac_addr.addr_bytes,
+		ETHER_ADDR_LEN);
+
+	DPAA_PMD_DEBUG("interface %s macaddr:", dpaa_device->name);
+	for (loop = 0; loop < ETHER_ADDR_LEN; loop++) {
+		if (loop != (ETHER_ADDR_LEN - 1))
+			printf("%02x:", fman_intf->mac_addr.addr_bytes[loop]);
+		else
+			printf("%02x\n", fman_intf->mac_addr.addr_bytes[loop]);
+	}
+
+	/* Disable RX mode */
+	fman_if_discard_rx_errors(fman_intf);
+	fman_if_disable_rx(fman_intf);
+	/* Disable promiscuous mode */
+	fman_if_promiscuous_disable(fman_intf);
+	/* Disable multicast */
+	fman_if_reset_mcast_filter_table(fman_intf);
+	/* Reset interface statistics */
+	fman_if_stats_reset(fman_intf);
 
 	return 0;
 }
 
+/* Retire, drain and OOS an FQ; reserved (non-dynamic) FQIDs are released. */
+static void
+teardown_fq(struct qman_fq *fq)
+{
+	u32 flags;
+	int s = qman_retire_fq(fq, &flags);
+	if (s == 1) {
+		/* Retire is non-blocking, poll for completion */
+		enum qman_fq_state state;
+		do {
+			qman_poll();
+			qman_fq_state(fq, &state, &flags);
+		} while (state != qman_fq_state_retired);
+		if (flags & QMAN_FQ_STATE_NE) {
+			/* FQ isn't empty, drain it */
+			s = qman_volatile_dequeue(fq, 0,
+				QM_VDQCR_NUMFRAMES_TILLEMPTY);
+			if (s) {
+				DPAA_PMD_ERR("Fail: %s: %d\n",
+					     "qman_volatile_dequeue()", s);
+				return;
+			}
+			/* Poll for completion */
+			do {
+				qman_poll();
+				qman_fq_state(fq, &state, &flags);
+			} while (flags & QMAN_FQ_STATE_VDQCR);
+		}
+	}
+	s = qman_oos_fq(fq);
+	if (!(fq->flags & QMAN_FQ_FLAG_DYNAMIC_FQID))
+		qman_release_fqid(fq->fqid);
+	if (s)
+		DPAA_PMD_ERR("Fail: %s: %d\n", "qman_oos_fq()", s);
+	else
+		qman_destroy_fq(fq, 0);
+}
+
 static int
 dpaa_dev_uninit(struct rte_eth_dev *dev)
 {
+	int i;
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 
 	PMD_INIT_FUNC_TRACE();
@@ -148,6 +456,23 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
 
 	dpaa_eth_dev_close(dev);
 
+	/* Free all the queue memory */
+	for (i = 0; i < dpaa_intf->nb_rx_queues; i++)
+		teardown_fq(&dpaa_intf->rx_queues[i]);
+
+	rte_free(dpaa_intf->rx_queues);
+	dpaa_intf->rx_queues = NULL;
+
+	for (i = 0; i < dpaa_intf->nb_tx_queues; i++)
+		teardown_fq(&dpaa_intf->tx_queues[i]);
+
+	rte_free(dpaa_intf->tx_queues);
+	dpaa_intf->tx_queues = NULL;
+
+	/* free memory for storing MAC addresses */
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
 	dev->dev_ops = NULL;
 	dev->rx_pkt_burst = NULL;
 	dev->tx_pkt_burst = NULL;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 8aeaebf..da7f3be 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -38,7 +38,13 @@
 #include <rte_ethdev.h>
 
 #include <rte_dpaa_logs.h>
+#include <dpaa_mempool.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
 
 #define DPAA_MBUF_HW_ANNOTATION		64
 #define DPAA_FD_PTA_SIZE		64
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
new file mode 100644
index 0000000..3226614
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -0,0 +1,313 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <limits.h>
+#include <sched.h>
+#include <pthread.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+
+#include "dpaa_ethdev.h"
+#include "dpaa_rxtx.h"
+#include <rte_dpaa_bus.h>
+#include <dpaa_mempool.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
+#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
+	do { \
+		(_fd)->cmd = 0; \
+		(_fd)->opaque_addr = 0; \
+		(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
+		(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
+		(_fd)->opaque |= (_mbuf)->pkt_len; \
+		(_fd)->addr = (_mbuf)->buf_physaddr; \
+		(_fd)->bpid = _bpid; \
+	} while (0)
+
+static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
+							uint32_t ifid)
+{
+	struct pool_info_entry *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *mbuf;
+	void *ptr;
+	uint16_t offset =
+		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
+	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
+
+	DPAA_RX_LOG(DEBUG, " FD--->MBUF");
+
+	/* Ignoring case when format != qm_fd_contig */
+	ptr = rte_dpaa_mem_ptov(fd->addr);
+	/* Ignoring the case where ptr would be NULL; that is only possible
+	 * in case of a corrupted packet.
+	 */
+
+	mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+	/* Prefetch the Parse results and packet data to L1 */
+	rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
+	rte_prefetch0((void *)((uint8_t *)ptr + offset));
+
+	mbuf->data_off = offset;
+	mbuf->data_len = length;
+	mbuf->pkt_len = length;
+
+	mbuf->port = ifid;
+	mbuf->nb_segs = 1;
+	mbuf->ol_flags = 0;
+	mbuf->next = NULL;
+	rte_mbuf_refcnt_set(mbuf, 1);
+
+	return mbuf;
+}
+
+uint16_t dpaa_eth_queue_rx(void *q,
+			   struct rte_mbuf **bufs,
+			   uint16_t nb_bufs)
+{
+	struct qman_fq *fq = q;
+	struct qm_dqrr_entry *dq;
+	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_PMD_ERR("Failure in affining portal");
+		return 0;
+	}
+
+	ret = qman_set_vdq(fq, (nb_bufs > DPAA_MAX_DEQUEUE_NUM_FRAMES) ?
+				DPAA_MAX_DEQUEUE_NUM_FRAMES : nb_bufs);
+	if (ret)
+		return 0;
+
+	do {
+		dq = qman_dequeue(fq);
+		if (!dq)
+			continue;
+		bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
+		qman_dqrr_consume(fq, dq);
+	} while (fq->flags & QMAN_FQ_STATE_VDQCR);
+
+	return num_rx;
+}
+
+static void *dpaa_get_pktbuf(struct pool_info_entry *bp_info)
+{
+	int ret;
+	uint64_t buf = 0;
+	struct bm_buffer bufs;
+
+	ret = bman_acquire(bp_info->bp, &bufs, 1, 0);
+	if (ret <= 0) {
+		DPAA_PMD_WARN("Failed to allocate buffers %d", ret);
+		return (void *)buf;
+	}
+
+	DPAA_RX_LOG(DEBUG, "got buffer 0x%lx from pool %d",
+		    (uint64_t)bufs.addr, bufs.bpid);
+
+	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
+
+	return (void *)buf;
+}
+
+static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
+					     struct dpaa_if *dpaa_intf)
+{
+	struct rte_mbuf *dpaa_mbuf;
+
+	/* Allocate a packet buffer from the dpaa port's bpid pool */
+	dpaa_mbuf = dpaa_get_pktbuf(dpaa_intf->bp_info);
+	if (!dpaa_mbuf)
+		return NULL;
+
+	memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + mbuf->data_off, (void *)
+		((uint8_t *)(mbuf->buf_addr) + mbuf->data_off), mbuf->pkt_len);
+
+	/* Copy only the required fields */
+	dpaa_mbuf->data_off = mbuf->data_off;
+	dpaa_mbuf->pkt_len = mbuf->pkt_len;
+	dpaa_mbuf->ol_flags = mbuf->ol_flags;
+	dpaa_mbuf->packet_type = mbuf->packet_type;
+	dpaa_mbuf->tx_offload = mbuf->tx_offload;
+	rte_pktmbuf_free(mbuf);
+	return dpaa_mbuf;
+}
+
+uint16_t
+dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	struct rte_mbuf *mbuf, *mi = NULL;
+	struct rte_mempool *mp;
+	struct pool_info_entry *bp_info;
+	struct qm_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send, loop, i = 0;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_PMD_ERR("Failure in affining portal");
+		return 0;
+	}
+
+	DPAA_TX_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+
+	while (nb_bufs) {
+		frames_to_send = (nb_bufs >> 3) ? MAX_TX_RING_SLOTS : nb_bufs;
+		for (loop = 0; loop < frames_to_send; loop++, i++) {
+			mbuf = bufs[i];
+			if (RTE_MBUF_DIRECT(mbuf)) {
+				mp = mbuf->pool;
+			} else {
+				mi = rte_mbuf_from_indirect(mbuf);
+				mp = mi->pool;
+			}
+
+			bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+			if (mp->ops_index == bp_info->dpaa_ops_index) {
+				DPAA_TX_LOG(DEBUG, "BMAN offloaded buffer, "
+					    "mbuf: %p", mbuf);
+				if (mbuf->nb_segs == 1) {
+					if (RTE_MBUF_DIRECT(mbuf)) {
+						if (rte_mbuf_refcnt_read(mbuf) > 1) {
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], 0xff);
+							rte_mbuf_refcnt_update(mbuf, -1);
+						} else {
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], bp_info->bpid);
+						}
+					} else {
+						if (rte_mbuf_refcnt_read(mi) > 1) {
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], 0xff);
+						} else {
+							rte_mbuf_refcnt_update(mi, 1);
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], bp_info->bpid);
+						}
+						rte_pktmbuf_free(mbuf);
+					}
+				} else {
+					DPAA_PMD_DEBUG("Number of Segments not supported");
+					/* Set frames_to_send & nb_bufs so that
+					 * packets are transmitted till
+					 * previous frame.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			} else {
+				struct qman_fq *txq = q;
+				struct dpaa_if *dpaa_intf = txq->dpaa_intf;
+
+				DPAA_TX_LOG(DEBUG, "Non-BMAN offloaded buffer."
+					    "Allocating an offloaded buffer");
+				mbuf = dpaa_get_dmable_mbuf(mbuf, dpaa_intf);
+				if (!mbuf) {
+					DPAA_TX_LOG(DEBUG, "no dpaa buffers.");
+					/* Set frames_to_send & nb_bufs so that
+					 * packets are transmitted till
+					 * previous frame.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+
+				DPAA_MBUF_TO_CONTIG_FD(mbuf, &fd_arr[loop],
+						dpaa_intf->bp_info->bpid);
+			}
+		}
+
+send_pkts:
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qman_enqueue_multi(q, &fd_arr[loop],
+					frames_to_send - loop);
+		}
+		nb_bufs -= frames_to_send;
+	}
+
+	DPAA_TX_LOG(DEBUG, "Transmitted %d buffers on queue: %p", i, q);
+
+	return i;
+}
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+		uint16_t nb_bufs __rte_unused)
+{
+	DPAA_TX_LOG(DEBUG, "Drop all packets");
+
+	/* Drop all incoming packets. No need to free the packets here:
+	 * the rte_eth framework frees them through the tx_buffer callback
+	 * when this function returns a count less than nb_bufs.
+	 */
+	return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
new file mode 100644
index 0000000..09f1aa4
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -0,0 +1,61 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_RXTX_H__
+#define __DPAA_RXTX_H__
+
+/* Internal offset from where IC is copied to the packet buffer */
+#define DEFAULT_ICIOF          32
+/* IC transfer size */
+#define DEFAULT_ICSZ	48
+
+/* IC offset from the buffer header address */
+#define DEFAULT_RX_ICEOF	16
+
+/* Maximum number of frames to be dequeued in a single Rx call */
+#define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
+/* FD structure masks and offset */
+#define DPAA_FD_FORMAT_MASK 0xE0000000
+#define DPAA_FD_OFFSET_MASK 0x1FF00000
+#define DPAA_FD_LENGTH_MASK 0xFFFFF
+#define DPAA_FD_FORMAT_SHIFT 29
+#define DPAA_FD_OFFSET_SHIFT 20
+
+uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+			      uint16_t nb_bufs __rte_unused);
+#endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 711ebed..7209b4c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -181,6 +181,7 @@ endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_mempool_dpaa
 endif
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.7.4


* [PATCH v2 26/40] net/dpaa: add support for MTU update
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (24 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 25/40] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 27/40] net/dpaa: add support for jumbo frames Shreyansh Jain
                     ` (15 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
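Note: a minimal sketch (not part of this patch) of driving the new op
through the generic API; port_id and the 1500-byte value are
illustrative assumptions.

    #include <rte_ethdev.h>

    /* Sketch: the PMD forwards the MTU to fman_if_set_maxfrm() */
    static int set_mtu_example(uint8_t port_id)
    {
            return rte_eth_dev_set_mtu(port_id, 1500);
    }
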
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 9e8befc..59ef23d 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,5 +4,6 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 372a4b9..4f39c4c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -76,6 +76,26 @@
 static int is_global_init;
 
 static int
+dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (mtu < ETHER_MIN_MTU)
+		return -EINVAL;
+	if (mtu > ETHER_MAX_LEN)
+		return -1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
+
+	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
+
+	return 0;
+}
+
+static int
 dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
@@ -198,6 +218,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+	.mtu_set		  = dpaa_mtu_set,
 };
 
 /* Initialise an Rx FQ */
-- 
2.7.4


* [PATCH v2 27/40] net/dpaa: add support for jumbo frames
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (25 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 26/40] net/dpaa: add support for MTU update Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 28/40] net/dpaa: add support for link status update Shreyansh Jain
                     ` (14 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
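Note: a minimal sketch (not part of this patch) of requesting jumbo
frames at configure time; 9000 is an illustrative value below the PMD
limit DPAA_MAX_RX_PKT_LEN (10240).

    #include <rte_ethdev.h>

    static int configure_jumbo(uint8_t port_id)
    {
            struct rte_eth_conf conf = { 0 };

            conf.rxmode.jumbo_frame = 1;
            conf.rxmode.max_rx_pkt_len = 9000;
            /* dev_configure will internally call dpaa_mtu_set() */
            return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }
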
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 10 +++++++++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 59ef23d..e62812c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4f39c4c..057840a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -85,7 +85,7 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	if (mtu < ETHER_MIN_MTU)
 		return -EINVAL;
 	if (mtu > ETHER_MAX_LEN)
-		return -1;
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
 	else
 		dev->data->dev_conf.rxmode.jumbo_frame = 0;
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
@@ -100,6 +100,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 
+	if (dev->data->dev_conf.rxmode.jumbo_frame == 1) {
+		if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
+		    DPAA_MAX_RX_PKT_LEN)
+			return dpaa_mtu_set(dev,
+				dev->data->dev_conf.rxmode.max_rx_pkt_len);
+		else
+			return -1;
+	}
 	return 0;
 }
 
-- 
2.7.4


* [PATCH v2 28/40] net/dpaa: add support for link status update
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (26 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 27/40] net/dpaa: add support for jumbo frames Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 29/40] net/dpaa: add support for device info and speed capability Shreyansh Jain
                     ` (13 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
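Note: a minimal sketch (not part of this patch) of reading the link
state that dpaa_eth_link_update() fills in; port_id is an illustrative
assumption.

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void print_link(uint8_t port_id)
    {
            struct rte_eth_link link;

            /* Speed is derived from the FMan mac_type (1G or 10G) */
            rte_eth_link_get_nowait(port_id, &link);
            printf("port %u: %s, %u Mbps\n", port_id,
                   link.link_status ? "up" : "down", link.link_speed);
    }
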
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 42 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index e62812c..132f94b 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 057840a..a6513d8 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -142,6 +142,28 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int dpaa_eth_link_update(struct rte_eth_dev *dev,
+				int wait_to_complete __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_link *link = &dev->data->dev_link;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dpaa_intf->fif->mac_type == fman_mac_1g)
+		link->link_speed = 1000;
+	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
+		link->link_speed = 10000;
+	else
+		DPAA_PMD_ERR("invalid link_speed: %s, %d",
+			     dpaa_intf->name, dpaa_intf->fif->mac_type);
+
+	link->link_status = dpaa_intf->valid;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = ETH_LINK_AUTONEG;
+	return 0;
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -216,6 +238,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 	PMD_INIT_FUNC_TRACE();
 }
 
+static int dpaa_link_down(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+	return 0;
+}
+
+static int dpaa_link_up(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_start(dev);
+	return 0;
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -226,7 +264,11 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+
+	.link_update		  = dpaa_eth_link_update,
 	.mtu_set		  = dpaa_mtu_set,
+	.dev_set_link_down	  = dpaa_link_down,
+	.dev_set_link_up	  = dpaa_link_up,
 };
 
 /* Initialise an Rx FQ */
-- 
2.7.4


* [PATCH v2 29/40] net/dpaa: add support for device info and speed capability
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (27 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 28/40] net/dpaa: add support for link status update Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 30/40] net/dpaa: add support for promiscuous toggle Shreyansh Jain
                     ` (12 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
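Note: a minimal sketch (not part of this patch) of querying the limits
reported by dpaa_eth_dev_info(); port_id is an illustrative assumption.

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void print_limits(uint8_t port_id)
    {
            struct rte_eth_dev_info info;

            rte_eth_dev_info_get(port_id, &info);
            printf("rxq=%u txq=%u max_pktlen=%u\n",
                   info.max_rx_queues, info.max_tx_queues,
                   info.max_rx_pktlen);
    }
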
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 132f94b..19beada 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index a6513d8..8ba3237 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -142,6 +142,25 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
+			      struct rte_eth_dev_info *dev_info)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->max_rx_queues = dpaa_intf->nb_rx_queues;
+	dev_info->max_tx_queues = dpaa_intf->nb_tx_queues;
+	dev_info->min_rx_bufsize = DPAA_MIN_RX_BUF_SIZE;
+	dev_info->max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
+	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
+	dev_info->max_hash_mac_addrs = 0;
+	dev_info->max_vfs = 0;
+	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
+				ETH_LINK_SPEED_10G);
+}
+
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 				int wait_to_complete __rte_unused)
 {
@@ -259,6 +278,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+	.dev_infos_get		  = dpaa_eth_dev_info,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
-- 
2.7.4


* [PATCH v2 30/40] net/dpaa: add support for promiscuous toggle
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (28 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 29/40] net/dpaa: add support for device info and speed capability Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 31/40] net/dpaa: add support for multicast toggle Shreyansh Jain
                     ` (11 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
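Note: a minimal sketch (not part of this patch); the generic calls map
directly onto fman_if_promiscuous_enable()/_disable(). port_id is an
illustrative assumption.

    #include <rte_ethdev.h>

    static void set_promisc(uint8_t port_id, int on)
    {
            if (on)
                    rte_eth_promiscuous_enable(port_id);
            else
                    rte_eth_promiscuous_disable(port_id);
    }
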
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 19beada..b2dfd81 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -8,5 +8,6 @@ Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
+Promiscuous mode     = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 8ba3237..772832f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -183,6 +183,25 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+
+static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_enable(dpaa_intf->fif);
+}
+
+static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_disable(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -286,6 +305,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
+	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.7.4


* [PATCH v2 31/40] net/dpaa: add support for multicast toggle
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (29 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 30/40] net/dpaa: add support for promiscuous toggle Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 32/40] net/dpaa: add support for MAC address update Shreyansh Jain
                     ` (10 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
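Note: a minimal sketch (not part of this patch); enable programs the
FMan mcast filter table, disable resets it. port_id is an illustrative
assumption.

    #include <rte_ethdev.h>

    static void set_allmulti(uint8_t port_id, int on)
    {
            if (on)
                    rte_eth_allmulticast_enable(port_id);
            else
                    rte_eth_allmulticast_disable(port_id);
    }
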
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index b2dfd81..f21a85f 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -9,5 +9,6 @@ Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
+Allmulticast mode    = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 772832f..dfea271 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -202,6 +202,24 @@ static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
 	fman_if_promiscuous_disable(dpaa_intf->fif);
 }
 
+static void dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_set_mcast_filter_table(dpaa_intf->fif);
+}
+
+static void dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_reset_mcast_filter_table(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -307,6 +325,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.link_update		  = dpaa_eth_link_update,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
+	.allmulticast_enable	  = dpaa_eth_multicast_enable,
+	.allmulticast_disable	  = dpaa_eth_multicast_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.7.4


* [PATCH v2 32/40] net/dpaa: add support for MAC address update
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (30 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 31/40] net/dpaa: add support for multicast toggle Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 33/40] net/dpaa: add support for basic stats Shreyansh Jain
                     ` (9 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
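Note: a minimal sketch (not part of this patch) of exercising the new
MAC ops; port_id and addr are illustrative, caller-provided values.

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Sketch: add a secondary MAC filter, then replace the primary */
    static int update_macs(uint8_t port_id, struct ether_addr *addr)
    {
            int ret;

            ret = rte_eth_dev_mac_addr_add(port_id, addr, 0);
            if (ret)
                    return ret;
            return rte_eth_dev_default_mac_addr_set(port_id, addr);
    }
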
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 55 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index f21a85f..cdf5e46 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -10,5 +10,6 @@ Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
+Unicast MAC filter   = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index dfea271..ee84acb 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -310,6 +310,57 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *addr,
+			     uint32_t index,
+			     __rte_unused uint32_t pool)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fm_mac_add_exact_match_mac_addr(dpaa_intf->fif,
+					      addr->addr_bytes, index);
+
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Adding the MAC ADDR failed:"
+			" err = %d", ret);
+	return ret;
+}
+
+static void
+dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
+			  uint32_t index)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fm_mac_rem_exact_match_mac_addr(dpaa_intf->fif, index);
+
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Removing the MAC ADDR failed:"
+			" err = %d", ret);
+}
+
+static void
+dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
+		       struct ether_addr *addr)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fm_mac_add_exact_match_mac_addr(dpaa_intf->fif,
+					      addr->addr_bytes, 0);
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Setting the MAC ADDR failed %d", ret);
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -330,6 +381,10 @@ static struct eth_dev_ops dpaa_devops = {
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
+	.mac_addr_add		  = dpaa_dev_add_mac_addr,
+	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
+	.mac_addr_set		  = dpaa_dev_set_mac_addr,
+
 };
 
 /* Initialise an Rx FQ */
-- 
2.7.4


* [PATCH v2 33/40] net/dpaa: add support for basic stats
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (31 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 32/40] net/dpaa: add support for MAC address update Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 34/40] net/dpaa: add support for flow control Shreyansh Jain
                     ` (8 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
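Note: a minimal sketch (not part of this patch); the counters are read
from, and reset in, the FMan hardware. port_id is an illustrative
assumption.

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    static void dump_and_clear_stats(uint8_t port_id)
    {
            struct rte_eth_stats stats;

            rte_eth_stats_get(port_id, &stats);
            printf("ipackets=%" PRIu64 " opackets=%" PRIu64 "\n",
                   stats.ipackets, stats.opackets);
            rte_eth_stats_reset(port_id);
    }
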
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index cdf5e46..c09efd8 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,5 +11,6 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index ee84acb..f23e10d 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -183,6 +183,24 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void dpaa_eth_stats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_stats *stats)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_get(dpaa_intf->fif, stats);
+}
+
+static void dpaa_eth_stats_reset(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_reset(dpaa_intf->fif);
+}
 
 static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
@@ -374,6 +392,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.stats_get		  = dpaa_eth_stats_get,
+	.stats_reset		  = dpaa_eth_stats_reset,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.allmulticast_enable	  = dpaa_eth_multicast_enable,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread
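As a usage sketch, the FMan counters surface through the standard ethdev
stats calls (helper name and port id are illustrative):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void show_stats(uint8_t port)
    {
        struct rte_eth_stats stats;

        /* Filled by dpaa_eth_stats_get() via fman_if_stats_get() */
        if (rte_eth_stats_get(port, &stats) == 0)
            printf("ipackets=%" PRIu64 " opackets=%" PRIu64 "\n",
                   stats.ipackets, stats.opackets);
        rte_eth_stats_reset(port);
    }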

* [PATCH v2 34/40] net/dpaa: add support for flow control
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (32 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 33/40] net/dpaa: add support for basic stats Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 35/40] net/dpaa: add support for hashed RSS Shreyansh Jain
                     ` (7 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 116 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index c09efd8..1ba6b11 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Flow control         = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f23e10d..f3d8650 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -329,6 +329,85 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 }
 
 static int
+dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			DPAA_PMD_ERR("unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	net_fc = dpaa_intf->fc_conf;
+
+	if (fc_conf->high_water < fc_conf->low_water) {
+		DPAA_PMD_ERR("Incorrect Flow Control Configuration");
+		return -EINVAL;
+	}
+
+	if (fc_conf->mode == RTE_FC_NONE) {
+		return 0;
+	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_FC_FULL) {
+		fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water,
+					 fc_conf->low_water,
+				dpaa_intf->bp_info->bpid);
+		if (fc_conf->pause_time)
+			fman_if_set_fc_quanta(dpaa_intf->fif,
+					      fc_conf->pause_time);
+	}
+
+	/* Save the information in dpaa device */
+	net_fc->pause_time = fc_conf->pause_time;
+	net_fc->high_water = fc_conf->high_water;
+	net_fc->low_water = fc_conf->low_water;
+	net_fc->send_xon = fc_conf->send_xon;
+	net_fc->mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
+	net_fc->mode = fc_conf->mode;
+	net_fc->autoneg = fc_conf->autoneg;
+
+	return 0;
+}
+
+static int
+dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc = dpaa_intf->fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (net_fc) {
+		fc_conf->pause_time = net_fc->pause_time;
+		fc_conf->high_water = net_fc->high_water;
+		fc_conf->low_water = net_fc->low_water;
+		fc_conf->send_xon = net_fc->send_xon;
+		fc_conf->mac_ctrl_frame_fwd = net_fc->mac_ctrl_frame_fwd;
+		fc_conf->mode = net_fc->mode;
+		fc_conf->autoneg = net_fc->autoneg;
+		return 0;
+	}
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
+static int
 dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *addr,
 			     uint32_t index,
@@ -391,6 +470,9 @@ static struct eth_dev_ops dpaa_devops = {
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
+	.flow_ctrl_get		  = dpaa_flow_ctrl_get,
+	.flow_ctrl_set		  = dpaa_flow_ctrl_set,
+
 	.link_update		  = dpaa_eth_link_update,
 	.stats_get		  = dpaa_eth_stats_get,
 	.stats_reset		  = dpaa_eth_stats_reset,
@@ -407,6 +489,33 @@ static struct eth_dev_ops dpaa_devops = {
 
 };
 
+static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
+{
+	struct rte_eth_fc_conf *fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			DPAA_PMD_ERR("unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	fc_conf = dpaa_intf->fc_conf;
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
 /* Initialise an Rx FQ */
 static int dpaa_rx_queue_init(struct qman_fq *fq,
 			      uint32_t fqid)
@@ -558,6 +667,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 
 	DPAA_PMD_DEBUG("All frame queues created");
 
+	/* Get the initial configuration for flow control */
+	dpaa_fc_set_default(dpaa_intf);
+
 	/* reset bpool list, initialize bpool dynamically */
 	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
 		list_del(&bp->node);
@@ -663,6 +775,10 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
 
 	dpaa_eth_dev_close(dev);
 
+	/* release configuration memory */
+	if (dpaa_intf->fc_conf)
+		rte_free(dpaa_intf->fc_conf);
+
 	/* free the all queue memory */
 	for (i = 0; i < dpaa_intf->nb_rx_queues; i++)
 		teardown_fq(&dpaa_intf->rx_queues[i]);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread
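For reference, a minimal sketch of the application side of this op; the
threshold and pause-time values are illustrative and only need to satisfy
the high_water >= low_water check above:

    #include <rte_ethdev.h>

    static int enable_tx_pause(uint8_t port)
    {
        struct rte_eth_fc_conf fc = {
            .mode       = RTE_FC_TX_PAUSE,
            .high_water = 1024, /* feeds fman_if_set_fc_threshold() */
            .low_water  = 512,
            .pause_time = 100,  /* feeds fman_if_set_fc_quanta() */
        };

        return rte_eth_dev_flow_ctrl_set(port, &fc);
    }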

* [PATCH v2 35/40] net/dpaa: add support for hashed RSS
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (33 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 34/40] net/dpaa: add support for flow control Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 36/40] net/dpaa: add support for packet type parsing Shreyansh Jain
                     ` (6 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c |  1 +
 drivers/net/dpaa/dpaa_ethdev.h | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f3d8650..108f397 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -157,6 +157,7 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
 				ETH_LINK_SPEED_10G);
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index da7f3be..a9d1c2c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -91,6 +91,16 @@
 #define DPAA_DEBUG_FQ_RX_ERROR   0
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
+#define DPAA_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP)
+
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
 		PKT_TX_TCP_CKSUM |               \
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread
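Since this patch only advertises flow_type_rss_offloads, here is a sketch
of the matching application-side request (the hash selection is
illustrative and must stay within DPAA_RSS_OFFLOAD_ALL; on DPAA the queue
distribution itself is set up by an external tool, as discussed later in
this thread):

    #include <rte_ethdev.h>

    static const struct rte_eth_conf port_conf = {
        .rxmode = {
            .mq_mode = ETH_MQ_RX_RSS,
        },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
            },
        },
    };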

* [PATCH v2 36/40] net/dpaa: add support for packet type parsing
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (34 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 35/40] net/dpaa: add support for hashed RSS Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 37/40] net/dpaa: add support for checksum offload Shreyansh Jain
                     ` (5 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Add support for parsing the packet type and for reporting the L3/L4
checksum offload capability information.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  26 ++++++
 drivers/net/dpaa/dpaa_rxtx.c      | 116 +++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 174 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 318 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 1ba6b11..2ef1b56 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,7 +11,9 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+RSS hash             = Y
 Flow control         = Y
+Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 108f397..ee9e1be 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -111,6 +111,27 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	return 0;
 }
 
+static const uint32_t *
+dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		/* todo: add more types */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_SCTP
+	};
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->rx_pkt_burst == dpaa_eth_queue_rx)
+		return ptypes;
+	return NULL;
+}
 
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
@@ -160,6 +181,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
 				ETH_LINK_SPEED_10G);
+	dev_info->rx_offload_capa =
+		(DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM   |
+		DEV_RX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
@@ -465,6 +490,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
 	.dev_infos_get		  = dpaa_eth_dev_info,
+	.dev_supported_ptypes_get = dpaa_supported_ptypes_get,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 3226614..e091cd8 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,121 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
+				     uint64_t prs __rte_unused)
+{
+	DPAA_RX_LOG(DEBUG, "Slow parsing");
+	/*TBD:XXX: to be implemented*/
+}
+
+static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
+					uint64_t fd_virt_addr)
+{
+	struct annotations_t *annot = GET_ANNOTATIONS(fd_virt_addr);
+	uint64_t prs = *((uint64_t *)(&annot->parse)) & DPAA_PARSE_MASK;
+
+	DPAA_RX_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
+
+	switch (prs) {
+	case DPAA_PKT_TYPE_NONE:
+		m->packet_type = 0;
+		break;
+	case DPAA_PKT_TYPE_ETHER:
+		m->packet_type = RTE_PTYPE_L2_ETHER;
+		break;
+	case DPAA_PKT_TYPE_IPV4:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4;
+		break;
+	case DPAA_PKT_TYPE_IPV6:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6;
+		break;
+	case DPAA_PKT_TYPE_IPV4_FRAG:
+	case DPAA_PKT_TYPE_IPV4_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV6_FRAG:
+	case DPAA_PKT_TYPE_IPV6_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV4_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP;
+		break;
+	/* More switch cases can be added */
+	default:
+		dpaa_slow_parsing(m, prs);
+	}
+
+	m->tx_offload = annot->parse.ip_off[0];
+	m->tx_offload |= (annot->parse.l4_off - annot->parse.ip_off[0])
+					<< DPAA_PKT_L3_LEN_SHIFT;
+
+	/* Set the hash values */
+	m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
+	m->ol_flags = PKT_RX_RSS_HASH;
+	/* All packets with Bad checksum are dropped by interface (and
+	 * corresponding notification issued to RX error queues).
+	 */
+	m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	/* Check if Vlan is present */
+	if (prs & DPAA_PARSE_VLAN_MASK)
+		m->ol_flags |= PKT_RX_VLAN_PKT;
+	/* Packet received without stripping the vlan */
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -117,6 +232,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 	mbuf->ol_flags = 0;
 	mbuf->next = NULL;
 	rte_mbuf_refcnt_set(mbuf, 1);
+	dpaa_eth_packet_info(mbuf, (uint64_t)mbuf->buf_addr);
 
 	return mbuf;
 }
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 09f1aa4..f688934 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -44,6 +44,7 @@
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/** <Maximum number of frames to be dequeued in a single rx call*/
+
 /* FD structure masks and offset */
 #define DPAA_FD_FORMAT_MASK 0xE0000000
 #define DPAA_FD_OFFSET_MASK 0x1FF00000
@@ -51,6 +52,179 @@
 #define DPAA_FD_FORMAT_SHIFT 29
 #define DPAA_FD_OFFSET_SHIFT 20
 
+/* Parsing mask (Little Endian) - 0x00E044ED00800000
+ *	Classification Plan ID 0x00
+ *	L4R 0xE0 -
+ *		0x20 - TCP
+ *		0x40 - UDP
+ *		0x80 - SCTP
+ *	L3R 0xEDC4 (in Big Endian) -
+ *		0x8000 - IPv4
+ *		0x4000 - IPv6
+ *		0x8140 - IPv4 Ext + Frag
+ *		0x8040 - IPv4 Frag
+ *		0x8100 - IPv4 Ext
+ *		0x4140 - IPv6 Ext + Frag
+ *		0x4040 - IPv6 Frag
+ *		0x4100 - IPv6 Ext
+ *	L2R 0x8000 (in Big Endian) -
+ *		0x8000 - Ethernet type
+ *	ShimR & Logical Port ID 0x0000
+ */
+#define DPAA_PARSE_MASK			0x00E044ED00800000
+#define DPAA_PARSE_VLAN_MASK		0x0000000000700000
+
+/* Parsed values (Little Endian) */
+#define DPAA_PKT_TYPE_NONE		0x0000000000000000
+#define DPAA_PKT_TYPE_ETHER		0x0000000000800000
+#define DPAA_PKT_TYPE_IPV4	(0x0000008000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV6	(0x0000004000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_GRE	(0x0000002000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV4_FRAG	(0x0000400000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_FRAG	(0x0000400000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_EXT	(0x0000000100000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_EXT	(0x0000000100000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_TCP	(0x0020000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_TCP	(0x0020000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_UDP	(0x0040000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_UDP	(0x0040000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_SCTP	(0x0080000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_SCTP	(0x0080000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_FRAG_TCP (0x0020000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_TCP (0x0020000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_UDP (0x0040000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_UDP (0x0040000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_SCTP (0x0080000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_SCTP (0x0080000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_EXT_UDP (0x0040000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_UDP (0x0040000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_IPV4_EXT_TCP (0x0020000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_TCP (0x0020000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_TUNNEL_4_4 (0x0000000800000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6 (0x0000000400000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6 (0x0000000400000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_4 (0x0000000800000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_UDP (0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_UDP (0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_UDP (0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_UDP (0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_TCP (0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_TCP (0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_TCP (0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_TCP (0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_L3_LEN_SHIFT	7
+
+/**
+ * FMan parse result array
+ */
+struct dpaa_eth_parse_results_t {
+	 uint8_t     lpid;		 /**< Logical port id */
+	 uint8_t     shimr;		 /**< Shim header result  */
+	 union {
+		uint16_t              l2r;	/**< Layer 2 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      ethernet:1;
+			uint16_t      vlan:1;
+			uint16_t      llc_snap:1;
+			uint16_t      mpls:1;
+			uint16_t      pppoe_ppp:1;
+			uint16_t      unused_1:3;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      eth_frame_type:2;
+			uint16_t      l2r_err:5;
+			/*00-unicast, 01-multicast, 11-broadcast*/
+#else
+			uint16_t      l2r_err:5;
+			uint16_t      eth_frame_type:2;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      unused_1:3;
+			uint16_t      pppoe_ppp:1;
+			uint16_t      mpls:1;
+			uint16_t      llc_snap:1;
+			uint16_t      vlan:1;
+			uint16_t      ethernet:1;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint16_t              l3r;	/**< Layer 3 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      first_ipv4:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      gre:1;
+			uint16_t      min_enc:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      last_ipv6:1;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      last_ip_err_code:3;
+#else
+			uint16_t      last_ip_err_code:3;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      last_ipv6:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      min_enc:1;
+			uint16_t      gre:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      first_ipv4:1;
+#endif
+#define first_ip_option        first_ip_err_code & 0x01
+#define first_unknown_ip_proto first_ip_err_code & 0x02
+#define first_fragmented       first_ip_err_code & 0x04
+#define first_ip_type          first_ip_err_code & 0x18
+
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint8_t               l4r;	/**< Layer 4 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint8_t	       l4_type:3;
+			uint8_t	       l4_info_err:1;
+			uint8_t	       l4_result:4; /*if type IPSec: 1 ESP, 2 AH*/
+#else
+			uint8_t        l4_result:4; /*if type IPSec: 1 ESP, 2 AH*/
+			uint8_t        l4_info_err:1;
+			uint8_t        l4_type:3;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 uint8_t     cplan;		 /**< Classification plan id */
+	 uint16_t    nxthdr;		 /**< Next Header  */
+	 uint16_t    cksum;		 /**< Checksum */
+	 uint32_t    lcv;		 /**< LCV */
+	 uint8_t     shim_off[3];	 /**< Shim offset */
+	 uint8_t     eth_off;		 /**< ETH offset */
+	 uint8_t     llc_snap_off;	 /**< LLC_SNAP offset */
+	 uint8_t     vlan_off[2];	 /**< VLAN offset */
+	 uint8_t     etype_off;		 /**< ETYPE offset */
+	 uint8_t     pppoe_off;		 /**< PPP offset */
+	 uint8_t     mpls_off[2];	 /**< MPLS offset */
+	 uint8_t     ip_off[2];		 /**< IP offset */
+	 uint8_t     gre_off;		 /**< GRE offset */
+	 uint8_t     l4_off;		 /**< Layer 4 offset */
+	 uint8_t     nxthdr_off;	 /**< Parser end point */
+} __attribute__ ((__packed__));
+
+/* The structure is the Prepended Data to the Frame which is used by FMAN */
+struct annotations_t {
+	uint8_t reserved[DEFAULT_RX_ICEOF];
+	struct dpaa_eth_parse_results_t parse;	/**< Pointer to Parsed result*/
+	uint64_t reserved1;
+	uint64_t hash;			/**< Hash Result */
+};
+
+#define GET_ANNOTATIONS(_buf) \
+	(struct annotations_t *)(_buf)
+
+#define GET_RX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
+
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread
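A small sketch of how the new ptype table is consumed from the application
side (helper name, array size and port id are illustrative):

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void dump_ptypes(uint8_t port)
    {
        uint32_t ptypes[16];
        int i, num;

        /* Reads the table advertised by dpaa_supported_ptypes_get() */
        num = rte_eth_dev_get_supported_ptypes(port, RTE_PTYPE_ALL_MASK,
                                               ptypes, 16);
        for (i = 0; i < num && i < 16; i++)
            printf("ptype 0x%08x\n", ptypes[i]);
    }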

* [PATCH v2 37/40] net/dpaa: add support for checksum offload
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (35 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 36/40] net/dpaa: add support for packet type parsing Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 38/40] net/dpaa: add support for Scattered Rx Shreyansh Jain
                     ` (4 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  4 ++
 drivers/net/dpaa/dpaa_rxtx.c      | 88 +++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 19 +++++++++
 4 files changed, 113 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 2ef1b56..23626c0 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -13,6 +13,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 RSS hash             = Y
 Flow control         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index ee9e1be..b45dd0a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -185,6 +185,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 		(DEV_RX_OFFLOAD_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_UDP_CKSUM   |
 		DEV_RX_OFFLOAD_TCP_CKSUM);
+	dev_info->tx_offload_capa =
+		(DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index e091cd8..9afc722 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -200,6 +200,82 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
 	/* Packet received without stripping the vlan */
 }
 
+static inline void dpaa_checksum(struct rte_mbuf *mbuf)
+{
+	struct ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf, struct ether_hdr *);
+	char *l3_hdr = (char *)eth_hdr + mbuf->l2_len;
+	struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+	struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	DPAA_TX_LOG(DEBUG, "Calculating checksum for mbuf: %p", mbuf);
+
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	    ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	    RTE_PTYPE_L3_IPV4_EXT)) {
+		ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+		ipv4_hdr->hdr_checksum = 0;
+		ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
+	} else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6_EXT))
+		ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP) {
+		struct tcp_hdr *tcp_hdr = (struct tcp_hdr *)(l3_hdr +
+					  mbuf->l3_len);
+		tcp_hdr->cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			tcp_hdr->cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+							       tcp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			tcp_hdr->cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+							       tcp_hdr);
+	} else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) ==
+		   RTE_PTYPE_L4_UDP) {
+		struct udp_hdr *udp_hdr = (struct udp_hdr *)(l3_hdr +
+							     mbuf->l3_len);
+		udp_hdr->dgram_cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			udp_hdr->dgram_cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+								     udp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			udp_hdr->dgram_cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+								     udp_hdr);
+	}
+}
+
+static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
+					 struct qm_fd *fd, char *prs_buf)
+{
+	struct dpaa_eth_parse_results_t *prs;
+
+	DPAA_TX_LOG(DEBUG, " Offloading checksum for mbuf: %p", mbuf);
+
+	prs = GET_TX_PRS(prs_buf);
+	prs->l3r = 0;
+	prs->l4r = 0;
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	   RTE_PTYPE_L3_IPV4_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV4;
+	else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		 ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		RTE_PTYPE_L3_IPV6_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV6;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_TCP;
+	else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_UDP;
+
+	prs->ip_off[0] = mbuf->l2_len;
+	prs->l4_off = mbuf->l3_len + mbuf->l2_len;
+	/* Enable L3 (and L4, if TCP or UDP) HW checksum*/
+	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -368,6 +444,18 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 						}
 						rte_pktmbuf_free(mbuf);
 					}
+					if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+						if (mbuf->data_off < DEFAULT_TX_ICEOF +
+							sizeof(struct dpaa_eth_parse_results_t)) {
+							DPAA_TX_LOG(DEBUG, "Checksum offload Err: "
+								"Not enough Headroom "
+								"space for correct Checksum offload."
+								"So Calculating checksum in Software.");
+							dpaa_checksum(mbuf);
+						} else
+							dpaa_checksum_offload(mbuf, &fd_arr[loop],
+								mbuf->buf_addr);
+					}
 				} else {
 					DPAA_PMD_DEBUG("Number of Segments not supported");
 					/* Set frames_to_send & nb_bufs so that
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index f688934..b1c292b 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -41,6 +41,22 @@
 
 /* IC offsets from buffer header address */
 #define DEFAULT_RX_ICEOF	16
+#define DEFAULT_TX_ICEOF	16
+
+/*
+ * Values for the L3R field of the FM Parse Results
+ */
+/* L3 Type field: First IP Present IPv4 */
+#define DPAA_L3_PARSE_RESULT_IPV4 0x80
+/* L3 Type field: First IP Present IPv6 */
+#define DPAA_L3_PARSE_RESULT_IPV6	0x40
+/* Values for the L4R field of the FM Parse Results
+ * See Section 8.8.4.7.20 - L4 HXS - L4 Results from DPAA-Rev2 Reference Manual.
+ */
+/* L4 Type field: UDP */
+#define DPAA_L4_PARSE_RESULT_UDP	0x40
+/* L4 Type field: TCP */
+#define DPAA_L4_PARSE_RESULT_TCP	0x20
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/** <Maximum number of frames to be dequeued in a single rx call*/
@@ -225,6 +241,9 @@ struct annotations_t {
 #define GET_RX_PRS(_buf) \
 	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
 
+#define GET_TX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_TX_ICEOF)
+
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread
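On the application side, the offload is requested per packet through the
mbuf; a sketch for an untagged IPv4/TCP frame (the helper name is
illustrative):

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>

    static void request_ipv4_tcp_csum(struct rte_mbuf *m)
    {
        /* l2_len/l3_len are what dpaa_checksum_offload() copies into
         * the ip_off/l4_off fields of the FMan parse results.
         */
        m->l2_len = sizeof(struct ether_hdr);
        m->l3_len = sizeof(struct ipv4_hdr);
        m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
    }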

* [PATCH v2 38/40] net/dpaa: add support for Scattered Rx
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (36 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 37/40] net/dpaa: add support for checksum offload Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 39/40] net/dpaa: add packet dump for debugging Shreyansh Jain
                     ` (3 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_rxtx.c      | 162 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      |   9 +++
 3 files changed, 172 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 23626c0..0e7956c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -8,6 +8,7 @@ Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
+Scattered Rx         = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 9afc722..5bf4d68 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -276,18 +276,83 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
 	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
 }
 
+struct rte_mbuf *
+dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
+{
+	struct pool_info_entry *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+	struct qm_sg_entry *sgt, *sg_temp;
+	void *vaddr, *sg_vaddr;
+	int i = 0;
+	uint8_t fd_offset = fd->offset;
+
+	DPAA_RX_LOG(DEBUG, "Received an SG frame");
+
+	vaddr = rte_dpaa_mem_ptov(qm_fd_addr(fd));
+	if (!vaddr) {
+		DPAA_PMD_ERR("unable to convert physical address");
+		return NULL;
+	}
+	sgt = vaddr + fd_offset;
+	sg_temp = &sgt[i++];
+	hw_sg_to_cpu(sg_temp);
+	temp = (struct rte_mbuf *)((char *)vaddr - bp_info->meta_data_size);
+	sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+
+	first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						bp_info->meta_data_size);
+	first_seg->data_off = sg_temp->offset;
+	first_seg->data_len = sg_temp->length;
+	first_seg->pkt_len = sg_temp->length;
+	rte_mbuf_refcnt_set(first_seg, 1);
+
+	first_seg->port = ifid;
+	first_seg->nb_segs = 1;
+	first_seg->ol_flags = 0;
+	prev_seg = first_seg;
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		hw_sg_to_cpu(sg_temp);
+		sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+		cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						      bp_info->meta_data_size);
+		cur_seg->data_off = sg_temp->offset;
+		cur_seg->data_len = sg_temp->length;
+		first_seg->pkt_len += sg_temp->length;
+		first_seg->nb_segs += 1;
+		rte_mbuf_refcnt_set(cur_seg, 1);
+		prev_seg->next = cur_seg;
+		if (sg_temp->final) {
+			cur_seg->next = NULL;
+			break;
+		} else {
+			prev_seg = cur_seg;
+		}
+	}
+
+	dpaa_eth_packet_info(first_seg, (uint64_t)vaddr);
+	rte_pktmbuf_free_seg(temp);
+
+	return first_seg;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
 	struct pool_info_entry *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
 	struct rte_mbuf *mbuf;
 	void *ptr;
+	uint8_t format =
+		(fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
 	uint16_t offset =
 		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
 	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
 
 	DPAA_RX_LOG(DEBUG, " FD--->MBUF");
 
+	if (unlikely(format == qm_fd_sg))
+		return dpaa_eth_sg_to_mbuf(fd, ifid);
+
 	/* Ignoring case when format != qm_fd_contig */
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible incase
@@ -390,6 +455,95 @@ static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
 	return dpaa_mbuf;
 }
 
+int
+dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+		struct qm_fd *fd,
+		uint32_t bpid)
+{
+	struct rte_mbuf *cur_seg = mbuf, *prev_seg = NULL;
+	struct pool_info_entry *bp_info = DPAA_BPID_TO_POOL_INFO(bpid);
+	struct rte_mbuf *temp, *mi;
+	struct qm_sg_entry *sg_temp, *sgt;
+	int i = 0;
+
+	DPAA_TX_LOG(DEBUG, "Creating SG FD to transmit");
+
+	temp = rte_pktmbuf_alloc(bp_info->mp);
+	if (!temp) {
+		DPAA_PMD_ERR("Failure in allocation mbuf");
+		return -1;
+	}
+	if (temp->buf_len < ((mbuf->nb_segs * sizeof(struct qm_sg_entry))
+				+ temp->data_off)) {
+		DPAA_PMD_ERR("Insufficient space in mbuf for SG entries");
+		return -1;
+	}
+
+	fd->cmd = 0;
+	fd->opaque_addr = 0;
+
+	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+		if (temp->data_off < DEFAULT_TX_ICEOF
+			+ sizeof(struct dpaa_eth_parse_results_t))
+			temp->data_off = DEFAULT_TX_ICEOF
+				+ sizeof(struct dpaa_eth_parse_results_t);
+		dcbz_64(temp->buf_addr);
+		dpaa_checksum_offload(mbuf, fd, temp->buf_addr);
+	}
+
+	sgt = temp->buf_addr + temp->data_off;
+	fd->format = QM_FD_SG;
+	fd->addr = temp->buf_physaddr;
+	fd->offset = temp->data_off;
+	fd->bpid = bpid;
+	fd->length20 = mbuf->pkt_len;
+
+
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		sg_temp->opaque = 0;
+		sg_temp->val = 0;
+		sg_temp->addr = cur_seg->buf_physaddr;
+		sg_temp->offset = cur_seg->data_off;
+		sg_temp->length = cur_seg->data_len;
+		if (RTE_MBUF_DIRECT(cur_seg)) {
+			if (rte_mbuf_refcnt_read(cur_seg) > 1) {
+				/*If refcnt > 1, invalid bpid is set to ensure
+				 * buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+				rte_mbuf_refcnt_update(cur_seg, -1);
+			} else
+				sg_temp->bpid =
+					DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
+			cur_seg = cur_seg->next;
+		} else {
+			/* Get owner MBUF from indirect buffer */
+			mi = rte_mbuf_from_indirect(cur_seg);
+			if (rte_mbuf_refcnt_read(mi) > 1) {
+				/*If refcnt > 1, invalid bpid is set to ensure
+				 * owner buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+			} else {
+				sg_temp->bpid = DPAA_MEMPOOL_TO_BPID(mi->pool);
+				rte_mbuf_refcnt_update(mi, 1);
+			}
+			prev_seg = cur_seg;
+			cur_seg = cur_seg->next;
+			prev_seg->next = NULL;
+			rte_pktmbuf_free(prev_seg);
+		}
+		if (cur_seg == NULL) {
+			sg_temp->final = 1;
+			cpu_to_hw_sg(sg_temp);
+			break;
+		}
+		cpu_to_hw_sg(sg_temp);
+	}
+	return 0;
+}
+
 uint16_t
 dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 {
@@ -456,6 +610,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 							dpaa_checksum_offload(mbuf, &fd_arr[loop],
 								mbuf->buf_addr);
 					}
+				} else if (mbuf->nb_segs > 1 && mbuf->nb_segs <= DPAA_SGT_MAX_ENTRIES) {
+					if (dpaa_eth_mbuf_to_sg_fd(mbuf,
+						&fd_arr[loop], bp_info->bpid)) {
+						DPAA_PMD_DEBUG("Unable to create Scatter Gather FD");
+						frames_to_send = loop;
+						nb_bufs = loop;
+						goto send_pkts;
+					}
 				} else {
 					DPAA_PMD_DEBUG("Number of Segments not supported");
 					/* Set frames_to_send & nb_bufs so that
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index b1c292b..afc33e2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -58,6 +58,8 @@
 /* L4 Type field: TCP */
 #define DPAA_L4_PARSE_RESULT_TCP	0x20
 
+#define DPAA_SGT_MAX_ENTRIES 16 /* maximum number of entries in SG Table */
+
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/** <Maximum number of frames to be dequeued in a single rx call*/
 
@@ -251,4 +253,11 @@ uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
 			      struct rte_mbuf **bufs __rte_unused,
 			      uint16_t nb_bufs __rte_unused);
+
+struct rte_mbuf *dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid);
+
+int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+			   struct qm_fd *fd,
+			   uint32_t bpid);
+
 #endif
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread
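No per-packet action is needed from the application on the Rx side; the
generic knob for multi-segment receive is set at configure time, roughly
as below (max_rx_pkt_len is illustrative; this PMD converts SG frames in
dpaa_eth_fd_to_mbuf() based on the FD format):

    #include <rte_ethdev.h>

    static const struct rte_eth_conf sg_conf = {
        .rxmode = {
            .max_rx_pkt_len = 9000,
            .jumbo_frame    = 1,
            .enable_scatter = 1,
        },
    };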

* [PATCH v2 39/40] net/dpaa: add packet dump for debugging
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (37 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 38/40] net/dpaa: add support for Scattered Rx Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-04 14:44   ` [PATCH v2 40/40] net/dpaa: support for firmware version get API Shreyansh Jain
                     ` (2 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc |  2 ++
 drivers/net/dpaa/dpaa_ethdev.c           | 42 ++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.c             | 27 +++++++++++++++++++-
 3 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 87c0d26..40b2804 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -51,6 +51,8 @@ CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY=n
+CONFIG_RTE_LIBRTE_DPAA_CHECKING=n
 
 # NXP DPAA Mempool
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index b45dd0a..5befd72 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -624,6 +624,39 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
 	return ret;
 }
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
+static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "reserve debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return -EINVAL;
+	}
+	/* "map" this Rx FQ to one of the interfaces Tx FQID */
+	PMD_DRV_LOG(DEBUG, "creating debug fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "create debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+	opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		PMD_DRV_LOG(ERR, "init debug fqid %d failed with ret: %d",
+			    fqid, ret);
+	return ret;
+}
+#endif
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -696,6 +729,15 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	}
 	dpaa_intf->nb_tx_queues = num_cores;
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+#endif
+
 	DPAA_PMD_DEBUG("All frame queues created");
 
 	/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 5bf4d68..1e52f0e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,31 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+#if (defined RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY)
+void dpaa_display_frame(const struct qm_fd *fd)
+{
+	int ii;
+	char *ptr;
+
+	printf("%s::bpid %x addr %08x%08x, format %d off %d, len %d stat %x\n",
+	       __func__, fd->bpid, fd->addr_hi, fd->addr_lo, fd->format,
+		fd->offset, fd->length20, fd->status);
+
+	ptr = (char *)rte_dpaa_mem_ptov(fd->addr);
+	ptr += fd->offset;
+	printf("%02x ", *ptr);
+	for (ii = 1; ii < fd->length20; ii++) {
+		ptr++;
+		printf("%02x ", *ptr);
+		if ((ii % 16) == 0)
+			printf("\n");
+	}
+	printf("\n");
+}
+#else
+#define dpaa_display_frame(a)
+#endif
+
 static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
 				     uint64_t prs __rte_unused)
 {
@@ -354,6 +379,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 		return dpaa_eth_sg_to_mbuf(fd, ifid);
 
 	/* Ignoring case when format != qm_fd_contig */
+	dpaa_display_frame(fd);
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible incase
 	 * of a corrupted packet
@@ -498,7 +524,6 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 	fd->bpid = bpid;
 	fd->length20 = mbuf->pkt_len;
 
-
 	while (i < DPAA_SGT_MAX_ENTRIES) {
 		sg_temp = &sgt[i++];
 		sg_temp->opaque = 0;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v2 40/40] net/dpaa: support for firmware version get API
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (38 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 39/40] net/dpaa: add packet dump for debugging Shreyansh Jain
@ 2017-07-04 14:44   ` Shreyansh Jain
  2017-07-05  0:13   ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Thomas Monjalon
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 0e7956c..09b9bd9 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -18,5 +18,6 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+FW version           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5befd72..b99b964 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -163,6 +163,41 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int
+dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
+		     char *fw_version,
+		     size_t fw_size)
+{
+	int ret;
+	FILE *svr_file = NULL;
+	unsigned int svr_ver = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	svr_file = fopen("/sys/devices/soc0/soc_id", "r");
+	if (!svr_file) {
+		DPAA_PMD_ERR("Unable to open SoC device");
+		return -ENOTSUP; /* Not supported on this infra */
+	}
+
+	ret = fscanf(svr_file, "svr:%x", &svr_ver);
+	fclose(svr_file);
+	if (ret <= 0) {
+		DPAA_PMD_ERR("Unable to read SoC device");
+		return -ENOTSUP; /* Not supported on this infra */
+	}
+
+	ret = snprintf(fw_version, fw_size,
+		       "svr:%x-fman-v%x",
+		       svr_ver,
+		       fman_ip_rev);
+
+	ret += 1; /* add the size of '\0' */
+	if (fw_size < (uint32_t)ret)
+		return ret;
+	else
+		return 0;
+}
+
 static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 			      struct rte_eth_dev_info *dev_info)
 {
@@ -518,6 +553,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
 	.mac_addr_set		  = dpaa_dev_set_mac_addr,
 
+	.fw_version_get		  = dpaa_fw_version_get,
 };
 
 static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 367+ messages in thread
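A usage sketch for the new op (helper name and buffer size are
illustrative):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void show_fw(uint8_t port)
    {
        char fw[64];

        /* dpaa_fw_version_get() returns the needed size (including
         * the '\0') when fw[] is too small, and 0 on success.
         */
        if (rte_eth_dev_fw_version_get(port, fw, sizeof(fw)) == 0)
            printf("firmware: %s\n", fw);
    }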

* Re: [PATCH 36/38] net/dpaa: add support for checksum offload
  2017-06-28 15:50   ` Ferruh Yigit
@ 2017-07-04 14:48     ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:48 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

Hello Ferruh,

On Wednesday 28 June 2017 09:20 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>> @@ -363,6 +439,18 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>>  						}
>>  						rte_pktmbuf_free(mbuf);
>>  					}
>> +					if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
>> +						if (mbuf->data_off < DEFAULT_TX_ICEOF +
>> +							sizeof(struct dpaa_eth_parse_results_t)) {
>> +							PMD_DRV_LOG(DEBUG, "Checksum offload Err: "
>> +								"Not enough Headroom "
>> +								"space for correct Checksum offload."
>> +								"So Calculating checksum in Software.");
>> +							dpaa_checksum(mbuf);
>> +						} else
>> +							dpaa_checksum_offload(mbuf, &fd_arr[loop],
>> +								mbuf->buf_addr);
>> +					}
> 
> There is a tx_pkt_prepare() dev_ops.
> Does it make sense to move this calculations to that function?

I did have a look at this before sending the v2.
In case of the DPAA driver, it is not possible to segregate the preparation phase from the transmission phase.
Further, there are still applications which don't call the prep function - in those cases, the I/O wouldn't happen.
And making an internal call to prep (on the basis of some (!prep) check) would impact performance.

> 
>>  				} else {
>>  					PMD_DRV_LOG(DEBUG, "Number of Segments not supported");
>>  					/* Set frames_to_send & nb_bufs so that
> 
> <...>
> 

-
Shreyansh

^ permalink raw reply	[flat|nested] 367+ messages in thread
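For context, the two-stage transmit pattern being weighed above looks
roughly like this from the application side (the helper name is
illustrative); as noted in the reply, DPAA keeps both stages inside
tx_burst:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static uint16_t xmit(uint8_t port, uint16_t queue,
                         struct rte_mbuf **pkts, uint16_t n)
    {
        /* Stage 1 (optional - not all applications call it):
         * validate/fix up offload metadata before transmission.
         */
        uint16_t nb_ok = rte_eth_tx_prepare(port, queue, pkts, n);

        /* Stage 2: the actual enqueue to the hardware */
        return rte_eth_tx_burst(port, queue, pkts, nb_ok);
    }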

* Re: [PATCH 34/38] net/dpaa: add support for hashed RSS
  2017-06-30 11:39       ` Ferruh Yigit
@ 2017-07-04 14:49         ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:49 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Friday 30 June 2017 05:09 PM, Ferruh Yigit wrote:
> On 6/30/2017 11:31 AM, Shreyansh Jain wrote:
>> On Wednesday 28 June 2017 09:18 PM, Ferruh Yigit wrote:
>>> On 6/16/2017 6:41 AM, Shreyansh Jain wrote:
>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>
>>> Just to confirm:
>>>
>>> Is no HW configuration required to enable RSS?
>>> Is HW updates mbuf->rss automatically, without driver involvement?
>>>
>>> <...>
>>
>> For the DPAA platform, the configuration of queues and RSS on them is done using an external tool, just before executing the DPDK application. This is part of application startup.
>> Though, I did notice that I have not documented this explicitly in dpaa.rst. I will correct the documentation.
> 
> For second question, I have seen next patch updates the mbuf->rss,
> perhaps "RSS hash" support can be claimed with that patch.

I have fixed this in v2.

> 
>>
>>>
>>>>  Promiscuous mode     = Y
>>>>  Allmulticast mode    = Y
>>>>  Unicast MAC filter   = Y
>>>> +RSS hash             = Y
>>>>  Flow control         = Y
>>>>  Basic stats          = Y
>>>>  ARMv8                = Y
>>>
>>> <...>
>>>
>>
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup
  2017-06-29 15:41       ` Ferruh Yigit
  2017-06-30 11:48         ` Shreyansh Jain
@ 2017-07-04 14:50         ` Shreyansh Jain
  1 sibling, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-04 14:50 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Thursday 29 June 2017 09:11 PM, Ferruh Yigit wrote:
> On 6/29/2017 3:55 PM, Shreyansh Jain wrote:
>> On Wednesday 28 June 2017 09:15 PM, Ferruh Yigit wrote:
>>> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>> ---
> 
> <...>
> 
>>>
>>>> +
>>>> +	/* Initialize Rx FQ's */
>>>> +	if (getenv("DPAA_NUM_RX_QUEUES"))
>>>
>>> I think this was disscussed before, should a PMD get config options from
>>> enviroment variable? Altough this works, I am for a more explicit
>>> method, like dev_args.
>>
>> Well, I do remember that discussion and still continued with it because
>> 1) I am not done with the dev_args changes and 2) I think this is less
>> intrusive, as it is specific to DPAA without the need to expand it
>> towards dev_args (and impact the application arg list).
>> Do you think this is a no-go? If so, I will fix this.
> 
> Providing an argument looks clearer to me, it is more visible, and, for
> example, if multiple processes are run, environment variables can be
> confusing.
> 
> But this is not no-go, I would like to hear other comments. Also I
> recognized that mlx and ark drivers are also using this.
> 
> But however this is implemented, this should be clearly documented,
> right now this is a hidden config.

I have updated the documentation to show this environment option, in v2.
Thanks for highlighting.

> 
> <...>
>>>> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
>>>> +			      struct rte_mbuf **bufs __rte_unused,
>>>> +		uint16_t nb_bufs __rte_unused)
>>>> +{
>>>> +	PMD_TX_LOG(DEBUG, "Drop all packets");
>>>
>>> Should mbufs freed here?
>>>
>>>> +
>>>> +	/* Drop all incoming packets. No need to free packets here
>>>> +	 * because the rte_eth f/w frees up the packets through tx_buffer
>>>> +	 * callback in case this functions returns count less than nb_bufs
>>>> +	 */
>>
>> Ah, actually I was banking on the logic that, in case a driver doesn't
>> release memory, the API caller (on getting less than nb_bufs) would do
>> that. This is the case for a stopped interface.
>>
>> But, I agree, this is dirty fix. I will change this.
> 
> I missed your logic here indeed, this looks a valid option too, its your
> call.
> 
>>
>>>> +	return 0;
>>>> +}
>>>
>>> <...>
>>>
>>>
>>
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread
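As this thread notes, DPAA_NUM_RX_QUEUES is read while the bus scans and
initialises devices, so it must be in the environment before EAL init; a
launcher can also set it programmatically (the count of 4 is illustrative):

    #include <stdlib.h>
    #include <rte_eal.h>

    int main(int argc, char **argv)
    {
        /* Consumed via getenv("DPAA_NUM_RX_QUEUES") during the bus
         * probe triggered by rte_eal_init()
         */
        setenv("DPAA_NUM_RX_QUEUES", "4", 1);
        return rte_eal_init(argc, argv) < 0 ? -1 : 0;
    }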

* Re: [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (39 preceding siblings ...)
  2017-07-04 14:44   ` [PATCH v2 40/40] net/dpaa: support for firmware version get API Shreyansh Jain
@ 2017-07-05  0:13   ` Thomas Monjalon
  2017-07-05  4:38     ` Shreyansh Jain
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
  41 siblings, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-07-05  0:13 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

Hi Shreyansh,

04/07/2017 16:43, Shreyansh Jain:
> This patchset introduces the following:
> 1. DPAA Bus (drivers/bus/dpaa)
>  The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
>  or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
>  The patches introduce necessary layers to expose the DPAA hardware
>  blocks for interfacing with RTE framework.
> 
> 2. DPAA Mempool (drivers/mempool/dpaa)
>  BMan, or Buffer Manager, block of DPAA features a hardware offloaded
>  mempool. These patches add support for a driver to manage the BMan
>  block. This driver allows for mempool creation, deletion, buffer
>  acquire and release, as per the RTE APIs.
> 
> 3. DPAA PMD (drivers/net/dpaa)
>  The Poll Mode Driver for DPAA NIC Interfaces.

There is so much to review in this series!
(and not many reviews)
I hope you were not expecting a quick integration.

Please could you start checking what checkpatch is saying?

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD
  2017-07-05  0:13   ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Thomas Monjalon
@ 2017-07-05  4:38     ` Shreyansh Jain
  2017-07-05  6:28       ` Thomas Monjalon
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-07-05  4:38 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, hemant.agrawal

Hello Thomas,

On Wednesday 05 July 2017 05:43 AM, Thomas Monjalon wrote:
> Hi Shreyansh,
> 
> 04/07/2017 16:43, Shreyansh Jain:
>> This patchset introduces the following:
>> 1. DPAA Bus (drivers/bus/dpaa)
>>  The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
>>  or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
>>  The patches introduce necessary layers to expose the DPAA hardware
>>  blocks for interfacing with RTE framework.
>>
>> 2. DPAA Mempool (drivers/mempool/dpaa)
>>  BMan, or Buffer Manager, block of DPAA features a hardware offloaded
>>  mempool. These patches add support for a driver to manage the BMan
>>  block. This driver allows for mempool creation, deletion, buffer
>>  acquire and release, as per the RTE APIs.
>>
>> 3. DPAA PMD (drivers/net/dpaa)
>>  The Poll Mode Driver for DPAA NIC Interfaces.
> 
> There is so much to review in this series!
> (and not many reviews)
> I hope you were not expecting a quick integration.

I understand this.
Ferruh has been putting in quite an effort - but yes, other than that, there has been a lack of external review.
I am just expecting inputs - if there are none, then probably that would be the integration point (other than the continuous improvements we do internally), or the patches might stagnate.

But just a random thought off the top of my head (which might help me as a reviewer): how does one review integral/infrastructure-related code blocks without deep insight? ethdev/rxtx are relatively much easier and more relevant for reviewers - but not the low-level blocks. In the case of DPAA, that (the core routines) is a huge chunk. And if there are not many reviews (because of lack of interest, or whatever reason), what should an author do (besides gently requesting others, and doing some himself/herself)?

> 
> Please could you start checking what checkpatch is saying?
> 

I have seen those - and ignored them for a while. They are related to complex statements defined as macros. Unfortunately, in some places, I can't avoid them.
Otherwise, there are some which require code restructuring (deep indentation), which I plan to do shortly.

-
Shreyansh

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD
  2017-07-05  4:38     ` Shreyansh Jain
@ 2017-07-05  6:28       ` Thomas Monjalon
  0 siblings, 0 replies; 367+ messages in thread
From: Thomas Monjalon @ 2017-07-05  6:28 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

05/07/2017 06:38, Shreyansh Jain:
> Hello Thomas,
> 
> On Wednesday 05 July 2017 05:43 AM, Thomas Monjalon wrote:
> > Hi Shreyansh,
> > 
> > 04/07/2017 16:43, Shreyansh Jain:
> >> This patchset introduces the following:
> >> 1. DPAA Bus (drivers/bus/dpaa)
> >>  The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
> >>  or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
> >>  The patches introduce necessary layers to expose the DPAA hardware
> >>  blocks for interfacing with RTE framework.
> >>
> >> 2. DPAA Mempool (drivers/mempool/dpaa)
> >>  BMan, or Buffer Manager, block of DPAA features a hardware offloaded
> >>  mempool. These patches add support for a driver to manage the BMan
> >>  block. This driver allows for mempool creation, deletion, buffer
> >>  acquire and release, as per the RTE APIs.
> >>
> >> 3. DPAA PMD (drivers/net/dpaa)
> >>  The Poll Mode Driver for DPAA NIC Interfaces.
> > 
> > There is so much to review in this series!
> > (and not many reviews)
> > I hope you were not expecting a quick integration.
> 
> I understand this.
> Ferruh has been putting in quite an effort - but yes, other than that, there has been a lack of external review.
> I am just expecting inputs - if there are none, then probably that would be the integration point (other than the continuous improvements we do internally), or the patches might stagnate.
> 
> But just a random thought off the top of my head (which might help me as a reviewer): how does one review integral/infrastructure-related code blocks without deep insight? ethdev/rxtx are relatively much easier and more relevant for reviewers - but not the low-level blocks. In the case of DPAA, that (the core routines) is a huge chunk. And if there are not many reviews (because of lack of interest, or whatever reason), what should an author do (besides gently requesting others, and doing some himself/herself)?

I guess nobody will review the low level.
But we can check how it is integrated within the framework.

> > Please could you start checking what checkpatch is saying?
> > 
> 
> I have seen those - and ignored them for a while. They are related to complex statements defined as macros. Unfortunately, in some places, I can't avoid them.
> Otherwise, there are some which require code restructuring (deep indentation), which I plan to do shortly.

Thanks
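
For reference, a typical way to run the checkpatch wrapper over a patch in
a DPDK tree of this era (the script path and the patch filename here are
assumptions; the wrapper expects DPDK_CHECKPATCH_PATH to point at the
kernel's checkpatch.pl):

	$ export DPDK_CHECKPATCH_PATH=/path/to/linux/scripts/checkpatch.pl
	$ ./devtools/checkpatches.sh 0001-bus-dpaa-introduce-NXP-DPAA-Bus-driver-skeleton.patch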

^ permalink raw reply	[flat|nested] 367+ messages in thread

* [PATCH v3 00/40] Introduce NXP DPAA Bus, Mempool and PMD
  2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                     ` (40 preceding siblings ...)
  2017-07-05  0:13   ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Thomas Monjalon
@ 2017-08-23 14:11   ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
                       ` (40 more replies)
  41 siblings, 41 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Change Log:
============

v3:
 - Rebasing over 17.11-rc0 (85238f50)
 - Checkpatch fixes
   (There are still 2 errors which I think are false positives)
 - Implement rte_bus.find_device() interface
 - Various other minor updates/cleanups

v2:
 - Addressed various review comments from Ferruh; broadly:
  -) Logging has been changed to use rte_log_register
  -) Logs across Bus, Mempool and PMD updated
  -) Fixed incorrect feature claimed in dpaa.ini
 - Removed 24/40/48 bit swapping macro from EAL.
   These are defined in dpaa/bus now (compat.h)
 - Added missing memory cleanup operation
 - Updated documentation with some missing information

Introduction
============

RFC was posted here -> [R3]
V2 was posted here -> [R5]

This patch series adds NXP's QorIQ-Layerscape DPAA Architecture based
bus driver, mempool driver and PMD. This version of driver supports NXP
LS1043A/LS1023A, LS1046A/LS1026A family of network SoCs. [R1]

DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
components designed for high-speed network packet processing. This
architecture provides the infrastructure to support simplified sharing of
networking interfaces and accelerators by multiple CPU cores, and the
accelerators themselves.

This patchset introduces the following:
1. DPAA Bus (drivers/bus/dpaa)
 The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
 or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
 The patches introduce necessary layers to expose the DPAA hardware
 blocks for interfacing with RTE framework.

2. DPAA Mempool (drivers/mempool/dpaa)
 BMan, or Buffer Manager, block of DPAA features a hardware offloaded
 mempool. These patches add support for a driver to manage the BMan
 block. This driver allows for mempool creation, deletion, buffer
 acquire and release, as per the RTE APIs.

3. DPAA PMD (drivers/net/dpaa)
 The Poll Mode Driver for DPAA NIC Interfaces.

Patch Layout
============

01: Add DPAA SoC build configuration
02~16: Add DPAA Bus support and features, incrementally
17: Add Documentation
18~21: Add DPAA Mempool support
22~40: Add PMD and its various features, incrementally

References
==========

[R1] http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-layerscape-arm-processors:QORIQ-ARM
[R2] http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf
[R3] RFC: http://dpdk.org/ml/archives/dev/2017-May/066675.html
[R4] v1: http://dpdk.org/ml/archives/dev/2017-June/068020.html
[R5] v2: http://dpdk.org/ml/archives/dev/2017-July/070113.html

Hemant Agrawal (2):
  bus/dpaa: add compatibility and helper macros
  net/dpaa: support for firmware version get API

Shreyansh Jain (38):
  config: add NXP DPAA SoC build configuration
  bus/dpaa: introduce NXP DPAA Bus driver skeleton
  bus/dpaa: add OF parser for device scanning
  bus/dpaa: introducing FMan configurations
  bus/dpaa: add FMan hardware operations
  bus/dpaa: enable DPAA IOCTL portal driver
  bus/dpaa: add layer for interrupt emulation using pthread
  bus/dpaa: add routines for managing a RB tree
  bus/dpaa: add QMAN interface driver
  bus/dpaa: add QMan driver core routines
  bus/dpaa: add BMAN driver core
  bus/dpaa: add support for FMAN frame queue lookup
  bus/dpaa: add BMan hardware interfaces
  bus/dpaa: add fman flow control threshold setting
  bus/dpaa: integrate DPAA Bus with hardware blocks
  doc: add NXP DPAA PMD documentation
  bus/dpaa: add DPAA mempool logging macros
  mempool/dpaa: add support for NXP DPAA Mempool
  drivers: enable compilation of DPAA Mempool driver
  maintainers: claim ownership of DPAA Mempool driver
  bus/dpaa: add DPAA PMD logging macros
  net/dpaa: add NXP DPAA PMD driver skeleton
  config: enable NXP DPAA PMD compilation
  net/dpaa: add support for Tx and Rx queue setup
  net/dpaa: add support for MTU update
  net/dpaa: add support for jumbo frames
  net/dpaa: add support for link status update
  net/dpaa: add support for device info and speed capability
  net/dpaa: add support for promiscuous toggle
  net/dpaa: add support for multicast toggle
  net/dpaa: add support for MAC address update
  net/dpaa: add support for basic stats
  net/dpaa: add support for flow control
  net/dpaa: add support for hashed RSS
  net/dpaa: add support for packet type parsing
  net/dpaa: add support for checksum offload
  net/dpaa: add support for Scattered Rx
  net/dpaa: add packet dump for debugging

 MAINTAINERS                                       |    9 +
 config/common_base                                |    5 +
 config/defconfig_arm64-dpaa-linuxapp-gcc          |   64 +
 doc/guides/nics/dpaa.rst                          |  374 +++
 doc/guides/nics/features/dpaa.ini                 |   23 +
 doc/guides/nics/index.rst                         |    1 +
 drivers/bus/Makefile                              |    3 +
 drivers/bus/dpaa/Makefile                         |   83 +
 drivers/bus/dpaa/base/fman/fman.c                 |  559 +++++
 drivers/bus/dpaa/base/fman/fman_hw.c              |  634 ++++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c         |  214 ++
 drivers/bus/dpaa/base/fman/of.c                   |  576 +++++
 drivers/bus/dpaa/base/qbman/bman.c                |  394 ++++
 drivers/bus/dpaa/base/qbman/bman.h                |  550 +++++
 drivers/bus/dpaa/base/qbman/bman_driver.c         |  323 +++
 drivers/bus/dpaa/base/qbman/bman_priv.h           |  125 ++
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c          |  104 +
 drivers/bus/dpaa/base/qbman/dpaa_sys.c            |  136 ++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h            |   65 +
 drivers/bus/dpaa/base/qbman/process.c             |  331 +++
 drivers/bus/dpaa/base/qbman/qman.c                | 2497 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h                |  888 ++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c         |  288 +++
 drivers/bus/dpaa/base/qbman/qman_priv.h           |  314 +++
 drivers/bus/dpaa/dpaa_bus.c                       |  465 ++++
 drivers/bus/dpaa/include/compat.h                 |  389 ++++
 drivers/bus/dpaa/include/dpaa_bits.h              |   65 +
 drivers/bus/dpaa/include/dpaa_list.h              |  101 +
 drivers/bus/dpaa/include/dpaa_rbtree.h            |  143 ++
 drivers/bus/dpaa/include/fman.h                   |  459 ++++
 drivers/bus/dpaa/include/fsl_bman.h               |  375 ++++
 drivers/bus/dpaa/include/fsl_fman.h               |  189 ++
 drivers/bus/dpaa/include/fsl_fman_crc64.h         |  263 +++
 drivers/bus/dpaa/include/fsl_qman.h               | 2021 +++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h                |  107 +
 drivers/bus/dpaa/include/netcfg.h                 |   96 +
 drivers/bus/dpaa/include/of.h                     |  190 ++
 drivers/bus/dpaa/include/process.h                |  107 +
 drivers/bus/dpaa/rte_bus_dpaa_version.map         |   46 +
 drivers/bus/dpaa/rte_dpaa_bus.h                   |  170 ++
 drivers/bus/dpaa/rte_dpaa_logs.h                  |  130 ++
 drivers/mempool/Makefile                          |    2 +
 drivers/mempool/dpaa/Makefile                     |   64 +
 drivers/mempool/dpaa/dpaa_mempool.c               |  276 +++
 drivers/mempool/dpaa/dpaa_mempool.h               |   77 +
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |    6 +
 drivers/net/Makefile                              |    2 +
 drivers/net/dpaa/Makefile                         |   67 +
 drivers/net/dpaa/dpaa_ethdev.c                    |  970 ++++++++
 drivers/net/dpaa/dpaa_ethdev.h                    |  140 ++
 drivers/net/dpaa/dpaa_rxtx.c                      |  759 +++++++
 drivers/net/dpaa/dpaa_rxtx.h                      |  295 +++
 drivers/net/dpaa/rte_pmd_dpaa_version.map         |    4 +
 mk/machine/dpaa/rte.vars.mk                       |   61 +
 mk/rte.app.mk                                     |    6 +
 55 files changed, 16605 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h
 create mode 100644 drivers/bus/dpaa/include/of.h
 create mode 100644 drivers/bus/dpaa/include/process.h
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map
 create mode 100644 mk/machine/dpaa/rte.vars.mk

-- 
2.9.3
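
To illustrate how the hardware-offloaded mempool described in this cover
letter is exercised, a minimal application-side sketch (illustrative only;
the pool name and sizes are placeholders, and includes/error paths are
trimmed). rte_pktmbuf_pool_create() goes through the configured mempool
ops, so on a DPAA platform with the BMan-backed ops as the default, the
pool below would be hardware managed:

	struct rte_mempool *mp;

	/* Buffers come from BMan when the DPAA mempool ops are the default */
	mp = rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE,
				     rte_socket_id());
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");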

^ permalink raw reply	[flat|nested] 367+ messages in thread

* [PATCH v3 01/40] config: add NXP DPAA SoC build configuration
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
                       ` (39 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch adds the skeleton build configuration for the DPAA platform.

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc | 39 ++++++++++++++++++++
 mk/machine/dpaa/rte.vars.mk              | 61 ++++++++++++++++++++++++++++++++
 2 files changed, 100 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 mk/machine/dpaa/rte.vars.mk

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
new file mode 100644
index 0000000..0815026
--- /dev/null
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -0,0 +1,39 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 Freescale Semiconductor, Inc.
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+#include "defconfig_arm64-armv8a-linuxapp-gcc"
+
+# NXP (Freescale) - SoC Architecture with FMAN, QMAN & BMAN support
+CONFIG_RTE_MACHINE="dpaa"
+CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
+CONFIG_RTE_LIBRTE_VHOST_NUMA=n
+CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
diff --git a/mk/machine/dpaa/rte.vars.mk b/mk/machine/dpaa/rte.vars.mk
new file mode 100644
index 0000000..356a6af
--- /dev/null
+++ b/mk/machine/dpaa/rte.vars.mk
@@ -0,0 +1,61 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+#   - can define ARCH variable (overridden by cmdline value)
+#   - can define CROSS variable (overridden by cmdline value)
+#   - define MACHINE_CFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+#   - can define CPU_CFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+MACHINE_CFLAGS += -march=armv8-a+crc
+
+ifdef CONFIG_RTE_ARCH_ARM_TUNE
+MACHINE_CFLAGS += -mtune=$(CONFIG_RTE_ARCH_ARM_TUNE:"%"=%)
+endif
-- 
2.9.3
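
For reference, with this defconfig in place the target can be built with
the make-based build system used at the time (run from the DPDK source
tree; the cross-compiler prefix is an assumption and depends on the
toolchain in use):

	$ make config T=arm64-dpaa-linuxapp-gcc
	$ make CROSS=aarch64-linux-gnu-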

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
                       ` (38 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 MAINTAINERS                               |   5 +
 config/common_base                        |   3 +
 config/defconfig_arm64-dpaa-linuxapp-gcc  |   6 +
 drivers/bus/Makefile                      |   3 +
 drivers/bus/dpaa/Makefile                 |  62 +++++++++
 drivers/bus/dpaa/dpaa_bus.c               | 207 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |   7 +
 drivers/bus/dpaa/rte_dpaa_bus.h           | 164 +++++++++++++++++++++++
 drivers/bus/dpaa/rte_dpaa_logs.h          |  66 ++++++++++
 9 files changed, 523 insertions(+)
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h

diff --git a/MAINTAINERS b/MAINTAINERS
index a0cd75e..6ee20ce 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -405,6 +405,11 @@ F: drivers/net/nfp/
 F: doc/guides/nics/nfp.rst
 F: doc/guides/nics/features/nfp.ini
 
+NXP dpaa
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+M: Shreyansh Jain <shreyansh.jain@nxp.com>
+F: drivers/bus/dpaa/
+
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/config/common_base b/config/common_base
index 5e97a08..2bb2269 100644
--- a/config/common_base
+++ b/config/common_base
@@ -303,6 +303,9 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_MBOX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=n
+
 #
 # Compile NXP DPAA2 FSL-MC Bus
 #
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 0815026..110042c 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -37,3 +37,9 @@ CONFIG_RTE_MACHINE="dpaa"
 CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
 CONFIG_RTE_LIBRTE_VHOST_NUMA=n
 CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
+
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=y
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 0224214..6cb6466 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -32,6 +32,9 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mbuf librte_mempool librte_ring librte_ether
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += dpaa
+DEPDIRS-dpaa = $(core-libs)
+
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 DEPDIRS-fslmc = $(core-libs)
 
diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
new file mode 100644
index 0000000..ef508d3
--- /dev/null
+++ b/drivers/bus/dpaa/Makefile
@@ -0,0 +1,62 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_BUS_DPAA=$(RTE_SDK)/drivers/bus/dpaa
+
+#
+# library name
+#
+LIB = librte_bus_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+
+# versioning export map
+EXPORT_MAP := rte_bus_dpaa_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	dpaa_bus.c
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
new file mode 100644
index 0000000..cc343b3
--- /dev/null
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -0,0 +1,207 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_bus.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+int dpaa_logtype_bus;
+
+struct rte_dpaa_bus rte_dpaa_bus;
+
+static inline void
+dpaa_add_to_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static inline void
+dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static int
+rte_dpaa_bus_scan(void)
+{
+	BUS_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/* register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
+{
+	RTE_VERIFY(driver);
+
+	BUS_INIT_FUNC_TRACE();
+
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = &rte_dpaa_bus;
+}
+
+/* un-register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
+{
+	struct rte_dpaa_bus *dpaa_bus;
+
+	BUS_INIT_FUNC_TRACE();
+
+	dpaa_bus = driver->dpaa_bus;
+
+	TAILQ_REMOVE(&dpaa_bus->driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = NULL;
+}
+
+static int
+rte_dpaa_device_match(struct rte_dpaa_driver *drv,
+		      struct rte_dpaa_device *dev)
+{
+	int ret = -1;
+
+	BUS_INIT_FUNC_TRACE();
+
+	if (!drv || !dev) {
+		DPAA_BUS_DEBUG("Invalid drv or dev received.");
+		return ret;
+	}
+
+	if (drv->drv_type == dev->device_type) {
+		DPAA_BUS_INFO("Device: %s matches for driver: %s",
+			      dev->name, drv->driver.name);
+		ret = 0; /* Found a match */
+	}
+
+	return ret;
+}
+
+static int
+rte_dpaa_bus_probe(void)
+{
+	int ret = -1;
+	struct rte_dpaa_device *dev;
+	struct rte_dpaa_driver *drv;
+
+	BUS_INIT_FUNC_TRACE();
+
+	/* For each registered driver, and device, call the driver->probe */
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		TAILQ_FOREACH(drv, &rte_dpaa_bus.driver_list, next) {
+			ret = rte_dpaa_device_match(drv, dev);
+			if (ret)
+				continue;
+
+			if (!drv->probe)
+				continue;
+
+			ret = drv->probe(drv, dev);
+			if (ret)
+				DPAA_BUS_ERR("Unable to probe");
+			break;
+		}
+	}
+	return 0;
+}
+
+static struct rte_device *
+rte_dpaa_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
+		     const void *data)
+{
+	struct rte_dpaa_device *dev;
+
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		if (start && &dev->device == start) {
+			start = NULL;  /* starting point found */
+			continue;
+		}
+
+		if (cmp(&dev->device, data) == 0)
+			return &dev->device;
+	}
+
+	return NULL;
+}
+
+struct rte_dpaa_bus rte_dpaa_bus = {
+	.bus = {
+		.scan = rte_dpaa_bus_scan,
+		.probe = rte_dpaa_bus_probe,
+		.find_device = rte_dpaa_find_device,
+	},
+	.device_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.device_list),
+	.driver_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.driver_list),
+	.device_count = 0,
+};
+
+RTE_REGISTER_BUS(FSL_DPAA_BUS_NAME, rte_dpaa_bus.bus);
+
+RTE_INIT(dpaa_init_log);
+static void
+dpaa_init_log(void)
+{
+	dpaa_logtype_bus = rte_log_register("bus.dpaa");
+	if (dpaa_logtype_bus >= 0)
+		rte_log_set_level(dpaa_logtype_bus, RTE_LOG_NOTICE);
+}
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
new file mode 100644
index 0000000..d97a009
--- /dev/null
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -0,0 +1,7 @@
+DPDK_17.11 {
+	global:
+
+	rte_dpaa_driver_register;
+	rte_dpaa_driver_unregister;
+
+};
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
new file mode 100644
index 0000000..8a1e192
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -0,0 +1,164 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __RTE_DPAA_BUS_H__
+#define __RTE_DPAA_BUS_H__
+
+#include <rte_bus.h>
+#include <rte_mempool.h>
+
+#define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
+
+#define DEV_TO_DPAA_DEVICE(ptr)	\
+		container_of(ptr, struct rte_dpaa_device, device)
+
+struct rte_dpaa_device;
+struct rte_dpaa_driver;
+
+/* DPAA Device and Driver lists for DPAA bus */
+TAILQ_HEAD(rte_dpaa_device_list, rte_dpaa_device);
+TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
+
+enum rte_dpaa_type {
+	FSL_DPAA_ETH = 1,
+	FSL_DPAA_CRYPTO,
+};
+
+struct rte_dpaa_bus {
+	struct rte_bus bus;
+	struct rte_dpaa_device_list device_list;
+	struct rte_dpaa_driver_list driver_list;
+	int device_count;
+};
+
+struct dpaa_device_id {
+	uint8_t fman_id; /**< Fman interface ID, for ETH type device */
+	uint8_t mac_id; /**< Fman MAC interface ID, for ETH type device */
+	uint16_t dev_id; /**< Device Identifier from DPDK */
+};
+
+struct rte_dpaa_device {
+	TAILQ_ENTRY(rte_dpaa_device) next;
+	struct rte_device device;
+	union {
+		struct rte_eth_dev *eth_dev;
+		struct rte_cryptodev *crypto_dev;
+	};
+	struct rte_dpaa_driver *driver;
+	struct dpaa_device_id id;
+	enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
+	char name[RTE_ETH_NAME_MAX_LEN];
+};
+
+typedef int (*rte_dpaa_probe_t)(struct rte_dpaa_driver *dpaa_drv,
+				struct rte_dpaa_device *dpaa_dev);
+typedef int (*rte_dpaa_remove_t)(struct rte_dpaa_device *dpaa_dev);
+
+struct rte_dpaa_driver {
+	TAILQ_ENTRY(rte_dpaa_driver) next;
+	struct rte_driver driver;
+	struct rte_dpaa_bus *dpaa_bus;
+	enum rte_dpaa_type drv_type;
+	rte_dpaa_probe_t probe;
+	rte_dpaa_remove_t remove;
+};
+
+struct dpaa_portal {
+	uint32_t bman_idx; /**< BMAN Portal ID*/
+	uint32_t qman_idx; /**< QMAN Portal ID*/
+	uint64_t tid; /**< Parent thread ID for this portal */
+};
+
+/* TODO - this is costly, need to write a fast conversion routine */
+static inline void *rte_dpaa_mem_ptov(phys_addr_t paddr)
+{
+	const struct rte_memseg *memseg = rte_eal_get_physmem_layout();
+	int i;
+
+	for (i = 0; i < RTE_MAX_MEMSEG && memseg[i].addr != NULL; i++) {
+		if (paddr >= memseg[i].phys_addr && paddr <
+			memseg[i].phys_addr + memseg[i].len)
+			return (uint8_t *)(memseg[i].addr) +
+			       (paddr - memseg[i].phys_addr);
+	}
+
+	return NULL;
+}
+
+/**
+ * Register a DPAA driver.
+ *
+ * @param driver
+ *   A pointer to a rte_dpaa_driver structure describing the driver
+ *   to be registered.
+ */
+void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
+
+/**
+ * Unregister a DPAA driver.
+ *
+ * @param driver
+ *	A pointer to a rte_dpaa_driver structure describing the driver
+ *	to be unregistered.
+ */
+void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
+
+/**
+ * Initialize a DPAA portal
+ *
+ * @param arg
+ *	Per thread ID
+ *
+ * @return
+ *	0 in case of success, error otherwise
+ */
+int rte_dpaa_portal_init(void *arg);
+
+/**
+ * Cleanup a DPAA Portal
+ */
+void dpaa_portal_finish(void *arg);
+
+/** Helper for DPAA device registration from driver (eth, crypto) instance */
+#define RTE_PMD_REGISTER_DPAA(nm, dpaa_drv) \
+RTE_INIT(dpaainitfn_ ##nm); \
+static void dpaainitfn_ ##nm(void) \
+{\
+	(dpaa_drv).driver.name = RTE_STR(nm);\
+	rte_dpaa_driver_register(&dpaa_drv); \
+} \
+RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_DPAA_BUS_H__ */
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
new file mode 100644
index 0000000..3ca3f9b
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -0,0 +1,66 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA_LOGS_H_
+#define _DPAA_LOGS_H_
+
+#include <rte_log.h>
+
+extern int dpaa_logtype_bus;
+
+#define DPAA_BUS_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define BUS_INIT_FUNC_TRACE() DPAA_BUS_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_BUS
+#define DPAA_BUS_HWWARN(cond, fmt, args...) \
+	do {\
+		if (cond) \
+			DPAA_BUS_LOG(DEBUG, "WARN: " fmt, ##args); \
+	} while (0)
+#define DPAA_BUS_DEBUG(fmt, args...) \
+	DPAA_BUS_LOG(DEBUG, fmt, ## args)
+#else
+#define DPAA_BUS_HWWARN(cond, fmt, args...) do { } while (0)
+#define DPAA_BUS_DEBUG(fmt, args...) do { } while (0)
+#endif
+
+#define DPAA_BUS_INFO(fmt, args...) \
+	DPAA_BUS_LOG(INFO, fmt, ## args)
+#define DPAA_BUS_ERR(fmt, args...) \
+	DPAA_BUS_LOG(ERR, fmt, ## args)
+#define DPAA_BUS_WARN(fmt, args...) \
+	DPAA_BUS_LOG(WARNING, fmt, ## args)
+
+#endif /* _DPAA_LOGS_H_ */
-- 
2.9.3
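
To show how the registration helper introduced above is meant to be used,
a minimal sketch of a driver hooking into the bus (the driver name and the
probe body are placeholders, not the actual PMD code from later patches):

	static int dummy_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
				    struct rte_dpaa_device *dpaa_dev __rte_unused)
	{
		/* A real PMD would set up the ethdev for dpaa_dev here */
		return 0;
	}

	static struct rte_dpaa_driver rte_dummy_dpaa_pmd = {
		.drv_type = FSL_DPAA_ETH,
		.probe = dummy_dpaa_probe,
	};

	/* Registers the driver on the DPAA bus at constructor time */
	RTE_PMD_REGISTER_DPAA(net_dummy_dpaa, rte_dummy_dpaa_pmd);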

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 03/40] bus/dpaa: add compatibility and helper macros
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
                       ` (37 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Linked list, bit operations and compatibility macros.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 v3:
 - Removed checkpatch warning and duplicate PER_CPU macro
---
 drivers/bus/dpaa/include/compat.h    | 389 +++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/dpaa_bits.h |  65 ++++++
 drivers/bus/dpaa/include/dpaa_list.h | 101 +++++++++
 3 files changed, 555 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h

diff --git a/drivers/bus/dpaa/include/compat.h b/drivers/bus/dpaa/include/compat.h
new file mode 100644
index 0000000..a1fd53e
--- /dev/null
+++ b/drivers/bus/dpaa/include/compat.h
@@ -0,0 +1,389 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __COMPAT_H
+#define __COMPAT_H
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+
+#include <sched.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <pthread.h>
+#include <linux/types.h>
+#include <stdbool.h>
+#include <ctype.h>
+#include <malloc.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <limits.h>
+#include <assert.h>
+#include <dirent.h>
+#include <inttypes.h>
+#include <error.h>
+#include <rte_byteorder.h>
+#include <rte_atomic.h>
+#include <rte_spinlock.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+
+/* The following definitions are primarily to allow the single-source driver
+ * interfaces to be included by arbitrary program code. Ie. for interfaces that
+ * are also available in kernel-space, these definitions provide compatibility
+ * with certain attributes and types used in those interfaces.
+ */
+
+/* Required compiler attributes */
+#define __maybe_unused	__rte_unused
+#define __always_unused	__rte_unused
+#define __packed	__rte_packed
+#define noinline	__attribute__((noinline))
+
+#define L1_CACHE_BYTES 64
+#define ____cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
+#define __stringify_1(x) #x
+#define __stringify(x)	__stringify_1(x)
+
+#ifdef ARRAY_SIZE
+#undef ARRAY_SIZE
+#endif
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
+
+/* Debugging */
+#define prflush(fmt, args...) \
+	do { \
+		printf(fmt, ##args); \
+		fflush(stdout); \
+	} while (0)
+
+#define pr_crit(fmt, args...)	 prflush("CRIT:" fmt, ##args)
+#define pr_err(fmt, args...)	 prflush("ERR:" fmt, ##args)
+#define pr_warn(fmt, args...)	 prflush("WARN:" fmt, ##args)
+#define pr_info(fmt, args...)	 prflush(fmt, ##args)
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_BUS
+#ifdef pr_debug
+#undef pr_debug
+#endif
+#define pr_debug(fmt, args...)	printf(fmt, ##args)
+#else
+#define pr_debug(fmt, args...) {}
+#endif
+
+#define ASSERT(x) do {\
+	if (!(x)) \
+		rte_panic("DPAA: assertion failed: %s\n", #x); \
+} while (0)
+#define DPAA_BUG_ON(x) ASSERT(!(x))
+
+/* Required types */
+typedef uint8_t		u8;
+typedef uint16_t	u16;
+typedef uint32_t	u32;
+typedef uint64_t	u64;
+typedef uint64_t	dma_addr_t;
+typedef cpu_set_t	cpumask_t;
+typedef uint32_t	phandle;
+typedef uint32_t	gfp_t;
+typedef uint32_t	irqreturn_t;
+
+#define IRQ_HANDLED	0
+#define request_irq	qbman_request_irq
+#define free_irq	qbman_free_irq
+
+#define __iomem
+#define GFP_KERNEL	0
+#define __raw_readb(p)	(*(const volatile unsigned char *)(p))
+#define __raw_readl(p)	(*(const volatile unsigned int *)(p))
+#define __raw_writel(v, p) {*(volatile unsigned int *)(p) = (v); }
+
+/* to be used as an upper-limit only */
+#define NR_CPUS			64
+
+/* Waitqueue stuff */
+typedef struct { }		wait_queue_head_t;
+#define DECLARE_WAIT_QUEUE_HEAD(x) int dummy_##x __always_unused
+#define wake_up(x)		do { } while (0)
+
+/* I/O operations */
+static inline u32 in_be32(volatile void *__p)
+{
+	volatile u32 *p = __p;
+	return rte_be_to_cpu_32(*p);
+}
+
+static inline void out_be32(volatile void *__p, u32 val)
+{
+	volatile u32 *p = __p;
+	*p = rte_cpu_to_be_32(val);
+}
+
+#define dcbt_ro(p) __builtin_prefetch(p, 0)
+#define dcbt_rw(p) __builtin_prefetch(p, 1)
+
+#define dcbz(p) { asm volatile("dc zva, %0" : : "r" (p) : "memory"); }
+#define dcbz_64(p) dcbz(p)
+#define hwsync() rte_rmb()
+#define lwsync() rte_wmb()
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+#define dccivac(p) { asm volatile("dc civac, %0" : : "r"(p) : "memory"); }
+
+#define dcbit_ro(p) \
+	do { \
+		dccivac(p);						\
+		asm volatile("prfm pldl1keep, [%0, #64]" : : "r" (p));	\
+	} while (0)
+
+#define barrier() { asm volatile ("" : : : "memory"); }
+#define cpu_relax barrier
+
+static inline uint64_t mfatb(void)
+{
+	uint64_t ret, ret_new, timeout = 200;
+
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret));
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	while (ret != ret_new && timeout--) {
+		ret = ret_new;
+		asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	}
+	DPAA_BUG_ON(!timeout && (ret != ret_new));
+	return ret * 64;
+}
+
+/* Spin for a few cycles without bothering the bus */
+static inline void cpu_spin(int cycles)
+{
+	uint64_t now = mfatb();
+
+	while (mfatb() < (now + cycles))
+		;
+}
+
+/* Qman/Bman API inlines and macros; */
+#ifdef lower_32_bits
+#undef lower_32_bits
+#endif
+#define lower_32_bits(x) ((u32)(x))
+
+#ifdef upper_32_bits
+#undef upper_32_bits
+#endif
+#define upper_32_bits(x) ((u32)(((x) >> 16) >> 16))
+
+/*
+ * Swap bytes of a 48-bit value.
+ */
+static inline uint64_t
+__bswap_48(uint64_t x)
+{
+	return  ((x & 0x0000000000ffULL) << 40) |
+		((x & 0x00000000ff00ULL) << 24) |
+		((x & 0x000000ff0000ULL) <<  8) |
+		((x & 0x0000ff000000ULL) >>  8) |
+		((x & 0x00ff00000000ULL) >> 24) |
+		((x & 0xff0000000000ULL) >> 40);
+}
+
+/*
+ * Swap bytes of a 40-bit value.
+ */
+static inline uint64_t
+__bswap_40(uint64_t x)
+{
+	return  ((x & 0x00000000ffULL) << 32) |
+		((x & 0x000000ff00ULL) << 16) |
+		((x & 0x0000ff0000ULL)) |
+		((x & 0x00ff000000ULL) >> 16) |
+		((x & 0xff00000000ULL) >> 32);
+}
+
+/*
+ * Swap bytes of a 24-bit value.
+ */
+static inline uint32_t
+__bswap_24(uint32_t x)
+{
+	return  ((x & 0x0000ffULL) << 16) |
+		((x & 0x00ff00ULL)) |
+		((x & 0xff0000ULL) >> 16);
+}
+
+#define be64_to_cpu(x) rte_be_to_cpu_64(x)
+#define be32_to_cpu(x) rte_be_to_cpu_32(x)
+#define be16_to_cpu(x) rte_be_to_cpu_16(x)
+
+#define cpu_to_be64(x) rte_cpu_to_be_64(x)
+#define cpu_to_be32(x) rte_cpu_to_be_32(x)
+#define cpu_to_be16(x) rte_cpu_to_be_16(x)
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+
+#define cpu_to_be48(x) __bswap_48(x)
+#define be48_to_cpu(x) __bswap_48(x)
+
+#define cpu_to_be40(x) __bswap_40(x)
+#define be40_to_cpu(x) __bswap_40(x)
+
+#define cpu_to_be24(x) __bswap_24(x)
+#define be24_to_cpu(x) __bswap_24(x)
+
+#else /* RTE_BIG_ENDIAN */
+
+#define cpu_to_be48(x) (x)
+#define be48_to_cpu(x) (x)
+
+#define cpu_to_be40(x) (x)
+#define be40_to_cpu(x) (x)
+
+#define cpu_to_be24(x) (x)
+#define be24_to_cpu(x) (x)
+
+#endif /* RTE_BIG_ENDIAN */
+
+/* When copying aligned words or shorts, try to avoid memcpy() */
+/* memcpy() stuff - when you know alignments in advance */
+#define CONFIG_TRY_BETTER_MEMCPY
+
+#ifdef CONFIG_TRY_BETTER_MEMCPY
+static inline void copy_words(void *dest, const void *src, size_t sz)
+{
+	u32 *__dest = dest;
+	const u32 *__src = src;
+	size_t __sz = sz >> 2;
+
+	DPAA_BUG_ON((unsigned long)dest & 0x3);
+	DPAA_BUG_ON((unsigned long)src & 0x3);
+	DPAA_BUG_ON(sz & 0x3);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_shorts(void *dest, const void *src, size_t sz)
+{
+	u16 *__dest = dest;
+	const u16 *__src = src;
+	size_t __sz = sz >> 1;
+
+	DPAA_BUG_ON((unsigned long)dest & 0x1);
+	DPAA_BUG_ON((unsigned long)src & 0x1);
+	DPAA_BUG_ON(sz & 0x1);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_bytes(void *dest, const void *src, size_t sz)
+{
+	u8 *__dest = dest;
+	const u8 *__src = src;
+
+	while (sz--)
+		*(__dest++) = *(__src++);
+}
+#else
+#define copy_words memcpy
+#define copy_shorts memcpy
+#define copy_bytes memcpy
+#endif
+
+/* Allocator stuff */
+#define kmalloc(sz, t)	malloc(sz)
+#define vmalloc(sz)	malloc(sz)
+#define kfree(p)	{ if (p) free(p); }
+static inline void *kzalloc(size_t sz, gfp_t __foo __rte_unused)
+{
+	void *ptr = malloc(sz);
+
+	if (ptr)
+		memset(ptr, 0, sz);
+	return ptr;
+}
+
+static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
+{
+	void *p;
+
+	if (posix_memalign(&p, 4096, 4096))
+		return 0;
+	memset(p, 0, 4096);
+	return (unsigned long)p;
+}
+
+/* Spinlock stuff */
+#define spinlock_t		rte_spinlock_t
+#define __SPIN_LOCK_UNLOCKED(x)	RTE_SPINLOCK_INITIALIZER
+#define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+#define spin_lock_init(x)	rte_spinlock_init(x)
+#define spin_lock_destroy(x)
+#define spin_lock(x)		rte_spinlock_lock(x)
+#define spin_unlock(x)		rte_spinlock_unlock(x)
+#define spin_lock_irq(x)	spin_lock(x)
+#define spin_unlock_irq(x)	spin_unlock(x)
+#define spin_lock_irqsave(x, f) spin_lock_irq(x)
+#define spin_unlock_irqrestore(x, f) spin_unlock_irq(x)
+
+#define atomic_t                rte_atomic32_t
+#define atomic_read(v)          rte_atomic32_read(v)
+#define atomic_set(v, i)        rte_atomic32_set(v, i)
+
+#define atomic_inc(v)           rte_atomic32_add(v, 1)
+#define atomic_dec(v)           rte_atomic32_sub(v, 1)
+
+#define atomic_inc_and_test(v)  rte_atomic32_inc_and_test(v)
+#define atomic_dec_and_test(v)  rte_atomic32_dec_and_test(v)
+
+#define atomic_inc_return(v)    rte_atomic32_add_return(v, 1)
+#define atomic_dec_return(v)    rte_atomic32_sub_return(v, 1)
+#define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
+
+#include <dpaa_list.h>
+#include <dpaa_bits.h>
+
+#endif /* __COMPAT_H */
diff --git a/drivers/bus/dpaa/include/dpaa_bits.h b/drivers/bus/dpaa/include/dpaa_bits.h
new file mode 100644
index 0000000..71f2d80
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_bits.h
@@ -0,0 +1,65 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_BITS_H
+#define __DPAA_BITS_H
+
+/* Bitfield stuff. */
+#define BITS_PER_ULONG	(sizeof(unsigned long) << 3)
+#define SHIFT_PER_ULONG	(((1 << 5) == BITS_PER_ULONG) ? 5 : 6)
+#define BITS_MASK(idx)	(1UL << ((idx) & (BITS_PER_ULONG - 1)))
+#define BITS_IDX(idx)	((idx) >> SHIFT_PER_ULONG)
+
+static inline void dpaa_set_bits(unsigned long mask,
+				 volatile unsigned long *p)
+{
+	*p |= mask;
+}
+
+static inline void dpaa_set_bit(int idx, volatile unsigned long *bits)
+{
+	dpaa_set_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+static inline void dpaa_clear_bits(unsigned long mask,
+				   volatile unsigned long *p)
+{
+	*p &= ~mask;
+}
+
+static inline void dpaa_clear_bit(int idx,
+				  volatile unsigned long *bits)
+{
+	dpaa_clear_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+#endif /* __DPAA_BITS_H */
diff --git a/drivers/bus/dpaa/include/dpaa_list.h b/drivers/bus/dpaa/include/dpaa_list.h
new file mode 100644
index 0000000..871e612
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_list.h
@@ -0,0 +1,101 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_LIST_H
+#define __DPAA_LIST_H
+
+/****************/
+/* Linked-lists */
+/****************/
+
+struct list_head {
+	struct list_head *prev;
+	struct list_head *next;
+};
+
+#define COMPAT_LIST_HEAD(n) \
+struct list_head n = { \
+	.prev = &n, \
+	.next = &n \
+}
+
+#define INIT_LIST_HEAD(p) \
+do { \
+	struct list_head *__p298 = (p); \
+	__p298->next = __p298; \
+	__p298->prev = __p298->next; \
+} while (0)
+#define list_entry(node, type, member) \
+	(type *)((void *)node - offsetof(type, member))
+#define list_empty(p) \
+({ \
+	const struct list_head *__p298 = (p); \
+	((__p298->next == __p298) && (__p298->prev == __p298)); \
+})
+#define list_add(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->next = __l298->next; \
+	__p298->prev = __l298; \
+	__l298->next->prev = __p298; \
+	__l298->next = __p298; \
+} while (0)
+#define list_add_tail(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->prev = __l298->prev; \
+	__p298->next = __l298; \
+	__l298->prev->next = __p298; \
+	__l298->prev = __p298; \
+} while (0)
+#define list_for_each(i, l)				\
+	for (i = (l)->next; i != (l); i = i->next)
+#define list_for_each_safe(i, j, l)			\
+	for (i = (l)->next, j = i->next; i != (l);	\
+	     i = j, j = i->next)
+#define list_for_each_entry(i, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name); &i->name != (l); \
+		i = list_entry(i->name.next, typeof(*i), name))
+#define list_for_each_entry_safe(i, j, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name), \
+		j = list_entry(i->name.next, typeof(*j), name); \
+		&i->name != (l); \
+		i = j, j = list_entry(j->name.next, typeof(*j), name))
+#define list_del(i) \
+do { \
+	(i)->next->prev = (i)->prev; \
+	(i)->prev->next = (i)->next; \
+} while (0)
+
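+/* Illustrative usage (assuming a caller-defined "struct foo" that embeds a
+ * "struct list_head node" member) - drain and free a list safely:
+ *
+ *	struct foo *f, *tmp;
+ *
+ *	list_for_each_entry_safe(f, tmp, &my_list, node) {
+ *		list_del(&f->node);
+ *		free(f);
+ *	}
+ */
+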
+#endif /* __DPAA_LIST_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 04/40] bus/dpaa: add OF parser for device scanning
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (2 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
                       ` (36 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This layer is used by the bus driver's scan function. Devices are parsed
using the OF parser and added to the DPAA device list.
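
As an illustrative sketch only (not part of this patch), a caller is
expected to drive this layer roughly as follows; "fsl,dpa-ethernet-init"
is the compatible string matched when scanning for DPAA Ethernet devices
later in this series:

	const struct device_node *dev_node, *mac_node;
	const phandle *mac_phandle;
	size_t lenp;

	if (of_init())	/* parses /proc/device-tree by default */
		return -ENODEV;
	for_each_compatible_node(dev_node, NULL, "fsl,dpa-ethernet-init") {
		mac_phandle = of_get_property(dev_node, "fsl,fman-mac",
					      &lenp);
		if (mac_phandle)
			mac_node = of_find_node_by_phandle(*mac_phandle);
	}
	of_finish();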

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile       |   7 +
 drivers/bus/dpaa/base/fman/of.c | 576 ++++++++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/of.h   | 190 +++++++++++++
 3 files changed, 773 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/include/of.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index ef508d3..488e263 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -44,7 +44,12 @@ CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 endif
 
+CFLAGS += -Wno-pointer-arith
+CFLAGS += -Wno-cast-qual
+CFLAGS += -D_GNU_SOURCE
+
 CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_BUS_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -58,5 +63,7 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/of.c \
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/fman/of.c b/drivers/bus/dpaa/base/fman/of.c
new file mode 100644
index 0000000..b2d7c02
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/of.c
@@ -0,0 +1,576 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <of.h>
+#include <rte_dpaa_logs.h>
+
+static int alive;
+static struct dt_dir root_dir;
+static const char *base_dir;
+static COMPAT_LIST_HEAD(linear);
+
+static int
+of_open_dir(const char *relative_path, struct dirent ***d)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = scandir(full_path, d, 0, versionsort);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open directory %s",
+			     full_path);
+	return ret;
+}
+
+static void
+of_close_dir(struct dirent **d, int num)
+{
+	while (num--)
+		free(d[num]);
+	free(d);
+}
+
+static int
+of_open_file(const char *relative_path)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = open(full_path, O_RDONLY);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open directory %s",
+			     full_path);
+	return ret;
+}
+
+static void
+process_file(struct dirent *dent, struct dt_dir *parent)
+{
+	int fd;
+	struct dt_file *f = malloc(sizeof(*f));
+
+	if (!f) {
+		DPAA_BUS_LOG(DEBUG, "Unable to allocate memory for file node");
+		return;
+	}
+	f->node.is_file = 1;
+	snprintf(f->node.node.name, NAME_MAX, "%s", dent->d_name);
+	snprintf(f->node.node.full_name, PATH_MAX, "%s/%s",
+		 parent->node.node.full_name, dent->d_name);
+	f->parent = parent;
+	fd = of_open_file(f->node.node.full_name);
+	if (fd < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to open file node");
+		free(f);
+		return;
+	}
+	f->len = read(fd, f->buf, OF_FILE_BUF_MAX);
+	close(fd);
+	if (f->len < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to read file node");
+		free(f);
+		return;
+	}
+	list_add_tail(&f->node.list, &parent->files);
+}
+
+static const struct dt_dir *
+node2dir(const struct device_node *n)
+{
+	struct dt_node *dn = container_of((struct device_node *)n,
+					  struct dt_node, node);
+	const struct dt_dir *d = container_of(dn, struct dt_dir, node);
+
+	assert(!dn->is_file);
+	return d;
+}
+
+/* process_dir() calls iterate_dir(), but the latter will also call the former
+ * when recursing into sub-directories, so a predeclaration is needed.
+ */
+static int process_dir(const char *relative_path, struct dt_dir *dt);
+
+static int
+iterate_dir(struct dirent **d, int num, struct dt_dir *dt)
+{
+	int loop;
+	/* Iterate the directory contents */
+	for (loop = 0; loop < num; loop++) {
+		struct dt_dir *subdir;
+		int ret;
+		/* Ignore dot files of all types (especially "..") */
+		if (d[loop]->d_name[0] == '.')
+			continue;
+		switch (d[loop]->d_type) {
+		case DT_REG:
+			process_file(d[loop], dt);
+			break;
+		case DT_DIR:
+			subdir = malloc(sizeof(*subdir));
+			if (!subdir) {
+				perror("malloc");
+				return -ENOMEM;
+			}
+			snprintf(subdir->node.node.name, NAME_MAX, "%s",
+				 d[loop]->d_name);
+			snprintf(subdir->node.node.full_name, PATH_MAX,
+				 "%s/%s", dt->node.node.full_name,
+				 d[loop]->d_name);
+			subdir->parent = dt;
+			ret = process_dir(subdir->node.node.full_name, subdir);
+			if (ret)
+				return ret;
+			list_add_tail(&subdir->node.list, &dt->subdirs);
+			break;
+		default:
+			DPAA_BUS_LOG(DEBUG, "Ignoring invalid dt entry %s/%s",
+				     dt->node.node.full_name, d[loop]->d_name);
+		}
+	}
+	return 0;
+}
+
+static int
+process_dir(const char *relative_path, struct dt_dir *dt)
+{
+	struct dirent **d;
+	int ret, num;
+
+	dt->node.is_file = 0;
+	INIT_LIST_HEAD(&dt->subdirs);
+	INIT_LIST_HEAD(&dt->files);
+	ret = of_open_dir(relative_path, &d);
+	if (ret < 0)
+		return ret;
+	num = ret;
+	ret = iterate_dir(d, num, dt);
+	of_close_dir(d, num);
+	return (ret < 0) ? ret : 0;
+}
+
+static void
+linear_dir(struct dt_dir *d)
+{
+	struct dt_file *f;
+	struct dt_dir *dd;
+
+	d->compatible = NULL;
+	d->status = NULL;
+	d->lphandle = NULL;
+	d->a_cells = NULL;
+	d->s_cells = NULL;
+	d->reg = NULL;
+	list_for_each_entry(f, &d->files, node.list) {
+		if (!strcmp(f->node.node.name, "compatible")) {
+			if (d->compatible)
+				DPAA_BUS_LOG(DEBUG, "Duplicate compatible in"
+					     " %s", d->node.node.full_name);
+			d->compatible = f;
+		} else if (!strcmp(f->node.node.name, "status")) {
+			if (d->status)
+				DPAA_BUS_LOG(DEBUG, "Duplicate status in %s",
+					     d->node.node.full_name);
+			d->status = f;
+		} else if (!strcmp(f->node.node.name, "linux,phandle")) {
+			if (d->lphandle)
+				DPAA_BUS_LOG(DEBUG, "Duplicate lphandle in %s",
+					     d->node.node.full_name);
+			d->lphandle = f;
+		} else if (!strcmp(f->node.node.name, "#address-cells")) {
+			if (d->a_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate a_cells in %s",
+					     d->node.node.full_name);
+			d->a_cells = f;
+		} else if (!strcmp(f->node.node.name, "#size-cells")) {
+			if (d->s_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate s_cells in %s",
+					     d->node.node.full_name);
+			d->s_cells = f;
+		} else if (!strcmp(f->node.node.name, "reg")) {
+			if (d->reg)
+				DPAA_BUS_LOG(DEBUG, "Duplicate reg in %s",
+					     d->node.node.full_name);
+			d->reg = f;
+		}
+	}
+
+	list_for_each_entry(dd, &d->subdirs, node.list) {
+		list_add_tail(&dd->linear, &linear);
+		linear_dir(dd);
+	}
+}
+
+int
+of_init_path(const char *dt_path)
+{
+	int ret;
+
+	base_dir = dt_path;
+
+	/* This needs to be singleton initialization */
+	DPAA_BUS_HWWARN(alive, "Double-init of device-tree driver!");
+
+	/* Prepare root node (the remaining fields are set in process_dir()) */
+	root_dir.node.node.name[0] = '\0';
+	root_dir.node.node.full_name[0] = '\0';
+	INIT_LIST_HEAD(&root_dir.node.list);
+	root_dir.parent = NULL;
+
+	/* Kick things off... */
+	ret = process_dir("", &root_dir);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to parse device tree");
+		return ret;
+	}
+
+	/* Now make a flat, linear list of directories */
+	linear_dir(&root_dir);
+	alive = 1;
+	return 0;
+}
+
+static void
+destroy_dir(struct dt_dir *d)
+{
+	struct dt_file *f, *tmpf;
+	struct dt_dir *dd, *tmpd;
+
+	list_for_each_entry_safe(f, tmpf, &d->files, node.list) {
+		list_del(&f->node.list);
+		free(f);
+	}
+	list_for_each_entry_safe(dd, tmpd, &d->subdirs, node.list) {
+		destroy_dir(dd);
+		list_del(&dd->node.list);
+		free(dd);
+	}
+}
+
+void
+of_finish(void)
+{
+	DPAA_BUS_HWWARN(!alive, "Double-finish of device-tree driver!");
+
+	destroy_dir(&root_dir);
+	INIT_LIST_HEAD(&linear);
+	alive = 0;
+}
+
+static const struct dt_dir *
+next_linear(const struct dt_dir *f)
+{
+	if (f->linear.next == &linear)
+		return NULL;
+	return list_entry(f->linear.next, struct dt_dir, linear);
+}
+
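+/* A device-tree "compatible" property is a list of NUL-separated strings;
+ * walk the list looking for an exact match.
+ */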
+static int
+check_compatible(const struct dt_file *f, const char *compatible)
+{
+	const char *c = (char *)f->buf;
+	unsigned int len, remains = f->len;
+
+	while (remains) {
+		len = strlen(c);
+		if (!strcmp(c, compatible))
+			return 1;
+
+		if (remains < len + 1)
+			break;
+
+		c += (len + 1);
+		remains -= (len + 1);
+	}
+	return 0;
+}
+
+const struct device_node *
+of_find_compatible_node(const struct device_node *from,
+			const char *type __always_unused,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (list_empty(&linear))
+		return NULL;
+	if (!from)
+		d = list_entry(linear.next, struct dt_dir, linear);
+	else
+		d = node2dir(from);
+	for (d = next_linear(d); d && (!d->compatible ||
+				       !check_compatible(d->compatible,
+				       compatible));
+			d = next_linear(d))
+		;
+	if (d)
+		return &d->node.node;
+	return NULL;
+}
+
+const void *
+of_get_property(const struct device_node *from, const char *name,
+		size_t *lenp)
+{
+	const struct dt_dir *d;
+	const struct dt_file *f;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	d = node2dir(from);
+	list_for_each_entry(f, &d->files, node.list)
+		if (!strcmp(f->node.node.name, name)) {
+			if (lenp)
+				*lenp = f->len;
+			return f->buf;
+		}
+	return NULL;
+}
+
+bool
+of_device_is_available(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	d = node2dir(dev_node);
+	if (!d->status)
+		return true;
+	if (!strcmp((char *)d->status->buf, "okay"))
+		return true;
+	if (!strcmp((char *)d->status->buf, "ok"))
+		return true;
+	return false;
+}
+
+const struct device_node *
+of_find_node_by_phandle(phandle ph)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	list_for_each_entry(d, &linear, linear)
+		if (d->lphandle && (d->lphandle->len == 4) &&
+		    !memcmp(d->lphandle->buf, &ph, 4))
+			return &d->node.node;
+	return NULL;
+}
+
+const struct device_node *
+of_get_parent(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	d = node2dir(dev_node);
+	if (!d->parent)
+		return NULL;
+	return &d->parent->node.node;
+}
+
+const struct device_node *
+of_get_next_child(const struct device_node *dev_node,
+		  const struct device_node *prev)
+{
+	const struct dt_dir *p, *c;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	p = node2dir(dev_node);
+	if (prev) {
+		c = node2dir(prev);
+		DPAA_BUS_HWWARN((c->parent != p), "Parent/child mismatch");
+		if (c->parent != p)
+			return NULL;
+		if (c->node.list.next == &p->subdirs)
+			/* prev was the last child */
+			return NULL;
+		c = list_entry(c->node.list.next, struct dt_dir, node.list);
+		return &c->node.node;
+	}
+	/* Return first child */
+	if (list_empty(&p->subdirs))
+		return NULL;
+	c = list_entry(p->subdirs.next, struct dt_dir, node.list);
+	return &c->node.node;
+}
+
+uint32_t
+of_n_addr_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised");
+	if (!dev_node)
+		return OF_DEFAULT_NA;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->a_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->a_cells->buf[0];
+			assert(d->a_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NA;
+}
+
+uint32_t
+of_n_size_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		return OF_DEFAULT_NS;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->s_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->s_cells->buf[0];
+			assert(d->s_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NS;
+}
+
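+/* A "reg" property is a list of (address, size) tuples; each address is
+ * "na" cells wide and each size "ns" cells wide, and idx selects which
+ * tuple to decode (the asserts below sanity-check the property length).
+ */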
+const uint32_t *
+of_get_address(const struct device_node *dev_node, size_t idx,
+	       uint64_t *size, uint32_t *flags __rte_unused)
+{
+	const struct dt_dir *d;
+	const unsigned char *buf;
+	uint32_t na = of_n_addr_cells(dev_node);
+	uint32_t ns = of_n_size_cells(dev_node);
+
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (!d->reg)
+		return NULL;
+	assert(d->reg->len % ((na + ns) * 4) == 0);
+	assert(d->reg->len / ((na + ns) * 4) > (unsigned int) idx);
+	buf = (const unsigned char *)&d->reg->buf[0];
+	buf += (na + ns) * idx * 4;
+	if (size)
+		for (*size = 0; ns > 0; ns--, na++)
+			*size = (*size << 32) +
+				(((uint32_t)buf[4 * na] << 24) |
+				((uint32_t)buf[4 * na + 1] << 16) |
+				((uint32_t)buf[4 * na + 2] << 8) |
+				(uint32_t)buf[4 * na + 3]);
+	return (const uint32_t *)buf;
+}
+
+uint64_t
+of_translate_address(const struct device_node *dev_node,
+		     const uint32_t *addr)
+{
+	uint64_t phys_addr, tmp_addr;
+	const struct device_node *parent;
+	const uint32_t *ranges;
+	size_t rlen;
+	uint32_t na, pna;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	assert(dev_node != NULL);
+
+	na = of_n_addr_cells(dev_node);
+	phys_addr = of_read_number(addr, na);
+
+	dev_node = of_get_parent(dev_node);
+	if (!dev_node)
+		return 0;
+	else if (node2dir(dev_node) == &root_dir)
+		return phys_addr;
+
+	do {
+		pna = of_n_addr_cells(dev_node);
+		parent = of_get_parent(dev_node);
+		if (!parent)
+			return 0;
+
+		ranges = of_get_property(dev_node, "ranges", &rlen);
+		/* "ranges" property is missing. Translation breaks */
+		if (!ranges)
+			return 0;
+		/* "ranges" property is empty. Do 1:1 translation */
+		else if (rlen == 0)
+			continue;
+		else
+			tmp_addr = of_read_number(ranges + na, pna);
+
+		na = pna;
+		dev_node = parent;
+		phys_addr += tmp_addr;
+	} while (node2dir(parent) != &root_dir);
+
+	return phys_addr;
+}
+
+bool
+of_device_is_compatible(const struct device_node *dev_node,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (d->compatible && check_compatible(d->compatible, compatible))
+		return true;
+	return false;
+}
diff --git a/drivers/bus/dpaa/include/of.h b/drivers/bus/dpaa/include/of.h
new file mode 100644
index 0000000..2984b1e
--- /dev/null
+++ b/drivers/bus/dpaa/include/of.h
@@ -0,0 +1,190 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor, Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __OF_H
+#define	__OF_H
+
+#include <compat.h>
+
+#ifndef OF_INIT_DEFAULT_PATH
+#define OF_INIT_DEFAULT_PATH "/proc/device-tree"
+#endif
+
+#define OF_DEFAULT_NA 1
+#define OF_DEFAULT_NS 1
+
+#define OF_FILE_BUF_MAX 256
+
+/**
+ * Layout of Device Tree:
+ * dt_dir
+ *  |- dt_dir
+ *  |   |- dt_dir
+ *  |   |  |- dt_dir
+ *  |   |  |  |- dt_file
+ *  |   |  |  ``- dt_file
+ *  |   |  ``- dt_file
+ *  |   ``- dt_file
+ *  ``- dt_file
+ *
+ *  +------------------+
+ *  |dt_dir            |
+ *  |+----------------+|
+ *  ||dt_node         ||
+ *  ||+--------------+||
+ *  |||device_node   |||
+ *  ||+--------------+||
+ *  || list_dt_nodes  ||
+ *  |+----------------+|
+ *  | list of subdir   |
+ *  | list of files    |
+ *  +------------------+
+ */
+
+/**
+ * Description of a device node in the device tree.
+ */
+struct device_node {
+	char name[NAME_MAX];
+	char full_name[PATH_MAX];
+};
+
+/**
+ * A node (directory or file) within the parsed device tree layout
+ */
+struct dt_node {
+	struct device_node node; /**< Property of node */
+	int is_file; /**< FALSE==dir, TRUE==file */
+	struct list_head list; /**< Nodes within a parent subdir */
+};
+
+/**
+ * Types we use to represent directories and files
+ */
+struct dt_file;
+struct dt_dir {
+	struct dt_node node;
+	struct list_head subdirs;
+	struct list_head files;
+	struct list_head linear;
+	struct dt_dir *parent;
+	struct dt_file *compatible;
+	struct dt_file *status;
+	struct dt_file *lphandle;
+	struct dt_file *a_cells;
+	struct dt_file *s_cells;
+	struct dt_file *reg;
+};
+
+struct dt_file {
+	struct dt_node node;
+	struct dt_dir *parent;
+	ssize_t len;
+	uint64_t buf[OF_FILE_BUF_MAX >> 3];
+};
+
+const struct device_node *of_find_compatible_node(
+					const struct device_node *from,
+					const char *type __always_unused,
+					const char *compatible)
+	__attribute__((nonnull(3)));
+
+#define for_each_compatible_node(dev_node, type, compatible) \
+	for (dev_node = of_find_compatible_node(NULL, type, compatible); \
+		dev_node != NULL; \
+		dev_node = of_find_compatible_node(dev_node, type, compatible))
+
+const void *of_get_property(const struct device_node *from, const char *name,
+			    size_t *lenp) __attribute__((nonnull(2)));
+bool of_device_is_available(const struct device_node *dev_node);
+
+const struct device_node *of_find_node_by_phandle(phandle ph);
+
+const struct device_node *of_get_parent(const struct device_node *dev_node);
+
+const struct device_node *of_get_next_child(const struct device_node *dev_node,
+					    const struct device_node *prev);
+
+#define for_each_child_node(parent, child) \
+	for (child = of_get_next_child(parent, NULL); child != NULL; \
+			child = of_get_next_child(parent, child))
+
+uint32_t of_n_addr_cells(const struct device_node *dev_node);
+uint32_t of_n_size_cells(const struct device_node *dev_node);
+
+const uint32_t *of_get_address(const struct device_node *dev_node, size_t idx,
+			       uint64_t *size, uint32_t *flags);
+
+uint64_t of_translate_address(const struct device_node *dev_node,
+			      const u32 *addr) __attribute__((nonnull));
+
+bool of_device_is_compatible(const struct device_node *dev_node,
+			     const char *compatible);
+
+/* of_init() must be called prior to initialisation or use of any driver
+ * subsystem that is device-tree-dependent. Eg. Qman/Bman, config layers, etc.
+ * The path should usually be "/proc/device-tree".
+ */
+int of_init_path(const char *dt_path);
+
+/* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
+ * full reload is desired without a process exit.
+ */
+void of_finish(void);
+
+/* Use of this wrapper is recommended. */
+static inline int of_init(void)
+{
+	return of_init_path(OF_INIT_DEFAULT_PATH);
+}
+
+/* Read a numeric property according to its size and return it as a 64-bit
+ * value.
+ */
+static inline uint64_t of_read_number(const __be32 *cell, int size)
+{
+	uint64_t r = 0;
+
+	while (size--)
+		r = (r << 32) | be32toh(*(cell++));
+	return r;
+}
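+
+/* For example (illustrative values), a 2-cell property {0x0000000f,
+ * 0xfe000000} reads back as 0x0000000ffe000000.
+ */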
+
+#endif	/*  __OF_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 05/40] bus/dpaa: introducing FMan configurations
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (3 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
                       ` (35 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

FMan, or Frame Manager, inspects traffic and splits it into queues on
ingress. It is also responsible for directing traffic onto queues on
egress.

This patch introduces the FMan configuration interfaces. This layer is
used by the bus driver for configuring the hardware block.
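
For illustration only (not part of this patch), a consumer of this layer
is expected to use the netcfg interface roughly as follows:

	/* netcfg_acquire() initialises the fman driver internally */
	struct netcfg_info *cfg = netcfg_acquire();
	int i;

	if (!cfg)
		return -ENODEV;
	for (i = 0; i < cfg->num_ethports; i++) {
		struct fm_eth_port_cfg *port = &cfg->port_cfg[i];
		/* port->fman_if describes one MAC interface;
		 * port->rx_def is its default Rx FQID.
		 */
	}
	netcfg_release(cfg);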

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   2 +
 drivers/bus/dpaa/base/fman/fman.c         | 559 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c | 214 ++++++++++++
 drivers/bus/dpaa/include/fman.h           | 459 ++++++++++++++++++++++++
 drivers/bus/dpaa/include/netcfg.h         |  96 +++++
 5 files changed, 1330 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 488e263..4b1715d 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -64,6 +64,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/fman.c \
 	base/fman/of.c \
+	base/fman/netcfg_layer.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
new file mode 100644
index 0000000..dce91da
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -0,0 +1,559 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+
+#include <rte_malloc.h>
+
+/* This header declares the driver interface we implement */
+#include <fman.h>
+#include <of.h>
+#include <rte_dpaa_logs.h>
+
+#define QMI_PORT_REGS_OFFSET		0x400
+
+/* CCSR map address to access ccsr based register */
+void *fman_ccsr_map;
+/* fman version info */
+u16 fman_ip_rev;
+static int get_once;
+u32 fman_dealloc_bufs_mask_hi;
+u32 fman_dealloc_bufs_mask_lo;
+
+int fman_ccsr_map_fd = -1;
+static COMPAT_LIST_HEAD(__ifs);
+
+/* This is the (const) global variable that callers have read-only access to.
+ * Internally, we have read-write access directly to __ifs.
+ */
+const struct list_head *fman_if_list = &__ifs;
+
+static void
+if_destructor(struct __fman_if *__if)
+{
+	struct fman_if_bpool *bp, *tmpbp;
+
+	if (__if->__if.mac_type == fman_offline)
+		goto cleanup;
+
+	list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+cleanup:
+	rte_free(__if);
+}
+
+static int
+fman_get_ip_rev(const struct device_node *fman_node)
+{
+	const uint32_t *fman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	uint32_t ip_rev_1;
+	int _errno;
+
+	fman_addr = of_get_address(fman_node, 0, &regs_size, NULL);
+	if (!fman_addr) {
+		pr_err("of_get_address cannot return fman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(fman_node, fman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+	fman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, fman_ccsr_map_fd, phys_addr);
+	if (fman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map FMan ccsr base");
+		return -EINVAL;
+	}
+
+	ip_rev_1 = in_be32(fman_ccsr_map + FMAN_IP_REV_1);
+	fman_ip_rev = (ip_rev_1 & FMAN_IP_REV_1_MAJOR_MASK) >>
+			FMAN_IP_REV_1_MAJOR_SHIFT;
+
+	_errno = munmap(fman_ccsr_map, regs_size);
+	if (_errno)
+		pr_err("munmap() of FMan ccsr failed");
+
+	return 0;
+}
+
+static int
+fman_get_mac_index(uint64_t regs_addr_host, uint8_t *mac_idx)
+{
+	int ret = 0;
+
+	/*
+	 * MAC1 : E_0000h
+	 * MAC2 : E_2000h
+	 * MAC3 : E_4000h
+	 * MAC4 : E_6000h
+	 * MAC5 : E_8000h
+	 * MAC6 : E_A000h
+	 * MAC7 : E_C000h
+	 * MAC8 : E_E000h
+	 * MAC9 : F_0000h
+	 * MAC10: F_2000h
+	 */
+	switch (regs_addr_host) {
+	case 0xE0000:
+		*mac_idx = 1;
+		break;
+	case 0xE2000:
+		*mac_idx = 2;
+		break;
+	case 0xE4000:
+		*mac_idx = 3;
+		break;
+	case 0xE6000:
+		*mac_idx = 4;
+		break;
+	case 0xE8000:
+		*mac_idx = 5;
+		break;
+	case 0xEA000:
+		*mac_idx = 6;
+		break;
+	case 0xEC000:
+		*mac_idx = 7;
+		break;
+	case 0xEE000:
+		*mac_idx = 8;
+		break;
+	case 0xF0000:
+		*mac_idx = 9;
+		break;
+	case 0xF2000:
+		*mac_idx = 10;
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+fman_if_init(const struct device_node *dpa_node)
+{
+	const char *rprop, *mprop;
+	uint64_t phys_addr;
+	struct __fman_if *__if;
+	struct fman_if_bpool *bpool;
+
+	const phandle *mac_phandle, *ports_phandle, *pools_phandle;
+	const phandle *tx_channel_id = NULL, *mac_addr, *cell_idx;
+	const phandle *rx_phandle, *tx_phandle;
+	uint64_t tx_phandle_host[4] = {0};
+	uint64_t rx_phandle_host[4] = {0};
+	uint64_t regs_addr_host = 0;
+	uint64_t cell_idx_host = 0;
+
+	const struct device_node *mac_node = NULL, *tx_node;
+	const struct device_node *pool_node, *fman_node, *rx_node;
+	const uint32_t *regs_addr = NULL;
+	const char *mname, *fname;
+	const char *dname = dpa_node->full_name;
+	size_t lenp;
+	int _errno;
+	const char *char_prop;
+	uint32_t na;
+
+	if (of_device_is_available(dpa_node) == false)
+		return 0;
+
+	rprop = "fsl,qman-frame-queues-rx";
+	mprop = "fsl,fman-mac";
+
+	/* Allocate an object for this network interface */
+	__if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE);
+	if (!__if)
+		FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*__if));
+	memset(__if, 0, sizeof(*__if));
+	INIT_LIST_HEAD(&__if->__if.bpool_list);
+	strncpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
+	__if->node_path[PATH_MAX - 1] = '\0';
+
+	/* Obtain the MAC node used by this interface except macless */
+	mac_phandle = of_get_property(dpa_node, mprop, &lenp);
+	if (!mac_phandle)
+		FMAN_ERR(-EINVAL, "%s: no %s\n", dname, mprop);
+	assert(lenp == sizeof(phandle));
+	mac_node = of_find_node_by_phandle(*mac_phandle);
+	if (!mac_node)
+		FMAN_ERR(-ENXIO, "%s: bad 'fsl,fman-mac\n", dname);
+	mname = mac_node->full_name;
+
+	/* Map the CCSR regs for the MAC node */
+	regs_addr = of_get_address(mac_node, 0, &__if->regs_size, NULL);
+	if (!regs_addr)
+		FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+	phys_addr = of_translate_address(mac_node, regs_addr);
+	if (!phys_addr)
+		FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+			 mname, regs_addr);
+	__if->ccsr_map = mmap(NULL, __if->regs_size,
+			      PROT_READ | PROT_WRITE, MAP_SHARED,
+			      fman_ccsr_map_fd, phys_addr);
+	if (__if->ccsr_map == MAP_FAILED)
+		FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	regs_addr_host = of_read_number(regs_addr, na);
+
+	/* Get the index of the Fman this i/f belongs to */
+	fman_node = of_get_parent(mac_node);
+	na = of_n_addr_cells(mac_node);
+	if (!fman_node)
+		FMAN_ERR(-ENXIO, "of_get_parent(%s)\n", mname);
+	fname = fman_node->full_name;
+	cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+	if (!cell_idx)
+		FMAN_ERR(-ENXIO, "%s: no cell-index)\n", fname);
+	assert(lenp == sizeof(*cell_idx));
+	cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+	__if->__if.fman_idx = cell_idx_host;
+	if (!get_once) {
+		_errno = fman_get_ip_rev(fman_node);
+		if (_errno)
+			FMAN_ERR(-ENXIO, "%s: ip_rev is not available\n",
+				 fname);
+	}
+
+	if (fman_ip_rev >= FMAN_V3) {
+		/*
+		 * Set A2V, OVOM, EBD bits in contextA to allow external
+		 * buffer deallocation by fman.
+		 */
+		fman_dealloc_bufs_mask_hi = FMAN_V3_CONTEXTA_EN_A2V |
+						FMAN_V3_CONTEXTA_EN_OVOM;
+		fman_dealloc_bufs_mask_lo = FMAN_V3_CONTEXTA_EN_EBD;
+	} else {
+		fman_dealloc_bufs_mask_hi = 0;
+		fman_dealloc_bufs_mask_lo = 0;
+	}
+	/* Is the MAC node 1G, 10G? */
+	__if->__if.is_memac = 0;
+
+	if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
+		__if->__if.mac_type = fman_mac_1g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
+		__if->__if.mac_type = fman_mac_10g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-memac")) {
+		__if->__if.is_memac = 1;
+		char_prop = of_get_property(mac_node, "phy-connection-type",
+					    NULL);
+		if (!char_prop) {
+			printf("memac: unknown MII type assuming 1G\n");
+			/* Right now forcing memac to 1g in case of error*/
+			__if->__if.mac_type = fman_mac_1g;
+		} else {
+			if (strstr(char_prop, "sgmii"))
+				__if->__if.mac_type = fman_mac_1g;
+			else if (strstr(char_prop, "rgmii")) {
+				__if->__if.mac_type = fman_mac_1g;
+				__if->__if.is_rgmii = 1;
+			} else if (strstr(char_prop, "xgmii"))
+				__if->__if.mac_type = fman_mac_10g;
+		}
+	} else
+		FMAN_ERR(-EINVAL, "%s: unknown MAC type\n", mname);
+
+	/*
+	 * For MAC ports, we cannot rely on cell-index. In
+	 * T2080, two of the 10G ports on single FMAN have same
+	 * duplicate cell-indexes as the other two 10G ports on
+	 * same FMAN. Hence, we now rely upon addresses of the
+	 * ports from device tree to deduce the index.
+	 */
+
+	_errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
+	if (_errno)
+		FMAN_ERR(-EINVAL, "Invalid register address: %lu",
+			 regs_addr_host);
+
+	/* Extract the MAC address for private and shared interfaces */
+	mac_addr = of_get_property(mac_node, "local-mac-address",
+				   &lenp);
+	if (!mac_addr)
+		FMAN_ERR(-EINVAL, "%s: no local-mac-address\n",
+			 mname);
+	memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
+
+	/* Extract the Tx port (it's the second of the two port handles)
+	 * and get its channel ID
+	 */
+	ports_phandle = of_get_property(mac_node, "fsl,port-handles",
+					&lenp);
+	if (!ports_phandle)
+		ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+						&lenp);
+	if (!ports_phandle)
+		FMAN_ERR(-EINVAL, "%s: no fsl,port-handles\n",
+			 mname);
+	assert(lenp == (2 * sizeof(phandle)));
+	tx_node = of_find_node_by_phandle(ports_phandle[1]);
+	if (!tx_node)
+		FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]\n", mname);
+	/* Extract the channel ID (from tx-port-handle) */
+	tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
+					&lenp);
+	if (!tx_channel_id)
+		FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id\n",
+			 tx_node->full_name);
+
+	rx_node = of_find_node_by_phandle(ports_phandle[0]);
+	if (!rx_node)
+		FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]\n", mname);
+	regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+	if (!regs_addr)
+		FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+	phys_addr = of_translate_address(rx_node, regs_addr);
+	if (!phys_addr)
+		FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+			 mname, regs_addr);
+	__if->bmi_map = mmap(NULL, __if->regs_size,
+				 PROT_READ | PROT_WRITE, MAP_SHARED,
+				 fman_ccsr_map_fd, phys_addr);
+	if (__if->bmi_map == MAP_FAILED)
+		FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+
+	/* No channel ID for MAC-less */
+	assert(lenp == sizeof(*tx_channel_id));
+	na = of_n_addr_cells(mac_node);
+	__if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+	/* Extract the Rx FQIDs. (Note, the device representation is silly,
+	 * there are "counts" that must always be 1.)
+	 */
+	rx_phandle = of_get_property(dpa_node, rprop, &lenp);
+	if (!rx_phandle)
+		FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-rx\n", dname);
+
+	assert(lenp == (4 * sizeof(phandle)));
+
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
+	rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
+	rx_phandle_host[2] = of_read_number(&rx_phandle[2], na);
+	rx_phandle_host[3] = of_read_number(&rx_phandle[3], na);
+
+	assert((rx_phandle_host[1] == 1) && (rx_phandle_host[3] == 1));
+	__if->__if.fqid_rx_err = rx_phandle_host[0];
+	__if->__if.fqid_rx_def = rx_phandle_host[2];
+
+	/* Extract the Tx FQIDs */
+	tx_phandle = of_get_property(dpa_node,
+				     "fsl,qman-frame-queues-tx", &lenp);
+	if (!tx_phandle)
+		FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-tx\n", dname);
+
+	assert(lenp == (4 * sizeof(phandle)));
+	/* TODO: Fix for other cases also */
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	tx_phandle_host[0] = of_read_number(&tx_phandle[0], na);
+	tx_phandle_host[1] = of_read_number(&tx_phandle[1], na);
+	tx_phandle_host[2] = of_read_number(&tx_phandle[2], na);
+	tx_phandle_host[3] = of_read_number(&tx_phandle[3], na);
+	assert((tx_phandle_host[1] == 1) && (tx_phandle_host[3] == 1));
+	__if->__if.fqid_tx_err = tx_phandle_host[0];
+	__if->__if.fqid_tx_confirm = tx_phandle_host[2];
+
+	/* Obtain the buffer pool nodes used by this interface */
+	pools_phandle = of_get_property(dpa_node, "fsl,bman-buffer-pools",
+					&lenp);
+	if (!pools_phandle)
+		FMAN_ERR(-EINVAL, "%s: no fsl,bman-buffer-pools\n", dname);
+	/* For each pool, parse the corresponding node and add a pool object
+	 * to the interface's "bpool_list"
+	 */
+	assert(lenp && !(lenp % sizeof(phandle)));
+	while (lenp) {
+		size_t proplen;
+		const phandle *prop;
+		uint64_t bpid_host = 0;
+		uint64_t bpool_host[6] = {0};
+		const char *pname;
+		/* Allocate an object for the pool */
+		bpool = rte_malloc(NULL, sizeof(*bpool), RTE_CACHE_LINE_SIZE);
+		if (!bpool)
+			FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*bpool));
+		/* Find the pool node */
+		pool_node = of_find_node_by_phandle(*pools_phandle);
+		if (!pool_node)
+			FMAN_ERR(-ENXIO, "%s: bad fsl,bman-buffer-pools\n",
+				 dname);
+		pname = pool_node->full_name;
+		/* Extract the BPID property */
+		prop = of_get_property(pool_node, "fsl,bpid", &proplen);
+		if (!prop)
+			FMAN_ERR(-EINVAL, "%s: no fsl,bpid\n", pname);
+		assert(proplen == sizeof(*prop));
+		na = of_n_addr_cells(mac_node);
+		/* Get rid of endianness (issues).
+		 * Convert to host byte-order
+		 */
+		bpid_host = of_read_number(prop, na);
+		bpool->bpid = bpid_host;
+		/* Extract the cfg property (count/size/addr). "fsl,bpool-cfg"
+		 * indicates for the Bman driver to seed the pool.
+		 * "fsl,bpool-ethernet-cfg" is used by the network driver. The
+		 * two are mutually exclusive, so check for either of them.
+		 */
+		prop = of_get_property(pool_node, "fsl,bpool-cfg",
+				       &proplen);
+		if (!prop)
+			prop = of_get_property(pool_node,
+					       "fsl,bpool-ethernet-cfg",
+					       &proplen);
+		if (!prop) {
+			/* It's OK for there to be no bpool-cfg */
+			bpool->count = bpool->size = bpool->addr = 0;
+		} else {
+			assert(proplen == (6 * sizeof(*prop)));
+			na = of_n_addr_cells(mac_node);
+			/* Get rid of endianness (issues).
+			 * Convert to host byte order
+			 */
+			bpool_host[0] = of_read_number(&prop[0], na);
+			bpool_host[1] = of_read_number(&prop[1], na);
+			bpool_host[2] = of_read_number(&prop[2], na);
+			bpool_host[3] = of_read_number(&prop[3], na);
+			bpool_host[4] = of_read_number(&prop[4], na);
+			bpool_host[5] = of_read_number(&prop[5], na);
+
+			bpool->count = ((uint64_t)bpool_host[0] << 32) |
+					bpool_host[1];
+			bpool->size = ((uint64_t)bpool_host[2] << 32) |
+					bpool_host[3];
+			bpool->addr = ((uint64_t)bpool_host[4] << 32) |
+					bpool_host[5];
+		}
+		/* Parsing of the pool is complete, add it to the interface
+		 * list.
+		 */
+		list_add_tail(&bpool->node, &__if->__if.bpool_list);
+		lenp -= sizeof(phandle);
+		pools_phandle++;
+	}
+
+	/* Parsing of the network interface is complete, add it to the list */
+	DPAA_BUS_LOG(DEBUG, "Found %s, Tx Channel = %x, FMAN = %x,"
+		    "Port ID = %x\n",
+		    dname, __if->__if.tx_channel_id, __if->__if.fman_idx,
+		    __if->__if.mac_idx);
+
+	list_add_tail(&__if->__if.node, &__ifs);
+	return 0;
+err:
+	if_destructor(__if);
+	return _errno;
+}
+
+int
+fman_init(void)
+{
+	const struct device_node *dpa_node;
+	int _errno;
+
+	/* If multiple dependencies try to initialise the Fman driver, don't
+	 * panic.
+	 */
+	if (fman_ccsr_map_fd != -1)
+		return 0;
+
+	fman_ccsr_map_fd = open(FMAN_DEVICE_PATH, O_RDWR);
+	if (unlikely(fman_ccsr_map_fd < 0)) {
+		DPAA_BUS_LOG(ERR, "Unable to open (/dev/mem)");
+		return fman_ccsr_map_fd;
+	}
+
+	for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-init") {
+		_errno = fman_if_init(dpa_node);
+		if (_errno)
+			FMAN_ERR(_errno, "if_init(%s)\n", dpa_node->full_name);
+	}
+
+	return 0;
+err:
+	fman_finish();
+	return _errno;
+}
+
+void
+fman_finish(void)
+{
+	struct __fman_if *__if, *tmpif;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	list_for_each_entry_safe(__if, tmpif, &__ifs, __if.node) {
+		int _errno;
+
+		/* disable Rx and Tx */
+		if ((__if->__if.mac_type == fman_mac_1g) &&
+		    (!__if->__if.is_memac))
+			out_be32(__if->ccsr_map + 0x100,
+				 in_be32(__if->ccsr_map + 0x100) & ~(u32)0x5);
+		else
+			out_be32(__if->ccsr_map + 8,
+				 in_be32(__if->ccsr_map + 8) & ~(u32)3);
+		/* release the mapping */
+		_errno = munmap(__if->ccsr_map, __if->regs_size);
+		if (unlikely(_errno < 0))
+			fprintf(stderr, "%s:%hu:%s(): munmap() = %d (%s)\n",
+				__FILE__, __LINE__, __func__,
+				-errno, strerror(errno));
+		printf("Tearing down %s\n", __if->node_path);
+		list_del(&__if->__if.node);
+		rte_free(__if);
+	}
+
+	close(fman_ccsr_map_fd);
+	fman_ccsr_map_fd = -1;
+}
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
new file mode 100644
index 0000000..26cff84
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -0,0 +1,214 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <inttypes.h>
+#include <of.h>
+#include <net/if.h>
+#include <sys/ioctl.h>
+#include <error.h>
+#include <net/if_arp.h>
+#include <assert.h>
+#include <unistd.h>
+
+#include <rte_malloc.h>
+
+#include <rte_dpaa_logs.h>
+#include <netcfg.h>
+
+/* This structure contains information about all the interfaces given by
+ * the user on the command line.
+ */
+struct netcfg_interface *netcfg_interface;
+
+/* This data structure contains all configuration information
+ * related to usage of DPAA devices.
+ */
+struct netcfg_info *netcfg;
+/* fd of a socket opened for making ioctl requests to disable/enable
+ * shared interfaces.
+ */
+static int skfd = -1;
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dump_netcfg(struct netcfg_info *cfg_ptr)
+{
+	int i;
+
+	printf("..........  DPAA Configuration  ..........\n\n");
+
+	/* Network interfaces */
+	printf("Network interfaces: %d\n", cfg_ptr->num_ethports);
+	for (i = 0; i < cfg_ptr->num_ethports; i++) {
+		struct fman_if_bpool *bpool;
+		struct fm_eth_port_cfg *p_cfg = &cfg_ptr->port_cfg[i];
+		struct fman_if *__if = p_cfg->fman_if;
+
+		printf("\n+ Fman %d, MAC %d (%s);\n",
+		       __if->fman_idx, __if->mac_idx,
+		       (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+
+		printf("\tmac_addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
+		       (&__if->mac_addr)->addr_bytes[0],
+		       (&__if->mac_addr)->addr_bytes[1],
+		       (&__if->mac_addr)->addr_bytes[2],
+		       (&__if->mac_addr)->addr_bytes[3],
+		       (&__if->mac_addr)->addr_bytes[4],
+		       (&__if->mac_addr)->addr_bytes[5]);
+
+		printf("\ttx_channel_id: 0x%02x\n",
+		       __if->tx_channel_id);
+
+		printf("\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
+		printf("\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
+
+		printf("\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
+		printf("\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
+		fman_if_for_each_bpool(bpool, __if)
+			printf("\tbuffer pool: (bpid=%d, count=%"PRId64
+			       " size=%"PRId64", addr=0x%"PRIx64")\n",
+			       bpool->bpid, bpool->count, bpool->size,
+			       bpool->addr);
+	}
+}
+#endif /* RTE_LIBRTE_DPAA_DEBUG_DRIVER */
+
+static inline int
+get_num_netcfg_interfaces(char *str)
+{
+	char *pch;
+	uint8_t count = 0;
+
+	if (str == NULL)
+		return -EINVAL;
+	pch = strtok(str, ",");
+	while (pch != NULL) {
+		count++;
+		pch = strtok(NULL, ",");
+	}
+	return count;
+}
+
+struct netcfg_info *
+netcfg_acquire(void)
+{
+	struct fman_if *__if;
+	int _errno, idx = 0;
+	uint8_t num_ports = 0;
+	uint8_t num_cfg_ports = 0;
+	size_t size;
+
+	/* Extract the DPAA configuration from the fman driver and the FMC
+	 * configuration for command-line interfaces.
+	 */
+
+	/* Open a basic socket to enable/disable shared
+	 * interfaces.
+	 */
+	skfd = socket(AF_PACKET, SOCK_RAW, 0);
+	if (unlikely(skfd < 0)) {
+		error(0, errno, "%s(): open(SOCK_RAW)", __func__);
+		return NULL;
+	}
+
+	/* Initialise the Fman driver */
+	_errno = fman_init();
+	if (_errno) {
+		DPAA_BUS_LOG(ERR, "FMAN driver init failed (%d)", errno);
+		close(skfd);
+		skfd = -1;
+		return NULL;
+	}
+
+	/* Number of MAC ports */
+	list_for_each_entry(__if, fman_if_list, node)
+		num_ports++;
+
+	if (!num_ports) {
+		DPAA_BUS_LOG(ERR, "FMAN ports not available");
+		return NULL;
+	}
+	/* Allocate space for all enabled mac ports */
+	size = sizeof(*netcfg) +
+		(num_ports * sizeof(struct fm_eth_port_cfg));
+
+	netcfg = calloc(1, size);
+	if (unlikely(netcfg == NULL)) {
+		DPAA_BUS_LOG(ERR, "Unable to allocat mem for netcfg");
+		goto error;
+	}
+
+	netcfg->num_ethports = num_ports;
+
+	list_for_each_entry(__if, fman_if_list, node) {
+		struct fm_eth_port_cfg *cfg = &netcfg->port_cfg[idx];
+		/* Hook in the fman driver interface */
+		cfg->fman_if = __if;
+		cfg->rx_def = __if->fqid_rx_def;
+		num_cfg_ports++;
+		idx++;
+	}
+
+	if (!num_cfg_ports) {
+		DPAA_BUS_LOG(ERR, "No FMAN ports found");
+		goto error;
+	} else if (num_ports != num_cfg_ports)
+		netcfg->num_ethports = num_cfg_ports;
+
+	return netcfg;
+
+error:
+	if (netcfg) {
+		free(netcfg);
+		netcfg = NULL;
+	}
+
+	return NULL;
+}
+
+void
+netcfg_release(struct netcfg_info *cfg_ptr)
+{
+	free(cfg_ptr);
+	/* Close socket for shared interfaces */
+	if (skfd >= 0) {
+		close(skfd);
+		skfd = -1;
+	}
+}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
new file mode 100644
index 0000000..1143cc9
--- /dev/null
+++ b/drivers/bus/dpaa/include/fman.h
@@ -0,0 +1,459 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FMAN_H
+#define __FMAN_H
+
+#include <stdbool.h>
+#include <net/if.h>
+
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+
+#include <compat.h>
+
+#ifndef FMAN_DEVICE_PATH
+#define FMAN_DEVICE_PATH "/dev/mem"
+#endif
+
+#define MEMAC_NUM_OF_PADDRS 7 /* Num of additional exact match MAC adr regs */
+
+/* Control and Configuration Register (COMMAND_CONFIG) for MEMAC */
+#define CMD_CFG_LOOPBACK_EN	0x00000400
+/**< 21 XGMII/GMII loopback enable */
+#define CMD_CFG_PROMIS_EN	0x00000010
+/**< 27 Promiscuous operation enable */
+#define CMD_CFG_PAUSE_IGNORE	0x00000100
+/**< 23 Ignore Pause frame quanta */
+
+/* Statistics Configuration Register (STATN_CONFIG) */
+#define STATS_CFG_CLR           0x00000004
+/**< 29 Reset all counters */
+#define STATS_CFG_CLR_ON_RD     0x00000002
+/**< 30 Clear on read */
+#define STATS_CFG_SATURATE      0x00000001
+/**< 31 Saturate at the maximum val */
+
+/**< Max receive frame length mask */
+#define MAXFRM_SIZE_MEMAC	0x00007fe0
+#define MAXFRM_RX_MASK		0x0000ffff
+
+/**< Interface Mode Register Register for MEMAC */
+#define IF_MODE_RLP 0x00000820
+
+/**< Pool Limits */
+#define FMAN_PORT_MAX_EXT_POOLS_NUM	8
+#define FMAN_PORT_OBS_EXT_POOLS_NUM	2
+
+#define FMAN_PORT_CG_MAP_NUM		8
+#define FMAN_PORT_PRS_RESULT_WORDS_NUM	8
+#define FMAN_PORT_BMI_FIFO_UNITS	0x100
+#define FMAN_PORT_IC_OFFSET_UNITS	0x10
+
+#define FMAN_ENABLE_BPOOL_DEPLETION	0xF00000F0
+
+#define HASH_CTRL_MCAST_EN	0x00000100
+#define GROUP_ADDRESS		0x0000010000000000LL
+#define HASH_CTRL_ADDR_MASK	0x0000003F
+
+/* Pre definitions of FMAN interface and Bpool structures */
+struct __fman_if;
+struct fman_if_bpool;
+/* Lists of fman interfaces and bpools */
+TAILQ_HEAD(rte_fman_if_list, __fman_if);
+
+/* Represents the different flavour of network interface */
+enum fman_mac_type {
+	fman_offline = 0,
+	fman_mac_1g,
+	fman_mac_10g,
+};
+
+struct mac_addr {
+	uint32_t   mac_addr_l;	/**< Lower 32 bits of 48-bit MAC address */
+	uint32_t   mac_addr_u;	/**< Upper 16 bits of 48-bit MAC address */
+};
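+
+/* Register byte order, inferred from the accessors in fman_hw.c: the lower
+ * register holds the first four octets of the address, least significant
+ * byte first, and the upper register holds the last two. For example,
+ * aa:bb:cc:dd:ee:ff is stored as mac_addr_l = 0xddccbbaa and
+ * mac_addr_u = 0x0000ffee.
+ */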
+
+struct memac_regs {
+	/* General Control and Status */
+	uint32_t res0000[2];
+	uint32_t command_config;	/**< 0x008 Ctrl and cfg */
+	struct mac_addr mac_addr0;	/**< 0x00C-0x010 MAC_ADDR_0...1 */
+	uint32_t maxfrm;		/**< 0x014 Max frame length */
+	uint32_t res0018[5];
+	uint32_t hashtable_ctrl;	/**< 0x02C Hash table control */
+	uint32_t res0030[4];
+	uint32_t ievent;		/**< 0x040 Interrupt event */
+	uint32_t tx_ipg_length;
+	/**< 0x044 Transmitter inter-packet-gap */
+	uint32_t res0048;
+	uint32_t imask;			/**< 0x04C Interrupt mask */
+	uint32_t res0050;
+	uint32_t pause_quanta[4];	/**< 0x054 Pause quanta */
+	uint32_t pause_thresh[4];	/**< 0x064 Pause quanta threshold */
+	uint32_t rx_pause_status;	/**< 0x074 Receive pause status */
+	uint32_t res0078[2];
+	struct mac_addr mac_addr[MEMAC_NUM_OF_PADDRS];
+	/**< 0x80-0x0B4 mac padr */
+	uint32_t lpwake_timer;
+	/**< 0x0B8 Low Power Wakeup Timer */
+	uint32_t sleep_timer;
+	/**< 0x0BC Transmit EEE Low Power Timer */
+	uint32_t res00c0[8];
+	uint32_t statn_config;
+	/**< 0x0E0 Statistics configuration */
+	uint32_t res00e4[7];
+	/* Rx Statistics Counter */
+	uint32_t reoct_l;
+	uint32_t reoct_u;
+	uint32_t roct_l;
+	uint32_t roct_u;
+	uint32_t raln_l;
+	uint32_t raln_u;
+	uint32_t rxpf_l;
+	uint32_t rxpf_u;
+	uint32_t rfrm_l;
+	uint32_t rfrm_u;
+	uint32_t rfcs_l;
+	uint32_t rfcs_u;
+	uint32_t rvlan_l;
+	uint32_t rvlan_u;
+	uint32_t rerr_l;
+	uint32_t rerr_u;
+	uint32_t ruca_l;
+	uint32_t ruca_u;
+	uint32_t rmca_l;
+	uint32_t rmca_u;
+	uint32_t rbca_l;
+	uint32_t rbca_u;
+	uint32_t rdrp_l;
+	uint32_t rdrp_u;
+	uint32_t rpkt_l;
+	uint32_t rpkt_u;
+	uint32_t rund_l;
+	uint32_t rund_u;
+	uint32_t r64_l;
+	uint32_t r64_u;
+	uint32_t r127_l;
+	uint32_t r127_u;
+	uint32_t r255_l;
+	uint32_t r255_u;
+	uint32_t r511_l;
+	uint32_t r511_u;
+	uint32_t r1023_l;
+	uint32_t r1023_u;
+	uint32_t r1518_l;
+	uint32_t r1518_u;
+	uint32_t r1519x_l;
+	uint32_t r1519x_u;
+	uint32_t rovr_l;
+	uint32_t rovr_u;
+	uint32_t rjbr_l;
+	uint32_t rjbr_u;
+	uint32_t rfrg_l;
+	uint32_t rfrg_u;
+	uint32_t rcnp_l;
+	uint32_t rcnp_u;
+	uint32_t rdrntp_l;
+	uint32_t rdrntp_u;
+	uint32_t res01d0[12];
+	/* Tx Statistics Counter */
+	uint32_t teoct_l;
+	uint32_t teoct_u;
+	uint32_t toct_l;
+	uint32_t toct_u;
+	uint32_t res0210[2];
+	uint32_t txpf_l;
+	uint32_t txpf_u;
+	uint32_t tfrm_l;
+	uint32_t tfrm_u;
+	uint32_t tfcs_l;
+	uint32_t tfcs_u;
+	uint32_t tvlan_l;
+	uint32_t tvlan_u;
+	uint32_t terr_l;
+	uint32_t terr_u;
+	uint32_t tuca_l;
+	uint32_t tuca_u;
+	uint32_t tmca_l;
+	uint32_t tmca_u;
+	uint32_t tbca_l;
+	uint32_t tbca_u;
+	uint32_t res0258[2];
+	uint32_t tpkt_l;
+	uint32_t tpkt_u;
+	uint32_t tund_l;
+	uint32_t tund_u;
+	uint32_t t64_l;
+	uint32_t t64_u;
+	uint32_t t127_l;
+	uint32_t t127_u;
+	uint32_t t255_l;
+	uint32_t t255_u;
+	uint32_t t511_l;
+	uint32_t t511_u;
+	uint32_t t1023_l;
+	uint32_t t1023_u;
+	uint32_t t1518_l;
+	uint32_t t1518_u;
+	uint32_t t1519x_l;
+	uint32_t t1519x_u;
+	uint32_t res02a8[6];
+	uint32_t tcnp_l;
+	uint32_t tcnp_u;
+	uint32_t res02c8[14];
+	/* Line Interface Control */
+	uint32_t if_mode;		/**< 0x300 Interface Mode Control */
+	uint32_t if_status;		/**< 0x304 Interface Status */
+	uint32_t res0308[14];
+	/* HiGig/2 */
+	uint32_t hg_config;		/**< 0x340 Control and cfg */
+	uint32_t res0344[3];
+	uint32_t hg_pause_quanta;	/**< 0x350 Pause quanta */
+	uint32_t res0354[3];
+	uint32_t hg_pause_thresh;	/**< 0x360 Pause quanta threshold */
+	uint32_t res0364[3];
+	uint32_t hgrx_pause_status;	/**< 0x370 Receive pause status */
+	uint32_t hg_fifos_status;	/**< 0x374 fifos status */
+	uint32_t rhm;			/**< 0x378 rx messages counter */
+	uint32_t thm;			/**< 0x37C tx messages counter */
+};
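+
+/* Note: each statistics counter above is 64 bits wide, exposed as a pair of
+ * 32-bit registers ("_l" lower half, "_u" upper half); see
+ * fman_memac_stats_get() for how the two halves are combined.
+ */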
+
+struct rx_bmi_regs {
+	uint32_t fmbm_rcfg;		/**< Rx Configuration */
+	uint32_t fmbm_rst;		/**< Rx Status */
+	uint32_t fmbm_rda;		/**< Rx DMA attributes*/
+	uint32_t fmbm_rfp;		/**< Rx FIFO Parameters*/
+	uint32_t fmbm_rfed;		/**< Rx Frame End Data*/
+	uint32_t fmbm_ricp;		/**< Rx Internal Context Parameters*/
+	uint32_t fmbm_rim;		/**< Rx Internal Buffer Margins*/
+	uint32_t fmbm_rebm;		/**< Rx External Buffer Margins*/
+	uint32_t fmbm_rfne;		/**< Rx Frame Next Engine*/
+	uint32_t fmbm_rfca;		/**< Rx Frame Command Attributes.*/
+	uint32_t fmbm_rfpne;		/**< Rx Frame Parser Next Engine*/
+	uint32_t fmbm_rpso;		/**< Rx Parse Start Offset*/
+	uint32_t fmbm_rpp;		/**< Rx Policer Profile  */
+	uint32_t fmbm_rccb;		/**< Rx Coarse Classification Base */
+	uint32_t fmbm_reth;		/**< Rx Excessive Threshold */
+	uint32_t reserved003c[1];	/**< (0x03C 0x03F) */
+	uint32_t fmbm_rprai[FMAN_PORT_PRS_RESULT_WORDS_NUM];
+					/**< Rx Parse Results Array Init*/
+	uint32_t fmbm_rfqid;		/**< Rx Frame Queue ID*/
+	uint32_t fmbm_refqid;		/**< Rx Error Frame Queue ID*/
+	uint32_t fmbm_rfsdm;		/**< Rx Frame Status Discard Mask*/
+	uint32_t fmbm_rfsem;		/**< Rx Frame Status Error Mask*/
+	uint32_t fmbm_rfene;		/**< Rx Frame Enqueue Next Engine */
+	uint32_t reserved0074[0x2];	/**< (0x074-0x07C)  */
+	uint32_t fmbm_rcmne;
+	/**< Rx Frame Continuous Mode Next Engine */
+	uint32_t reserved0080[0x20];/**< (0x080 0x0FF)  */
+	uint32_t fmbm_ebmpi[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Buffer Manager pool Information */
+	uint32_t fmbm_acnt[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Allocate Counter */
+	uint32_t reserved0130[8];
+					/**< (0x130-0x15F reserved) */
+	uint32_t fmbm_rcgm[FMAN_PORT_CG_MAP_NUM];
+					/**< Congestion Group Map*/
+	uint32_t fmbm_mpd;		/**< BM Pool Depletion  */
+	uint32_t reserved0184[0x1F];	/**< (0x184 0x1FF) */
+	uint32_t fmbm_rstc;		/**< Rx Statistics Counters*/
+	uint32_t fmbm_rfrc;		/**< Rx Frame Counter*/
+	uint32_t fmbm_rfbc;		/**< Rx Bad Frames Counter*/
+	uint32_t fmbm_rlfc;		/**< Rx Large Frames Counter*/
+	uint32_t fmbm_rffc;		/**< Rx Filter Frames Counter*/
+	uint32_t fmbm_rfdc;		/**< Rx Frame Discard Counter*/
+	uint32_t fmbm_rfldec;		/**< Rx Frames List DMA Error Counter*/
+	uint32_t fmbm_rodc;		/**< Rx Out of Buffers Discard Counter*/
+	uint32_t fmbm_rbdc;		/**< Rx Buffers Deallocate Counter*/
+	uint32_t reserved0224[0x17];	/**< (0x224 0x27F) */
+	uint32_t fmbm_rpc;		/**< Rx Performance Counters*/
+	uint32_t fmbm_rpcp;		/**< Rx Performance Count Parameters*/
+	uint32_t fmbm_rccn;		/**< Rx Cycle Counter*/
+	uint32_t fmbm_rtuc;		/**< Rx Tasks Utilization Counter*/
+	uint32_t fmbm_rrquc;
+	/**< Rx Receive Queue Utilization cntr*/
+	uint32_t fmbm_rduc;		/**< Rx DMA Utilization Counter*/
+	uint32_t fmbm_rfuc;		/**< Rx FIFO Utilization Counter*/
+	uint32_t fmbm_rpac;		/**< Rx Pause Activation Counter*/
+	uint32_t reserved02a0[0x18];	/**< (0x2A0 0x2FF) */
+	uint32_t fmbm_rdbg;		/**< Rx Debug */
+};
+
+struct fman_port_qmi_regs {
+	uint32_t fmqm_pnc;		/**< PortID n Configuration Register */
+	uint32_t fmqm_pns;		/**< PortID n Status Register */
+	uint32_t fmqm_pnts;		/**< PortID n Task Status Register */
+	uint32_t reserved00c[4];	/**< 0xn00C - 0xn01B */
+	uint32_t fmqm_pnen;		/**< PortID n Enqueue NIA Register */
+	uint32_t fmqm_pnetfc;		/**< PortID n Enq Total Frame Counter */
+	uint32_t reserved024[2];	/**< 0xn024 - 0xn02B */
+	uint32_t fmqm_pndn;		/**< PortID n Dequeue NIA Register */
+	uint32_t fmqm_pndc;		/**< PortID n Dequeue Config Register */
+	uint32_t fmqm_pndtfc;		/**< PortID n Dequeue tot Frame cntr */
+	uint32_t fmqm_pndfdc;		/**< PortID n Dequeue FQID Dflt Cntr */
+	uint32_t fmqm_pndcc;		/**< PortID n Dequeue Confirm Counter */
+};
+
+/* This struct exports parameters about an Fman network interface, determined
+ * from the device-tree.
+ */
+struct fman_if {
+	/* Which Fman this interface belongs to */
+	uint8_t fman_idx;
+	/* The type/speed of the interface */
+	enum fman_mac_type mac_type;
+	/* Boolean, set when mac type is memac */
+	uint8_t is_memac;
+	/* Boolean, set when PHY is RGMII */
+	uint8_t is_rgmii;
+	/* The index of this MAC (within the Fman it belongs to) */
+	uint8_t mac_idx;
+	/* The MAC address */
+	struct ether_addr mac_addr;
+	/* The Qman channel to schedule Tx FQs to */
+	u16 tx_channel_id;
+	/* The hard-coded FQIDs for this interface. Note: this doesn't cover
+	 * the PCD nor the "Rx default" FQIDs, which are configured via FMC
+	 * and its XML-based configuration.
+	 */
+	uint32_t fqid_rx_def;
+	uint32_t fqid_rx_err;
+	uint32_t fqid_tx_err;
+	uint32_t fqid_tx_confirm;
+
+	struct list_head bpool_list;
+	/* The node for linking this interface into "fman_if_list" */
+	struct list_head node;
+};
+
+/* This struct exposes parameters for buffer pools, extracted from the network
+ * interface settings in the device tree.
+ */
+struct fman_if_bpool {
+	uint32_t bpid;
+	uint64_t count;
+	uint64_t size;
+	uint64_t addr;
+	/* The node for linking this bpool into fman_if::bpool_list */
+	struct list_head node;
+};
+
+/* Internal Context transfer params - FMBM_RICP. All fields below are byte
+ * counts; the hardware register encodes them in 16-byte units.
+ */
+struct fman_if_ic_params {
+	/* IC offset in the packet buffer */
+	uint16_t iceof;
+	/* IC internal offset */
+	uint16_t iciof;
+	/* IC size to copy */
+	uint16_t icsz;
+};
+
+/* The exported "struct fman_if" type contains the subset of fields we want
+ * exposed. This struct is embedded in a larger "struct __fman_if" which
+ * contains the extra bits we *don't* want exposed.
+ */
+struct __fman_if {
+	struct fman_if __if;
+	char node_path[PATH_MAX];
+	uint64_t regs_size;
+	void *ccsr_map;
+	void *bmi_map;
+	void *qmi_map;
+	struct list_head node;
+};
+
+/* And this is the base list node that the interfaces are added to. (See
+ * fman_if_enable_all_rx() in <fsl_fman.h> for an example of its use.)
+ */
+extern const struct list_head *fman_if_list;
+
+extern int fman_ccsr_map_fd;
+
+/* To iterate the "bpool_list" for an interface, e.g.:
+ *        struct fman_if *p = get_ptr_to_some_interface();
+ *        struct fman_if_bpool *bp;
+ *        printf("Interface uses the following BPIDs:\n");
+ *        fman_if_for_each_bpool(bp, p) {
+ *            printf("    %d\n", bp->bpid);
+ *            [...]
+ *        }
+ */
+#define fman_if_for_each_bpool(bp, __if) \
+	list_for_each_entry(bp, &(__if)->bpool_list, node)
+
+#define FMAN_ERR(rc, fmt, args...) \
+	do { \
+		_errno = (rc); \
+		DPAA_BUS_LOG(ERR, fmt "(%d)", ##args, errno); \
+		goto err; \
+	} while (0)
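+
+/* FMAN_ERR expects its caller to declare a local "_errno" variable and an
+ * "err" label; a minimal usage sketch:
+ *
+ *        int _errno;
+ *        ...
+ *        if (!handle)
+ *                FMAN_ERR(-EINVAL, "handle lookup failed");
+ *        ...
+ * err:
+ *        return _errno;
+ */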
+
+#define FMAN_IP_REV_1	0xC30C4
+#define FMAN_IP_REV_1_MAJOR_MASK 0x0000FF00
+#define FMAN_IP_REV_1_MAJOR_SHIFT 8
+#define FMAN_V3	0x06
+#define FMAN_V3_CONTEXTA_EN_A2V	0x10000000
+#define FMAN_V3_CONTEXTA_EN_OVOM	0x02000000
+#define FMAN_V3_CONTEXTA_EN_EBD	0x80000000
+#define FMAN_CONTEXTA_DIS_CHECKSUM	0x7ull
+#define FMAN_CONTEXTA_SET_OPCODE11 0x2000000b00000000
+extern u16 fman_ip_rev;
+extern u32 fman_dealloc_bufs_mask_hi;
+extern u32 fman_dealloc_bufs_mask_lo;
+
+/**
+ * Initialize the FMAN driver
+ *
+ * @args void
+ * @return
+ *	0 for success; error code otherwise
+ */
+int fman_init(void);
+
+/**
+ * Teardown the FMAN driver
+ *
+ * @args void
+ * @return void
+ */
+void fman_finish(void);
+
+#endif	/* __FMAN_H */
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
new file mode 100644
index 0000000..b77a678
--- /dev/null
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -0,0 +1,96 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __NETCFG_H
+#define __NETCFG_H
+
+#include <fman.h>
+#include <argp.h>
+
+/* Configuration information related to a specific ethernet port */
+struct fm_eth_port_cfg {
+	/**< A list of PCD FQ ranges, obtained from FMC configuration */
+	struct list_head *list;
+	/**< The "Rx default" FQID, obtained from FMC configuration */
+	uint32_t rx_def;
+	/**< Other interface details are in the fman driver interface */
+	struct fman_if *fman_if;
+};
+
+struct netcfg_info {
+	uint8_t num_ethports;
+	/**< Number of ports */
+	struct fm_eth_port_cfg port_cfg[0];
+	/**< Variable structure array of size num_ethports */
+};
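+
+/* port_cfg[] is a flexible array member, so a netcfg_info covering "n" ports
+ * is allocated as one block, e.g. (a sketch):
+ *
+ *        struct netcfg_info *cfg = calloc(1, sizeof(*cfg) +
+ *                n * sizeof(cfg->port_cfg[0]));
+ */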
+
+struct interface_info {
+	char *name;
+	struct ether_addr mac_addr;
+	struct ether_addr peer_mac;
+	int mac_present;
+	int fman_enabled_mac_interface;
+};
+
+struct netcfg_interface {
+	uint8_t numof_netcfg_interface;
+	uint8_t numof_fman_enabled_macless;
+	struct interface_info interface_info[0];
+};
+
+/* Discovers the available network interfaces and returns the configuration
+ * information in newly allocated memory. (The FMC netpcd "policy" and config
+ * XML files are processed by FMC itself, not by this call.)
+ */
+struct netcfg_info *netcfg_acquire(void);
+
+/* cfg_ptr: configuration information pointer.
+ * Frees the resources allocated by the configuration layer.
+ */
+void netcfg_release(struct netcfg_info *cfg_ptr);
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* cfg_ptr: configuration information pointer.
+ * This function dumps configuration data to stdout.
+ */
+void dump_netcfg(struct netcfg_info *cfg_ptr);
+#endif
+
+#endif /* __NETCFG_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 06/40] bus/dpaa: add FMan hardware operations
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (4 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
                       ` (34 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/fman/fman_hw.c      | 606 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_fman.h       | 182 +++++++++
 drivers/bus/dpaa/include/fsl_fman_crc64.h | 263 +++++++++++++
 4 files changed, 1052 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 4b1715d..9f416fe 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -65,6 +65,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
+	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c
 
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
new file mode 100644
index 0000000..77908ec
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -0,0 +1,606 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+#include <fman.h>
+/* These headers declare things about the Fman hardware itself (the format of
+ * status words and an inline implementation of CRC64). We include them only
+ * in order to instantiate the one global variable that the CRC64 code
+ * depends on.
+ */
+#include <fsl_fman.h>
+#include <fsl_fman_crc64.h>
+
+/* Instantiate the global variable that the inline CRC64 implementation (in
+ * <fsl_fman_crc64.h>) depends on.
+ */
+DECLARE_FMAN_CRC64_TABLE();
+
+#define ETH_ADDR_TO_UINT64(eth_addr)                  \
+	(uint64_t)(((uint64_t)(eth_addr)[0] << 40) |   \
+	((uint64_t)(eth_addr)[1] << 32) |   \
+	((uint64_t)(eth_addr)[2] << 24) |   \
+	((uint64_t)(eth_addr)[3] << 16) |   \
+	((uint64_t)(eth_addr)[4] << 8) |    \
+	((uint64_t)(eth_addr)[5]))
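+
+/* For example, ETH_ADDR_TO_UINT64 packs aa:bb:cc:dd:ee:ff into
+ * 0x0000aabbccddeeff, which places the I/G (group) bit of the first octet
+ * at bit 40, the bit that GROUP_ADDRESS tests for a multicast address.
+ */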
+
+void
+fman_if_set_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i|HASH_CTRL_MCAST_EN);
+}
+
+void
+fman_if_reset_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i & ~HASH_CTRL_MCAST_EN);
+}
+
+static
+uint32_t get_mac_hash_code(uint64_t eth_addr)
+{
+	uint64_t	mask1, mask2;
+	uint32_t	xorVal = 0;
+	uint8_t		i, j;
+
+	for (i = 0; i < 6; i++) {
+		mask1 = eth_addr & (uint64_t)0x01;
+		eth_addr >>= 1;
+
+		for (j = 0; j < 7; j++) {
+			mask2 = eth_addr & (uint64_t)0x01;
+			mask1 ^= mask2;
+			eth_addr >>= 1;
+		}
+
+		xorVal |= (mask1 << (5 - i));
+	}
+
+	return xorVal;
+}
+
+int
+fman_memac_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	uint64_t eth_addr;
+	void *hashtable_ctrl;
+	uint32_t hash;
+
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	eth_addr = ETH_ADDR_TO_UINT64(eth);
+
+	if (!(eth_addr & GROUP_ADDRESS))
+		return -1;
+
+	hash = get_mac_hash_code(eth_addr) & HASH_CTRL_ADDR_MASK;
+	hash = hash | HASH_CTRL_MCAST_EN;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	out_be32(hashtable_ctrl, hash);
+
+	return 0;
+}
+
+int
+fman_memac_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *mac_reg =
+		&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
+	u32 val = in_be32(mac_reg);
+
+	eth[0] = (val & 0x000000ff) >> 0;
+	eth[1] = (val & 0x0000ff00) >> 8;
+	eth[2] = (val & 0x00ff0000) >> 16;
+	eth[3] = (val & 0xff000000) >> 24;
+
+	mac_reg =  &((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_u;
+	val = in_be32(mac_reg);
+
+	eth[4] = (val & 0x000000ff) >> 0;
+	eth[5] = (val & 0x0000ff00) >> 8;
+
+	return 0;
+}
+
+static void
+fman_memac_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	void *reg;
+
+	if (addr_num) {
+		reg = &((struct memac_regs *)m->ccsr_map)->
+				mac_addr[addr_num-1].mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+		out_be32(reg, 0x0);
+	} else {
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+		out_be32(reg, 0x0);
+	}
+}
+
+static int
+fman_memac_add_mac_addr(struct fman_if *p, uint8_t *eth,
+				       uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+
+	void *reg;
+	u32 val;
+
+	memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_l;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+
+	val = (m->__if.mac_addr.addr_bytes[0] |
+	       (m->__if.mac_addr.addr_bytes[1] << 8) |
+	       (m->__if.mac_addr.addr_bytes[2] << 16) |
+	       (m->__if.mac_addr.addr_bytes[3] << 24));
+	out_be32(reg, val);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+
+	val = ((m->__if.mac_addr.addr_bytes[4] << 0) |
+	       (m->__if.mac_addr.addr_bytes[5] << 8));
+	out_be32(reg, val);
+
+	return 0;
+}
+
+
+static void
+fman_memac_stats_get(struct fman_if *p,
+		     struct rte_eth_stats *stats)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+
+	/* read received packet, byte and error counts */
+	stats->ipackets = ((u64)in_be32(&regs->rfrm_u)) << 32 |
+			in_be32(&regs->rfrm_l);
+	stats->ibytes = ((u64)in_be32(&regs->roct_u)) << 32 |
+			in_be32(&regs->roct_l);
+	stats->ierrors = ((u64)in_be32(&regs->rerr_u)) << 32 |
+			in_be32(&regs->rerr_l);
+
+	/* read transmitted packet, byte and error counts */
+	stats->opackets = ((u64)in_be32(&regs->tfrm_u)) << 32 |
+			in_be32(&regs->tfrm_l);
+	stats->obytes = ((u64)in_be32(&regs->toct_u)) << 32 |
+			in_be32(&regs->toct_l);
+	stats->oerrors = ((u64)in_be32(&regs->terr_u)) << 32 |
+			in_be32(&regs->terr_l);
+}
+
+static void
+fman_memac_reset_stat(struct fman_if *p)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+	uint32_t tmp;
+
+	tmp = in_be32(&regs->statn_config);
+
+	tmp |= STATS_CFG_CLR;
+
+	out_be32(&regs->statn_config, tmp);
+
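+	/* STATS_CFG_CLR self-clears once the counter reset completes */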
+	while (in_be32(&regs->statn_config) & STATS_CFG_CLR)
+		;
+}
+
+int
+fm_mac_add_exact_match_mac_addr(struct fman_if *p, uint8_t *eth,
+				    uint8_t addr_num)
+{
+	assert(fman_ccsr_map_fd != -1);
+
+	return fman_memac_add_mac_addr(p, eth, addr_num);
+}
+
+int
+fm_mac_rem_exact_match_mac_addr(struct fman_if *p, int8_t addr_num)
+{
+	assert(fman_ccsr_map_fd != -1);
+
+	fman_memac_clear_mac_addr(p, addr_num);
+	return 0;
+}
+
+int
+fm_mac_config(struct fman_if *p,  uint8_t *eth)
+{
+	assert(fman_ccsr_map_fd != -1);
+
+	return fman_memac_get_primary_mac_addr(p, eth);
+}
+
+void
+fm_mac_set_rx_ignore_pause_frames(struct fman_if *p, bool enable)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	u32 value = 0;
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Rx Ignore Pause Frames */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	if (enable)
+		value = in_be32(cmdcfg) | CMD_CFG_PAUSE_IGNORE;
+	else
+		value = in_be32(cmdcfg) & ~CMD_CFG_PAUSE_IGNORE;
+
+	out_be32(cmdcfg, value);
+}
+
+void
+fm_mac_config_loopback(struct fman_if *p, bool enable)
+{
+	if (enable)
+		/* Enable loopback mode */
+		fman_if_loopback_enable(p);
+	else
+		/* Disable loopback mode */
+		fman_if_loopback_disable(p);
+}
+
+void
+fm_mac_conf_max_frame_len(struct fman_if *p,
+			       unsigned int max_frame_len)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	unsigned int *maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Max frame length */
+	maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+	out_be32(maxfrm, (MAXFRM_RX_MASK & max_frame_len));
+}
+
+void
+fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats)
+{
+	fman_memac_stats_get(p, stats);
+}
+
+void
+fman_if_stats_reset(struct fman_if *p)
+{
+	fman_memac_reset_stat(p);
+}
+
+void
+fm_mac_set_promiscuous(struct fman_if *p)
+{
+	fman_if_promiscuous_enable(p);
+}
+
+void
+fman_if_promiscuous_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_PROMIS_EN);
+}
+
+void
+fman_if_promiscuous_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Disable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) & (~CMD_CFG_PROMIS_EN));
+}
+
+void
+fman_if_enable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* enable both Rx and Tx: COMMAND_CONFIG lives at offset 0x8 */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) | 3);
+}
+
+void
+fman_if_disable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* only disable Rx, not Tx */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) & ~(u32)2);
+}
+
+void
+fman_if_loopback_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) | IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_loopback_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+	/* Disable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) & ~IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) & ~CMD_CFG_LOOPBACK_EN);
+	}
+}
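+
+/* FMBM_EBMPI packing, inferred from the read-modify-write below: the top
+ * bits forced by ebmpi_val_ace (and preserved via ebmpi_mask) carry control
+ * flags, bits 21:16 carry the buffer pool ID and bits 15:0 the buffer size.
+ */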
+
+void
+fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
+		    int bpid, size_t bufsize)
+{
+	u32 fmbm_ebmpi;
+	u32 ebmpi_val_ace = 0xc0000000;
+	u32 ebmpi_mask = 0xffc00000;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ebmpi =
+	       in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0]);
+	fmbm_ebmpi = ebmpi_val_ace | (fmbm_ebmpi & ebmpi_mask) | (bpid << 16) |
+		     (bufsize);
+
+	out_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0],
+		 fmbm_ebmpi);
+}
+
+int
+fman_if_get_fc_quanta(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	return in_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0]);
+}
+
+int
+fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	out_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0],
+		 pause_quanta);
+	return 0;
+}
+
+int
+fman_if_get_fdoff(struct fman_if *fm_if)
+{
+	u32 fmbm_ricp;
+	int fdoff;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ricp =
+		   in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp);
+	/* fd offset = iceof + icsz; both fields are in 16-byte units */
+	fdoff = ((fmbm_ricp & iceof_mask) >> 16) * 16 +
+		(fmbm_ricp & icsz_mask) * 16;
+
+	return fdoff;
+}
+
+void
+fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_refqid =
+			&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_refqid;
+	out_be32(fmbm_refqid, err_fqid);
+}
+
+int
+fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	val = in_be32(fmbm_ricp);
+
+	icp->iceof = (val & iceof_mask) >> 12;
+	icp->iciof = (val & iciof_mask) >> 4;
+	icp->icsz = (val & icsz_mask) << 4;
+
+	return 0;
+}
+
+int
+fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	val |= (icp->iceof << 12) & iceof_mask;
+	val |= (icp->iciof << 4) & iciof_mask;
+	val |= (icp->icsz >> 4) & icsz_mask;
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	out_be32(fmbm_ricp, val);
+
+	return 0;
+}
+
+void
+fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rebm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_rebm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rebm;
+
+	out_be32(fmbm_rebm, in_be32(fmbm_rebm) | (fd_offset << 16));
+}
+
+void
+fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	out_be32(reg_maxfrm, (in_be32(reg_maxfrm) & 0xFFFF0000) | max_frm);
+}
+
+uint16_t
+fman_if_get_maxfrm(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	return (in_be32(reg_maxfrm) & 0x0000FFFF);
+}
+
+void
+fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmqm_pndn;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmqm_pndn = &((struct fman_port_qmi_regs *)__if->qmi_map)->fmqm_pndn;
+
+	out_be32(fmqm_pndn, nia);
+}
+
+void
+fman_if_discard_rx_errors(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rfsdm, *fmbm_rfsem;
+
+	fmbm_rfsem = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsem;
+	out_be32(fmbm_rfsem, 0);
+
+	/* Configure the discard mask to drop the error packets which have
+	 * DMA errors, frame size errors, header errors, etc. The mask
+	 * 0x010CE3F0 is configured to discard all the errors reported in
+	 * FD[STATUS].
+	 */
+	fmbm_rfsdm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsdm;
+	out_be32(fmbm_rfsdm, 0x010CE3F0);
+}
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
new file mode 100644
index 0000000..0aff22c
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -0,0 +1,182 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_H
+#define __FSL_FMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Status field in FD is updated on Rx side by FMAN with following information.
+ * Refer to field description in FM BG.
+ */
+struct fm_status_t {
+	unsigned int reserved0:3;
+	unsigned int dcl4c:1; /* Don't Check L4 Checksum */
+	unsigned int reserved1:1;
+	unsigned int ufd:1; /* Unsupported Format */
+	unsigned int lge:1; /* Length Error */
+	unsigned int dme:1; /* DMA Error */
+
+	unsigned int reserved2:4;
+	unsigned int fpe:1; /* Frame physical Error */
+	unsigned int fse:1; /* Frame Size Error */
+	unsigned int dis:1; /* Discard by Classification */
+	unsigned int reserved3:1;
+
+	unsigned int eof:1; /* Key Extraction goes out of frame */
+	unsigned int nss:1; /* No Scheme selected */
+	unsigned int kso:1; /* Key Size Overflow */
+	unsigned int reserved4:1;
+	unsigned int fcl:2; /* Frame Color */
+	unsigned int ipp:1; /* Illegal Policer Profile Selected */
+	unsigned int flm:1; /* Frame Length Mismatch */
+	unsigned int pte:1; /* Parser Timeout */
+	unsigned int isp:1; /* Invalid Soft Parser Instruction */
+	unsigned int phe:1; /* Header Error during parsing */
+	unsigned int frdr:1; /* Frame Dropped by disabled port */
+	unsigned int reserved5:4;
+} __attribute__ ((__packed__));
+
+/* Set promiscuous mode on an interface */
+void fm_mac_set_promiscuous(struct fman_if *p);
+
+/* Get MAC config (reads back the primary MAC address) */
+int fm_mac_config(struct fman_if *p, uint8_t *eth);
+
+/* Set MAC address for a particular interface */
+int fm_mac_add_exact_match_mac_addr(struct fman_if *p, uint8_t *eth,
+					      uint8_t addr_num);
+
+/* Remove a MAC address for a particular interface */
+int fm_mac_rem_exact_match_mac_addr(struct fman_if *p, int8_t addr_num);
+
+/* Get the FMAN statistics */
+void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
+
+/* Reset the FMAN statistics */
+void fman_if_stats_reset(struct fman_if *p);
+
+/* Set ignore pause option for a specific interface */
+void fm_mac_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
+
+/* Enable Loopback mode */
+void fm_mac_config_loopback(struct fman_if *p, bool enable);
+
+/* Set max frame length */
+void fm_mac_conf_max_frame_len(struct fman_if *p,
+			       unsigned int max_frame_len);
+
+/* Enable/disable Rx promiscuous mode on specified interface */
+void fman_if_promiscuous_enable(struct fman_if *);
+void fman_if_promiscuous_disable(struct fman_if *);
+
+/* Enable/disable Rx on specific interfaces */
+void fman_if_enable_rx(struct fman_if *);
+void fman_if_disable_rx(struct fman_if *);
+
+/* Enable/disable loopback on specific interfaces */
+void fman_if_loopback_enable(struct fman_if *);
+void fman_if_loopback_disable(struct fman_if *);
+
+/* Set buffer pool on specific interface */
+void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
+		    size_t bufsize);
+
+/* Get Flow Control pause quanta on specific interface */
+int fman_if_get_fc_quanta(struct fman_if *fm_if);
+
+/* Set Flow Control pause quanta on specific interface */
+int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
+
+/* Set default error fqid on specific interface */
+void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
+
+/* Get IC transfer params */
+int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
+
+/* Set IC transfer params */
+int fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp);
+
+/* Get interface fd->offset value */
+int fman_if_get_fdoff(struct fman_if *fm_if);
+
+/* Set interface fd->offset value */
+void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
+
+/* Get interface Max Frame length (MTU) */
+uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
+
+/* Set interface Max Frame length (MTU) */
+void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
+
+/* Set interface next invoked action for dequeue operation */
+void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
+
+/* discard error packets on rx */
+void fman_if_discard_rx_errors(struct fman_if *fm_if);
+
+void fman_if_set_mcast_filter_table(struct fman_if *p);
+
+void fman_if_reset_mcast_filter_table(struct fman_if *p);
+
+int fman_memac_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
+
+int fman_memac_get_primary_mac_addr(struct fman_if *p, uint8_t *eth);
+
+
+/* Enable/disable Rx on all interfaces */
+static inline void fman_if_enable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_enable_rx(__if);
+}
+
+static inline void fman_if_disable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_disable_rx(__if);
+}
+#endif /* __FSL_FMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_fman_crc64.h b/drivers/bus/dpaa/include/fsl_fman_crc64.h
new file mode 100644
index 0000000..af5803f
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman_crc64.h
@@ -0,0 +1,263 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_CRC64_H
+#define __FSL_FMAN_CRC64_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+ * The following definitions provide a software implementation of the CRC64
+ * algorithm implemented within Fman.
+ *
+ * The following example shows how to compute a CRC64 hash value based on
+ * SRC_IP, DST_IP and ESP_SPI values
+ *
+ *     #define compute_hash(saddr,daddr,spi) \
+ *        do { \
+ *           uint64_t result; \
+ *           result = fman_crc64_init(); \
+ *           result = fman_crc64_compute_32bit(saddr, result); \
+ *           result = fman_crc64_compute_32bit(daddr, result); \
+ *           result = fman_crc64_compute_32bit(spi, result); \
+ *           return (uint32_t) result & RC_HASH_MASK; \
+ *        } while (0);
+ *
+ * If hashing over a different number of fields (or of different types) is
+ * required, this can be implemented using the following primitives.
+ */
+
+/* The following table provides the constants used by the Fman CRC64
+ * implementation. The table is instantiated within the DPAA fman driver.
+ * However if the application is not going to be linked against the DPAA fman
+ * driver but will use this Fman CRC64 implementation, then it will need to
+ * instantiate this table by using the DECLARE_FMAN_CRC64_TABLE() macro.
+ */
+struct fman_crc64_t {
+	uint64_t initial;
+	uint64_t table[1 << 8];
+};
+extern struct fman_crc64_t FMAN_CRC64_ECMA_182;
+#define DECLARE_FMAN_CRC64_TABLE() \
+struct fman_crc64_t FMAN_CRC64_ECMA_182 = { \
+	0xFFFFFFFFFFFFFFFFULL, \
+	{ \
+		0x0000000000000000ULL, 0xb32e4cbe03a75f6fULL, \
+		0xf4843657a840a05bULL, 0x47aa7ae9abe7ff34ULL, \
+		0x7bd0c384ff8f5e33ULL, 0xc8fe8f3afc28015cULL, \
+		0x8f54f5d357cffe68ULL, 0x3c7ab96d5468a107ULL, \
+		0xf7a18709ff1ebc66ULL, 0x448fcbb7fcb9e309ULL, \
+		0x0325b15e575e1c3dULL, 0xb00bfde054f94352ULL, \
+		0x8c71448d0091e255ULL, 0x3f5f08330336bd3aULL, \
+		0x78f572daa8d1420eULL, 0xcbdb3e64ab761d61ULL, \
+		0x7d9ba13851336649ULL, 0xceb5ed8652943926ULL, \
+		0x891f976ff973c612ULL, 0x3a31dbd1fad4997dULL, \
+		0x064b62bcaebc387aULL, 0xb5652e02ad1b6715ULL, \
+		0xf2cf54eb06fc9821ULL, 0x41e11855055bc74eULL, \
+		0x8a3a2631ae2dda2fULL, 0x39146a8fad8a8540ULL, \
+		0x7ebe1066066d7a74ULL, 0xcd905cd805ca251bULL, \
+		0xf1eae5b551a2841cULL, 0x42c4a90b5205db73ULL, \
+		0x056ed3e2f9e22447ULL, 0xb6409f5cfa457b28ULL, \
+		0xfb374270a266cc92ULL, 0x48190ecea1c193fdULL, \
+		0x0fb374270a266cc9ULL, 0xbc9d3899098133a6ULL, \
+		0x80e781f45de992a1ULL, 0x33c9cd4a5e4ecdceULL, \
+		0x7463b7a3f5a932faULL, 0xc74dfb1df60e6d95ULL, \
+		0x0c96c5795d7870f4ULL, 0xbfb889c75edf2f9bULL, \
+		0xf812f32ef538d0afULL, 0x4b3cbf90f69f8fc0ULL, \
+		0x774606fda2f72ec7ULL, 0xc4684a43a15071a8ULL, \
+		0x83c230aa0ab78e9cULL, 0x30ec7c140910d1f3ULL, \
+		0x86ace348f355aadbULL, 0x3582aff6f0f2f5b4ULL, \
+		0x7228d51f5b150a80ULL, 0xc10699a158b255efULL, \
+		0xfd7c20cc0cdaf4e8ULL, 0x4e526c720f7dab87ULL, \
+		0x09f8169ba49a54b3ULL, 0xbad65a25a73d0bdcULL, \
+		0x710d64410c4b16bdULL, 0xc22328ff0fec49d2ULL, \
+		0x85895216a40bb6e6ULL, 0x36a71ea8a7ace989ULL, \
+		0x0adda7c5f3c4488eULL, 0xb9f3eb7bf06317e1ULL, \
+		0xfe5991925b84e8d5ULL, 0x4d77dd2c5823b7baULL, \
+		0x64b62bcaebc387a1ULL, 0xd7986774e864d8ceULL, \
+		0x90321d9d438327faULL, 0x231c512340247895ULL, \
+		0x1f66e84e144cd992ULL, 0xac48a4f017eb86fdULL, \
+		0xebe2de19bc0c79c9ULL, 0x58cc92a7bfab26a6ULL, \
+		0x9317acc314dd3bc7ULL, 0x2039e07d177a64a8ULL, \
+		0x67939a94bc9d9b9cULL, 0xd4bdd62abf3ac4f3ULL, \
+		0xe8c76f47eb5265f4ULL, 0x5be923f9e8f53a9bULL, \
+		0x1c4359104312c5afULL, 0xaf6d15ae40b59ac0ULL, \
+		0x192d8af2baf0e1e8ULL, 0xaa03c64cb957be87ULL, \
+		0xeda9bca512b041b3ULL, 0x5e87f01b11171edcULL, \
+		0x62fd4976457fbfdbULL, 0xd1d305c846d8e0b4ULL, \
+		0x96797f21ed3f1f80ULL, 0x2557339fee9840efULL, \
+		0xee8c0dfb45ee5d8eULL, 0x5da24145464902e1ULL, \
+		0x1a083bacedaefdd5ULL, 0xa9267712ee09a2baULL, \
+		0x955cce7fba6103bdULL, 0x267282c1b9c65cd2ULL, \
+		0x61d8f8281221a3e6ULL, 0xd2f6b4961186fc89ULL, \
+		0x9f8169ba49a54b33ULL, 0x2caf25044a02145cULL, \
+		0x6b055fede1e5eb68ULL, 0xd82b1353e242b407ULL, \
+		0xe451aa3eb62a1500ULL, 0x577fe680b58d4a6fULL, \
+		0x10d59c691e6ab55bULL, 0xa3fbd0d71dcdea34ULL, \
+		0x6820eeb3b6bbf755ULL, 0xdb0ea20db51ca83aULL, \
+		0x9ca4d8e41efb570eULL, 0x2f8a945a1d5c0861ULL, \
+		0x13f02d374934a966ULL, 0xa0de61894a93f609ULL, \
+		0xe7741b60e174093dULL, 0x545a57dee2d35652ULL, \
+		0xe21ac88218962d7aULL, 0x5134843c1b317215ULL, \
+		0x169efed5b0d68d21ULL, 0xa5b0b26bb371d24eULL, \
+		0x99ca0b06e7197349ULL, 0x2ae447b8e4be2c26ULL, \
+		0x6d4e3d514f59d312ULL, 0xde6071ef4cfe8c7dULL, \
+		0x15bb4f8be788911cULL, 0xa6950335e42fce73ULL, \
+		0xe13f79dc4fc83147ULL, 0x521135624c6f6e28ULL, \
+		0x6e6b8c0f1807cf2fULL, 0xdd45c0b11ba09040ULL, \
+		0x9aefba58b0476f74ULL, 0x29c1f6e6b3e0301bULL, \
+		0xc96c5795d7870f42ULL, 0x7a421b2bd420502dULL, \
+		0x3de861c27fc7af19ULL, 0x8ec62d7c7c60f076ULL, \
+		0xb2bc941128085171ULL, 0x0192d8af2baf0e1eULL, \
+		0x4638a2468048f12aULL, 0xf516eef883efae45ULL, \
+		0x3ecdd09c2899b324ULL, 0x8de39c222b3eec4bULL, \
+		0xca49e6cb80d9137fULL, 0x7967aa75837e4c10ULL, \
+		0x451d1318d716ed17ULL, 0xf6335fa6d4b1b278ULL, \
+		0xb199254f7f564d4cULL, 0x02b769f17cf11223ULL, \
+		0xb4f7f6ad86b4690bULL, 0x07d9ba1385133664ULL, \
+		0x4073c0fa2ef4c950ULL, 0xf35d8c442d53963fULL, \
+		0xcf273529793b3738ULL, 0x7c0979977a9c6857ULL, \
+		0x3ba3037ed17b9763ULL, 0x888d4fc0d2dcc80cULL, \
+		0x435671a479aad56dULL, 0xf0783d1a7a0d8a02ULL, \
+		0xb7d247f3d1ea7536ULL, 0x04fc0b4dd24d2a59ULL, \
+		0x3886b22086258b5eULL, 0x8ba8fe9e8582d431ULL, \
+		0xcc0284772e652b05ULL, 0x7f2cc8c92dc2746aULL, \
+		0x325b15e575e1c3d0ULL, 0x8175595b76469cbfULL, \
+		0xc6df23b2dda1638bULL, 0x75f16f0cde063ce4ULL, \
+		0x498bd6618a6e9de3ULL, 0xfaa59adf89c9c28cULL, \
+		0xbd0fe036222e3db8ULL, 0x0e21ac88218962d7ULL, \
+		0xc5fa92ec8aff7fb6ULL, 0x76d4de52895820d9ULL, \
+		0x317ea4bb22bfdfedULL, 0x8250e80521188082ULL, \
+		0xbe2a516875702185ULL, 0x0d041dd676d77eeaULL, \
+		0x4aae673fdd3081deULL, 0xf9802b81de97deb1ULL, \
+		0x4fc0b4dd24d2a599ULL, 0xfceef8632775faf6ULL, \
+		0xbb44828a8c9205c2ULL, 0x086ace348f355aadULL, \
+		0x34107759db5dfbaaULL, 0x873e3be7d8faa4c5ULL, \
+		0xc094410e731d5bf1ULL, 0x73ba0db070ba049eULL, \
+		0xb86133d4dbcc19ffULL, 0x0b4f7f6ad86b4690ULL, \
+		0x4ce50583738cb9a4ULL, 0xffcb493d702be6cbULL, \
+		0xc3b1f050244347ccULL, 0x709fbcee27e418a3ULL, \
+		0x3735c6078c03e797ULL, 0x841b8ab98fa4b8f8ULL, \
+		0xadda7c5f3c4488e3ULL, 0x1ef430e13fe3d78cULL, \
+		0x595e4a08940428b8ULL, 0xea7006b697a377d7ULL, \
+		0xd60abfdbc3cbd6d0ULL, 0x6524f365c06c89bfULL, \
+		0x228e898c6b8b768bULL, 0x91a0c532682c29e4ULL, \
+		0x5a7bfb56c35a3485ULL, 0xe955b7e8c0fd6beaULL, \
+		0xaeffcd016b1a94deULL, 0x1dd181bf68bdcbb1ULL, \
+		0x21ab38d23cd56ab6ULL, 0x9285746c3f7235d9ULL, \
+		0xd52f0e859495caedULL, 0x6601423b97329582ULL, \
+		0xd041dd676d77eeaaULL, 0x636f91d96ed0b1c5ULL, \
+		0x24c5eb30c5374ef1ULL, 0x97eba78ec690119eULL, \
+		0xab911ee392f8b099ULL, 0x18bf525d915feff6ULL, \
+		0x5f1528b43ab810c2ULL, 0xec3b640a391f4fadULL, \
+		0x27e05a6e926952ccULL, 0x94ce16d091ce0da3ULL, \
+		0xd3646c393a29f297ULL, 0x604a2087398eadf8ULL, \
+		0x5c3099ea6de60cffULL, 0xef1ed5546e415390ULL, \
+		0xa8b4afbdc5a6aca4ULL, 0x1b9ae303c601f3cbULL, \
+		0x56ed3e2f9e224471ULL, 0xe5c372919d851b1eULL, \
+		0xa26908783662e42aULL, 0x114744c635c5bb45ULL, \
+		0x2d3dfdab61ad1a42ULL, 0x9e13b115620a452dULL, \
+		0xd9b9cbfcc9edba19ULL, 0x6a978742ca4ae576ULL, \
+		0xa14cb926613cf817ULL, 0x1262f598629ba778ULL, \
+		0x55c88f71c97c584cULL, 0xe6e6c3cfcadb0723ULL, \
+		0xda9c7aa29eb3a624ULL, 0x69b2361c9d14f94bULL, \
+		0x2e184cf536f3067fULL, 0x9d36004b35545910ULL, \
+		0x2b769f17cf112238ULL, 0x9858d3a9ccb67d57ULL, \
+		0xdff2a94067518263ULL, 0x6cdce5fe64f6dd0cULL, \
+		0x50a65c93309e7c0bULL, 0xe388102d33392364ULL, \
+		0xa4226ac498dedc50ULL, 0x170c267a9b79833fULL, \
+		0xdcd7181e300f9e5eULL, 0x6ff954a033a8c131ULL, \
+		0x28532e49984f3e05ULL, 0x9b7d62f79be8616aULL, \
+		0xa707db9acf80c06dULL, 0x14299724cc279f02ULL, \
+		0x5383edcd67c06036ULL, 0xe0ada17364673f59ULL} \
+}
+
+/*
+ * Return the initial CRC seed. Use the value returned from this API as the
+ * "crc" parameter to the first call to add data.
+ */
+static inline uint64_t fman_crc64_init(void)
+{
+	return FMAN_CRC64_ECMA_182.initial;
+}
+
+/* Updates the CRC with arbitrary data */
+static inline uint64_t fman_crc64_update(uint64_t crc,
+					 void *data, unsigned int len)
+{
+	uint8_t *p = data;
+	while (len--)
+		crc = FMAN_CRC64_ECMA_182.table[(crc ^ *(p++)) & 0xff] ^
+				(crc >> 8);
+	return crc;
+}
+
+/* Shorthands for updating the CRC with 8/16/32 bits of data.
+ * IMPORTANT NOTE: the typed "data" arguments should not be mistaken for
+ * host-endian numerical values; the assumption is that these values
+ * contain big-endian (i.e. network byte order) data.
+ */
+static inline uint64_t fman_crc64_compute_32bit(uint32_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_16bit(uint16_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_8bit(uint8_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+
+/*
+ * Finalise the CRC (using 1's complement, i.e. inverting all bits)
+ */
+static inline uint64_t fman_crc64_finish(uint64_t seed)
+{
+	return ~seed;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_FMAN_CRC64_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 07/40] bus/dpaa: enable DPAA IOCTL portal driver
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (5 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
                       ` (33 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Userspace applications interact with DPAA blocks using this IOCTL driver.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile             |   4 +-
 drivers/bus/dpaa/base/qbman/process.c | 331 ++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h    |  88 +++++++++
 drivers/bus/dpaa/include/process.h    | 107 +++++++++++
 4 files changed, 529 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/process.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 9f416fe..b0083c9 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -50,6 +50,7 @@ CFLAGS += -D _GNU_SOURCE
 
 CFLAGS += -I$(RTE_BUS_DPAA)/
 CFLAGS += -I$(RTE_BUS_DPAA)/include
+CFLAGS += -I$(RTE_BUS_DPAA)/base/qbman
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -67,6 +68,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
-	base/fman/netcfg_layer.c
+	base/fman/netcfg_layer.c \
+	base/qbman/process.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
new file mode 100644
index 0000000..b8ec539
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -0,0 +1,331 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <assert.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+
+#include "process.h"
+
+#include <fsl_usd.h>
+
+/* As higher-level drivers will be built on top of this (dma_mem, qbman, ...),
+ * it's preferable that the process driver itself not provide any exported API.
+ * As such, combined with the fact that none of these operations are
+ * performance critical, it is justified to use lazy initialisation, so that's
+ * what the lock is for.
+ */
+static int fd = -1;
+static pthread_mutex_t fd_init_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static int check_fd(void)
+{
+	int ret;
+
+	if (fd >= 0)
+		return 0;
+	ret = pthread_mutex_lock(&fd_init_lock);
+	assert(!ret);
+	/* check again with the lock held */
+	if (fd < 0)
+		fd = open(PROCESS_PATH, O_RDWR);
+	ret = pthread_mutex_unlock(&fd_init_lock);
+	assert(!ret);
+	return (fd >= 0) ? 0 : -ENODEV;
+}
+
+#define DPAA_IOCTL_MAGIC 'u'
+struct dpaa_ioctl_id_alloc {
+	uint32_t base; /* Return value, the start of the allocated range */
+	enum dpaa_id_type id_type; /* what kind of resource(s) to allocate */
+	uint32_t num; /* how many IDs to allocate (and return value) */
+	uint32_t align; /* must be a power of 2, 0 is treated like 1 */
+	int partial; /* whether to allow less than 'num' */
+};
+
+struct dpaa_ioctl_id_release {
+	/* Input */
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+struct dpaa_ioctl_id_reserve {
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+#define DPAA_IOCTL_ID_ALLOC \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x01, struct dpaa_ioctl_id_alloc)
+#define DPAA_IOCTL_ID_RELEASE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x02, struct dpaa_ioctl_id_release)
+#define DPAA_IOCTL_ID_RESERVE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x0A, struct dpaa_ioctl_id_reserve)
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial)
+{
+	struct dpaa_ioctl_id_alloc id = {
+		.id_type = id_type,
+		.num = num,
+		.align = align,
+		.partial = partial
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	ret = ioctl(fd, DPAA_IOCTL_ID_ALLOC, &id);
+	if (ret)
+		return ret;
+	for (ret = 0; ret < (int)id.num; ret++)
+		base[ret] = id.base + ret;
+	return id.num;
+}
+
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_release id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret) {
+		fprintf(stderr, "Process FD failure\n");
+		return;
+	}
+	ret = ioctl(fd, DPAA_IOCTL_ID_RELEASE, &id);
+	if (ret)
+		fprintf(stderr, "Process FD ioctl failure type %d base 0x%x num %d\n",
+			id_type, base, num);
+}
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_reserve id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	return ioctl(fd, DPAA_IOCTL_ID_RESERVE, &id);
+}
+
+/***************************************/
+/* Mapping and using QMan/BMan portals */
+/***************************************/
+
+#define DPAA_IOCTL_PORTAL_MAP \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x07, struct dpaa_ioctl_portal_map)
+#define DPAA_IOCTL_PORTAL_UNMAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x08, struct dpaa_portal_map)
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_MAP, params);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_MAP)");
+		return ret;
+	}
+	return 0;
+}
+
+int process_portal_unmap(struct dpaa_portal_map *map)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_UNMAP, map);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_UNMAP)");
+		return ret;
+	}
+	return 0;
+}
+
+#define DPAA_IOCTL_PORTAL_IRQ_MAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x09, struct dpaa_ioctl_irq_map)
+
+int process_portal_irq_map(int ifd, struct dpaa_ioctl_irq_map *map)
+{
+	map->fd = fd;
+	return ioctl(ifd, DPAA_IOCTL_PORTAL_IRQ_MAP, map);
+}
+
+int process_portal_irq_unmap(int ifd)
+{
+	return close(ifd);
+}
+
+struct dpaa_ioctl_raw_portal {
+	/* inputs */
+	enum dpaa_portal_type type; /* Type of portal to allocate */
+
+	uint8_t enable_stash; /* set to non-zero to turn on stashing */
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map or QBMAN_ANY_PORTAL_IDX
+	 * for don't care.  The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+#define DPAA_IOCTL_ALLOC_RAW_PORTAL \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x0C, struct dpaa_ioctl_raw_portal)
+
+#define DPAA_IOCTL_FREE_RAW_PORTAL \
+	_IOR(DPAA_IOCTL_MAGIC, 0x0D, struct dpaa_ioctl_raw_portal)
+
+static int process_portal_allocate(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_ALLOC_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_ALLOC_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+static int process_portal_free(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_FREE_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_FREE_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.enable_stash = portal->enable_stash;
+	input.cpu = portal->cpu;
+	input.cache = portal->cache;
+	input.window = portal->window;
+	input.sdest = portal->sdest;
+
+	ret = process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena = input.cena;
+	return 0;
+}
+
+int qman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.enable_stash = 0;
+
+	ret = process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena = input.cena;
+	return 0;
+}
+
+int bman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
new file mode 100644
index 0000000..4ff48c6
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_USD_H
+#define __FSL_USD_H
+
+#include <compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define QBMAN_ANY_PORTAL_IDX 0xffffffff
+
+/* Obtain and free raw (uninitialized) portals */
+
+struct dpaa_raw_portal {
+	/* inputs */
+
+	/* set to non-zero to turn on stashing */
+	uint8_t enable_stash;
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map or QBMAN_ANY_PORTAL_IDX
+	 * for don't care.  The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int qman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int bman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_USD_H */
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
new file mode 100644
index 0000000..989ddcd
--- /dev/null
+++ b/drivers/bus/dpaa/include/process.h
@@ -0,0 +1,107 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PROCESS_H
+#define	__PROCESS_H
+
+#include <compat.h>
+
+/* The process device underlies process-wide user/kernel interactions, such as
+ * mapping dma_mem memory and providing accompanying ioctl()s. (This isn't used
+ * for portals, which use one UIO device each.)
+ */
+#define PROCESS_PATH		"/dev/fsl-usdpaa"
+
+/* Allocation of resource IDs uses a generic interface. This enum is used to
+ * distinguish between the type of underlying object being manipulated.
+ */
+enum dpaa_id_type {
+	dpaa_id_fqid,
+	dpaa_id_bpid,
+	dpaa_id_qpool,
+	dpaa_id_cgrid,
+	dpaa_id_max /* <-- not a valid type, represents the number of types */
+};
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial);
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+/* Mapping and using QMan/BMan portals */
+enum dpaa_portal_type {
+	dpaa_portal_qman,
+	dpaa_portal_bman,
+};
+
+struct dpaa_ioctl_portal_map {
+	/* Input parameter, is a qman or bman portal required. */
+	enum dpaa_portal_type type;
+	/* Specifies a specific portal index to map or 0xffffffff
+	 * for don't care.
+	 */
+	uint32_t index;
+
+	/* Return value if the map succeeds, this gives the mapped
+	 * cache-inhibited (cinh) and cache-enabled (cena) addresses.
+	 */
+	struct dpaa_portal_map {
+		void *cinh;
+		void *cena;
+	} addr;
+	/* Qman-specific return values */
+	u16 channel;
+	uint32_t pools;
+};
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params);
+int process_portal_unmap(struct dpaa_portal_map *map);
+
+struct dpaa_ioctl_irq_map {
+	enum dpaa_portal_type type; /* Type of portal to map */
+	int fd; /* File descriptor that contains the portal */
+	void *portal_cinh; /* Cache inhibited area to identify the portal */
+};
+
+int process_portal_irq_map(int fd, struct dpaa_ioctl_irq_map *irq);
+int process_portal_irq_unmap(int fd);
+
+#endif	/* __PROCESS_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 08/40] bus/dpaa: add layer for interrupt emulation using pthread
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (6 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
                       ` (32 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

An interrupt manager is emulated over pthreads. The QBMAN layer
registers handlers with it so that it can be notified of any interrupt
request raised by the DPAA blocks in userspace.
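
For illustration, a minimal sketch of the emulated interface added
here ('my_isr', 'my_irq' and the call site are hypothetical; the
irqreturn_t typedef is assumed to come from the compat layer via
dpaa_sys.h):

    #include "dpaa_sys.h"

    static irqreturn_t my_isr(int irq, void *arg)
    {
        (void)irq;
        (void)arg;
        /* service whatever condition the emulated IRQ signals */
        return 0;
    }

    static int my_irq_example(int my_irq)
    {
        /* registration creates a node on the process-wide list */
        int ret = qbman_request_irq(my_irq, my_isr, 0, "example", NULL);

        if (ret)
            return ret;
        /* dispatch looks up 'my_irq' and calls the registered ISR */
        qbman_invoke_irq(my_irq);
        /* deregistration removes and frees the node */
        return qbman_free_irq(my_irq, NULL);
    }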

Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile              |   3 +-
 drivers/bus/dpaa/base/qbman/dpaa_sys.c | 136 +++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h |  65 ++++++++++++++++
 3 files changed, 203 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index b0083c9..ad6f8c0 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -69,6 +69,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
-	base/qbman/process.c
+	base/qbman/process.c \
+	base/qbman/dpaa_sys.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.c b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
new file mode 100644
index 0000000..0017da5
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
@@ -0,0 +1,136 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <process.h>
+#include "dpaa_sys.h"
+
+struct process_interrupt {
+	int irq;
+	irqreturn_t (*isr)(int irq, void *arg);
+	unsigned long flags;
+	const char *name;
+	void *arg;
+	struct list_head node;
+};
+
+static COMPAT_LIST_HEAD(process_irq_list);
+static pthread_mutex_t process_irq_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static void process_interrupt_install(struct process_interrupt *irq)
+{
+	int ret;
+	/* Add the irq to the end of the list */
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_add_tail(&irq->node, &process_irq_list);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static void process_interrupt_remove(struct process_interrupt *irq)
+{
+	int ret;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_del(&irq->node);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static struct process_interrupt *process_interrupt_find(int irq_num)
+{
+	int ret;
+	struct process_interrupt *i = NULL;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_for_each_entry(i, &process_irq_list, node) {
+		if (i->irq == irq_num)
+			goto done;
+	}
+done:
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+	return i;
+}
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name,
+		      void *arg __maybe_unused)
+{
+	struct process_interrupt *irq_node =
+		kmalloc(sizeof(*irq_node), GFP_KERNEL);
+
+	if (!irq_node)
+		return -ENOMEM;
+	irq_node->irq = irq;
+	irq_node->isr = isr;
+	irq_node->flags = flags;
+	irq_node->name = name;
+	irq_node->arg = arg;
+	process_interrupt_install(irq_node);
+	return 0;
+}
+
+int qbman_free_irq(int irq, __maybe_unused void *arg)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (!irq_node)
+		return -EINVAL;
+	process_interrupt_remove(irq_node);
+	kfree(irq_node);
+	return 0;
+}
+
+/* This is the interface from the platform-specific driver code to invoke
+ * interrupt handlers that have been registered.
+ */
+void qbman_invoke_irq(int irq)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (irq_node)
+		irq_node->isr(irq, irq_node->arg);
+}
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.h b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
new file mode 100644
index 0000000..c53035a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
@@ -0,0 +1,65 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_SYS_H
+#define __DPAA_SYS_H
+
+#include <of.h>
+
+/* For 2-element tables related to cache-inhibited and cache-enabled mappings */
+#define DPAA_PORTAL_CE 0
+#define DPAA_PORTAL_CI 1
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+#define DPAA_ASSERT(x) ASSERT(x)
+#else
+#define DPAA_ASSERT(x)	do {  } while (0)
+#endif
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name, void *arg);
+int qbman_free_irq(int irq, void *arg);
+
+void qbman_invoke_irq(int irq);
+
+#endif /* __DPAA_SYS_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 09/40] bus/dpaa: add routines for managing a RB tree
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (7 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
                       ` (31 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

QMan frame queues are managed via an RB tree data structure.
This patch introduces the routines necessary to implement such a tree.
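
A hypothetical illustration of the generated interface ('struct my_fq',
'fqid' and the 'fqtree' prefix are made-up names; the u32 typedef is
assumed to come from the compat layer):

    #include <dpaa_rbtree.h>

    struct my_fq {
        u32 fqid;               /* ordering/lookup key */
        struct rb_node node;    /* linkage used by the tree */
    };
    /* Generates fqtree_push(), fqtree_del() and fqtree_find() */
    IMPLEMENT_DPAA_RBTREE(fqtree, struct my_fq, node, fqid);

    static struct dpa_rbtree tree = DPAA_RBTREE;

    static void my_fq_example(struct my_fq *fq)
    {
        /* push keeps the list ordered; returns -EBUSY on duplicate key */
        if (fqtree_push(&tree, fq) == 0) {
            struct my_fq *hit = fqtree_find(&tree, fq->fqid);

            if (hit)
                fqtree_del(&tree, hit);
        }
    }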

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/include/dpaa_rbtree.h | 143 +++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h

diff --git a/drivers/bus/dpaa/include/dpaa_rbtree.h b/drivers/bus/dpaa/include/dpaa_rbtree.h
new file mode 100644
index 0000000..f8c9b59
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_rbtree.h
@@ -0,0 +1,143 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_RBTREE_H
+#define __DPAA_RBTREE_H
+
+#include <rte_common.h>
+/************/
+/* RB-trees */
+/************/
+
+/* Linux has a good RB-tree implementation that we can't use (GPL). It also has
+ * a flat/hooked-in interface that virtually requires license-contamination in
+ * order to write a caller-compatible implementation. Instead, I've created an
+ * RB-tree encapsulation on top of linux's primitives (it does some of the work
+ * the client logic would normally do), and this gives us something we can
+ * reimplement on LWE. Unfortunately there are no good+free RB-tree
+ * implementations out there that are license-compatible and "flat" (ie. no
+ * dynamic allocation). I did find a malloc-based one that I could convert, but
+ * that will be a task for later on. For now, LWE's RB-tree is implemented using
+ * an ordered linked-list.
+ *
+ * Note, the only linux-esque type is "struct rb_node", because it's used
+ * statically in the exported header, so it can't be opaque. Our version doesn't
+ * include a "rb_parent_color" field because we're doing linked-list instead of
+ * a true rb-tree.
+ */
+
+struct rb_node {
+	struct rb_node *prev, *next;
+};
+
+struct dpa_rbtree {
+	struct rb_node *head, *tail;
+};
+
+#define DPAA_RBTREE { NULL, NULL }
+static inline void dpa_rbtree_init(struct dpa_rbtree *tree)
+{
+	tree->head = tree->tail = NULL;
+}
+
+#define QMAN_NODE2OBJ(ptr, type, node_field) \
+	(type *)((char *)ptr - offsetof(type, node_field))
+
+#define IMPLEMENT_DPAA_RBTREE(name, type, node_field, val_field) \
+static inline int name##_push(struct dpa_rbtree *tree, type *obj) \
+{ \
+	struct rb_node *node = tree->head; \
+	if (!node) { \
+		tree->head = tree->tail = &obj->node_field; \
+		obj->node_field.prev = obj->node_field.next = NULL; \
+		return 0; \
+	} \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (obj->val_field == item->val_field) \
+			return -EBUSY; \
+		if (obj->val_field < item->val_field) { \
+			if (tree->head == node) \
+				tree->head = &obj->node_field; \
+			else \
+				node->prev->next = &obj->node_field; \
+			obj->node_field.prev = node->prev; \
+			obj->node_field.next = node; \
+			node->prev = &obj->node_field; \
+			return 0; \
+		} \
+		node = node->next; \
+	} \
+	obj->node_field.prev = tree->tail; \
+	obj->node_field.next = NULL; \
+	tree->tail->next = &obj->node_field; \
+	tree->tail = &obj->node_field; \
+	return 0; \
+} \
+static inline void name##_del(struct dpa_rbtree *tree, type *obj) \
+{ \
+	if (tree->head == &obj->node_field) { \
+		if (tree->tail == &obj->node_field) \
+			/* Only item in the list */ \
+			tree->head = tree->tail = NULL; \
+		else { \
+			/* Is the head, next != NULL */ \
+			tree->head = tree->head->next; \
+			tree->head->prev = NULL; \
+		} \
+	} else { \
+		if (tree->tail == &obj->node_field) { \
+			/* Is the tail, prev != NULL */ \
+			tree->tail = tree->tail->prev; \
+			tree->tail->next = NULL; \
+		} else { \
+			/* Is neither the head nor the tail */ \
+			obj->node_field.prev->next = obj->node_field.next; \
+			obj->node_field.next->prev = obj->node_field.prev; \
+		} \
+	} \
+} \
+static inline type *name##_find(struct dpa_rbtree *tree, u32 val) \
+{ \
+	struct rb_node *node = tree->head; \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (val == item->val_field) \
+			return item; \
+		if (val < item->val_field) \
+			return NULL; \
+		node = node->next; \
+	} \
+	return NULL; \
+}
+
+#endif /* __DPAA_RBTREE_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 10/40] bus/dpaa: add QMAN interface driver
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (8 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
                       ` (30 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Queue Manager (QMan) is a hardware queue management block that
allows software and accelerators on the datapath to enqueue and dequeue
frames in order to communicate.

It is part of the QBMAN DPAA block.
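
As a hypothetical sketch of the intended per-thread usage (the worker
function is made up; it assumes qman_global_init() has already
succeeded once in the process, and that the calling thread is affine
to exactly one CPU, as the portal-mapping code requires):

    #include <fsl_usd.h>

    static void *worker(void *arg)
    {
        /* map a QMan portal for this thread */
        if (qman_thread_init())
            return NULL;
        /* ... enqueue/dequeue through the affine portal ... */
        qman_thread_finish();   /* unmap the portal on thread exit */
        return arg;
    }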

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    4 +
 drivers/bus/dpaa/base/qbman/qman_driver.c |  271 +++++++
 drivers/bus/dpaa/base/qbman/qman_priv.h   |  303 +++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 1254 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   13 +
 5 files changed, 1845 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index ad6f8c0..29f01df 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -70,6 +70,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman_driver.c \
 	base/qbman/dpaa_sys.c
 
+# Link Pthread
+LDLIBS += -lpthread
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
new file mode 100644
index 0000000..80dde20
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -0,0 +1,271 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "qman_priv.h"
+#include <sys/ioctl.h>
+#include <rte_branch_prediction.h>
+
+/* Global variable containing revision id (even on non-control plane systems
+ * where CCSR isn't available).
+ */
+u16 qman_ip_rev;
+u16 qm_channel_pool1 = QMAN_CHANNEL_POOL1;
+u16 qm_channel_caam = QMAN_CHANNEL_CAAM;
+u16 qm_channel_pme = QMAN_CHANNEL_PME;
+
+/* CCSR map address to access CCSR-based registers */
+void *qman_ccsr_map;
+/* The qman clock frequency */
+u32 qman_clk;
+
+static __thread int fd = -1;
+static __thread struct qm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_qman
+};
+
+static int fsl_qman_portal_init(uint32_t index, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu\n");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!\n");
+		return -EINVAL;
+	}
+
+	/* Allocate and map a qman portal */
+	map.index = index;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	pcfg.channel = map.channel;
+	pcfg.pools = map.pools;
+	pcfg.index = map.index;
+
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+
+	fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("QMan irq init failed\n");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
+	pcfg.is_shared = is_shared;
+	pcfg.node = NULL;
+	pcfg.irq = fd;
+
+	irq_map.type = dpaa_portal_qman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_qman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int qman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_qman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int qman_thread_finish(void)
+{
+	return fsl_qman_portal_finish();
+}
+
+void qman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int qman_global_init(void)
+{
+	const struct device_node *dt_node;
+	int ret = 0;
+	size_t lenp;
+	const u32 *chanid;
+	static int ccsr_map_fd;
+	const uint32_t *qman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	const u32 *clk;
+
+	static int done;
+
+	if (done)
+		return -EBUSY;
+
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman-portal");
+	if (!dt_node) {
+		pr_err("No qman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,qman-portal-1.0.0"))
+		pr_err("QMan rev1.0 on P4080 rev1 is not supported!\n");
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.1") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.1.0"))
+		qman_ip_rev = QMAN_REV11;
+	else if	(of_device_is_compatible(dt_node, "fsl,qman-portal-1.2") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.2.0"))
+		qman_ip_rev = QMAN_REV12;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-2.0.0"))
+		qman_ip_rev = QMAN_REV20;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.1"))
+		qman_ip_rev = QMAN_REV30;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.3"))
+		qman_ip_rev = QMAN_REV31;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.1"))
+		qman_ip_rev = QMAN_REV32;
+	else
+		qman_ip_rev = QMAN_REV11;
+
+	if (!qman_ip_rev) {
+		pr_err("Unknown qman portal version\n");
+		return -ENODEV;
+	}
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30) {
+		qm_channel_pool1 = QMAN_CHANNEL_POOL1_REV3;
+		qm_channel_caam = QMAN_CHANNEL_CAAM_REV3;
+		qm_channel_pme = QMAN_CHANNEL_PME_REV3;
+	}
+
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,pool-channel-range");
+	if (!dt_node) {
+		pr_err("No qman pool channel range available\n");
+		return -ENODEV;
+	}
+	chanid = of_get_property(dt_node, "fsl,pool-channel-range", &lenp);
+	if (!chanid) {
+		pr_err("Can not get pool-channel-range property\n");
+		return -EINVAL;
+	}
+
+	/* get ccsr base */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman");
+	if (!dt_node) {
+		pr_err("No qman device node available\n");
+		return -ENODEV;
+	}
+	qman_addr = of_get_address(dt_node, 0, &regs_size, NULL);
+	if (!qman_addr) {
+		pr_err("of_get_address cannot return qman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(dt_node, qman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Can not open /dev/mem for qman ccsr map\n");
+		return ccsr_map_fd;
+	}
+
+	qman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (qman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map qman ccsr base\n");
+		return -EINVAL;
+	}
+
+	clk = of_get_property(dt_node, "clock-frequency", NULL);
+	if (!clk)
+		pr_warn("Can't find Qman clock frequency\n");
+	else
+		qman_clk = be32_to_cpu(*clk);
+
+	return ret;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
new file mode 100644
index 0000000..4a11e40
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -0,0 +1,303 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __QMAN_PRIV_H
+#define __QMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_qman.h>
+
+/* Congestion Groups */
+/*
+ * This wrapper represents a bit-array for the state of the 256 QMan congestion
+ * groups. It is also used as a *mask* for congestion groups, eg. so we ignore
+ * those that don't concern us. We harness the structure and accessor details
+ * already used in the management command to query congestion groups.
+ */
+struct qman_cgrs {
+	struct __qm_mcr_querycongestion q;
+};
+
+static inline void qman_cgrs_init(struct qman_cgrs *c)
+{
+	memset(c, 0, sizeof(*c));
+}
+
+static inline void qman_cgrs_fill(struct qman_cgrs *c)
+{
+	memset(c, 0xff, sizeof(*c));
+}
+
+static inline int qman_cgrs_get(struct qman_cgrs *c, int num)
+{
+	return QM_MCR_QUERYCONGESTION(&c->q, num);
+}
+
+static inline void qman_cgrs_set(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] |= (0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline void qman_cgrs_unset(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] &= ~(0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline int qman_cgrs_next(struct qman_cgrs *c, int num)
+{
+	while ((++num < (int)__CGR_NUM) && !qman_cgrs_get(c, num))
+		;
+	return num;
+}
+
+static inline void qman_cgrs_cp(struct qman_cgrs *dest,
+				const struct qman_cgrs *src)
+{
+	memcpy(dest, src, sizeof(*dest));
+}
+
+static inline void qman_cgrs_and(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) & *(_b++);
+}
+
+static inline void qman_cgrs_xor(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) ^ *(_b++);
+}
+
+/* used by CCSR and portal interrupt code */
+enum qm_isr_reg {
+	qm_isr_status = 0,
+	qm_isr_enable = 1,
+	qm_isr_disable = 2,
+	qm_isr_inhibit = 3
+};
+
+struct qm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	struct device_node *node;
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* If the caller enables DQRR stashing (and thus wishes to operate the
+	 * portal from only one cpu), this is the logical CPU that the portal
+	 * will stash to. Whether stashing is enabled or not, this setting is
+	 * also used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.).
+	 */
+	int is_shared;
+	/* The portal's dedicated channel id, use this value for initialising
+	 * frame queues to target this portal when scheduled.
+	 */
+	u16 channel;
+	/* A mask of which pool channels this portal has dequeue access to
+	 * (using QM_SDQCR_CHANNELS_POOL(n) for the bitmask).
+	 */
+	u32 pools;
+
+};
+
+/* Revision info (for errata and feature handling) */
+#define QMAN_REV11 0x0101
+#define QMAN_REV12 0x0102
+#define QMAN_REV20 0x0200
+#define QMAN_REV30 0x0300
+#define QMAN_REV31 0x0301
+#define QMAN_REV32 0x0302
+extern u16 qman_ip_rev; /* 0 if uninitialised, otherwise QMAN_REVx */
+extern u32 qman_clk;
+
+int qm_set_wpm(int wpm);
+int qm_get_wpm(int *wpm);
+
+struct qman_portal *qman_create_affine_portal(
+			const struct qm_portal_config *config,
+			const struct qman_cgrs *cgrs);
+const struct qm_portal_config *qman_destroy_affine_portal(void);
+
+struct qm_portal_config *qm_get_unused_portal(void);
+struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
+
+void qm_put_unused_portal(struct qm_portal_config *pcfg);
+void qm_set_liodns(struct qm_portal_config *pcfg);
+
+/* This CGR feature is supported by h/w and required by unit-tests and the
+ * debugfs hooks, so is implemented in the driver. However it allows an explicit
+ * corruption of h/w fields by s/w that are usually incorruptible (because the
+ * counters are usually maintained entirely within h/w). As such, we declare
+ * this API internally.
+ */
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result);
+
+/*   QMan s/w corenet portal, low-level i/face	 */
+
+/*
+ * Choose one SOURCE. Choose one COUNT. Choose one
+ * dequeue TYPE. Choose TOKEN (8-bit).
+ * If SOURCE == CHANNELS,
+ *   Choose CHANNELS_DEDICATED and/or CHANNELS_POOL(n).
+ *   You can choose DEDICATED_PRECEDENCE if the portal channel should have
+ *   priority.
+ * If SOURCE == SPECIFICWQ,
+ *     Either select the work-queue ID with SPECIFICWQ_WQ(), or select the
+ *     channel (SPECIFICWQ_DEDICATED or SPECIFICWQ_POOL()) and specify the
+ *     work-queue priority (0-7) with SPECIFICWQ_WQ() - either way, you get the
+ *     same value.
+ */
+#define QM_SDQCR_SOURCE_CHANNELS	0x0
+#define QM_SDQCR_SOURCE_SPECIFICWQ	0x40000000
+#define QM_SDQCR_COUNT_EXACT1		0x0
+#define QM_SDQCR_COUNT_UPTO3		0x20000000
+#define QM_SDQCR_DEDICATED_PRECEDENCE	0x10000000
+#define QM_SDQCR_TYPE_MASK		0x03000000
+#define QM_SDQCR_TYPE_NULL		0x0
+#define QM_SDQCR_TYPE_PRIO_QOS		0x01000000
+#define QM_SDQCR_TYPE_ACTIVE_QOS	0x02000000
+#define QM_SDQCR_TYPE_ACTIVE		0x03000000
+#define QM_SDQCR_TOKEN_MASK		0x00ff0000
+#define QM_SDQCR_TOKEN_SET(v)		(((v) & 0xff) << 16)
+#define QM_SDQCR_TOKEN_GET(v)		(((v) >> 16) & 0xff)
+#define QM_SDQCR_CHANNELS_DEDICATED	0x00008000
+#define QM_SDQCR_SPECIFICWQ_MASK	0x000000f7
+#define QM_SDQCR_SPECIFICWQ_DEDICATED	0x00000000
+#define QM_SDQCR_SPECIFICWQ_POOL(n)	((n) << 4)
+#define QM_SDQCR_SPECIFICWQ_WQ(n)	(n)
+
+#define QM_VDQCR_FQID_MASK		0x00ffffff
+#define QM_VDQCR_FQID(n)		((n) & QM_VDQCR_FQID_MASK)
+
+#define QM_EQCR_VERB_VBIT		0x80
+#define QM_EQCR_VERB_CMD_MASK		0x61	/* but only one value; */
+#define QM_EQCR_VERB_CMD_ENQUEUE	0x01
+#define QM_EQCR_VERB_COLOUR_MASK	0x18	/* 4 possible values; */
+#define QM_EQCR_VERB_COLOUR_GREEN	0x00
+#define QM_EQCR_VERB_COLOUR_YELLOW	0x08
+#define QM_EQCR_VERB_COLOUR_RED		0x10
+#define QM_EQCR_VERB_COLOUR_OVERRIDE	0x18
+#define QM_EQCR_VERB_INTERRUPT		0x04	/* on command consumption */
+#define QM_EQCR_VERB_ORP		0x02	/* enable order restoration */
+#define QM_EQCR_DCA_ENABLE		0x80
+#define QM_EQCR_DCA_PARK		0x40
+#define QM_EQCR_DCA_IDXMASK		0x0f	/* "DQRR::idx" goes here */
+#define QM_EQCR_SEQNUM_NESN		0x8000	/* Advance NESN */
+#define QM_EQCR_SEQNUM_NLIS		0x4000	/* More fragments to come */
+#define QM_EQCR_SEQNUM_SEQMASK		0x3fff	/* sequence number goes here */
+#define QM_EQCR_FQID_NULL		0	/* eg. for an ORP seqnum hole */
+
+#define QM_MCC_VERB_VBIT		0x80
+#define QM_MCC_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_MCC_VERB_INITFQ_PARKED	0x40
+#define QM_MCC_VERB_INITFQ_SCHED	0x41
+#define QM_MCC_VERB_QUERYFQ		0x44
+#define QM_MCC_VERB_QUERYFQ_NP		0x45	/* "non-programmable" fields */
+#define QM_MCC_VERB_QUERYWQ		0x46
+#define QM_MCC_VERB_QUERYWQ_DEDICATED	0x47
+#define QM_MCC_VERB_ALTER_SCHED		0x48	/* Schedule FQ */
+#define QM_MCC_VERB_ALTER_FE		0x49	/* Force Eligible FQ */
+#define QM_MCC_VERB_ALTER_RETIRE	0x4a	/* Retire FQ */
+#define QM_MCC_VERB_ALTER_OOS		0x4b	/* Take FQ out of service */
+#define QM_MCC_VERB_ALTER_FQXON		0x4d	/* FQ XON */
+#define QM_MCC_VERB_ALTER_FQXOFF	0x4e	/* FQ XOFF */
+#define QM_MCC_VERB_INITCGR		0x50
+#define QM_MCC_VERB_MODIFYCGR		0x51
+#define QM_MCC_VERB_CGRTESTWRITE	0x52
+#define QM_MCC_VERB_QUERYCGR		0x58
+#define QM_MCC_VERB_QUERYCONGESTION	0x59
+
+/*
+ * Used by all portal interrupt registers except 'inhibit'
+ * Channels with frame availability
+ */
+#define QM_PIRQ_DQAVAIL	0x0000ffff
+
+/* The DQAVAIL interrupt fields break down into these bits; */
+#define QM_DQAVAIL_PORTAL	0x8000		/* Portal channel */
+#define QM_DQAVAIL_POOL(n)	(0x8000 >> (n))	/* Pool channel, n==[1..15] */
+#define QM_DQAVAIL_MASK		0xffff
+/* This mask contains all the "irqsource" bits visible to API users */
+#define QM_PIRQ_VISIBLE	(QM_PIRQ_SLOW | QM_PIRQ_DQRI)
+
+/* These are qm_<reg>_<verb>(). So for example, qm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define qm_isr_status_read(qm)		__qm_isr_read(qm, qm_isr_status)
+#define qm_isr_status_clear(qm, m)	__qm_isr_write(qm, qm_isr_status, m)
+#define qm_isr_enable_read(qm)		__qm_isr_read(qm, qm_isr_enable)
+#define qm_isr_enable_write(qm, v)	__qm_isr_write(qm, qm_isr_enable, v)
+#define qm_isr_disable_read(qm)		__qm_isr_read(qm, qm_isr_disable)
+#define qm_isr_disable_write(qm, v)	__qm_isr_write(qm, qm_isr_disable, v)
+/* TODO: unfortunate name-clash here, reword? */
+#define qm_isr_inhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 1)
+#define qm_isr_uninhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 0)
+
+#define QMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+
+#endif /* __QMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
new file mode 100644
index 0000000..784fe60
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -0,0 +1,1254 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_QMAN_H
+#define __FSL_QMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <dpaa_rbtree.h>
+
+/* Last updated for v00.800 of the BG */
+
+/* Hardware constants */
+#define QM_CHANNEL_SWPORTAL0 0
+#define QMAN_CHANNEL_POOL1 0x21
+#define QMAN_CHANNEL_CAAM 0x80
+#define QMAN_CHANNEL_PME 0xa0
+#define QMAN_CHANNEL_POOL1_REV3 0x401
+#define QMAN_CHANNEL_CAAM_REV3 0x840
+#define QMAN_CHANNEL_PME_REV3 0x860
+extern u16 qm_channel_pool1;
+extern u16 qm_channel_caam;
+extern u16 qm_channel_pme;
+enum qm_dc_portal {
+	qm_dc_portal_fman0 = 0,
+	qm_dc_portal_fman1 = 1,
+	qm_dc_portal_caam = 2,
+	qm_dc_portal_pme = 3
+};
+
+/* Portal processing (interrupt) sources */
+#define QM_PIRQ_CCSCI	0x00200000	/* CEETM Congestion State Change */
+#define QM_PIRQ_CSCI	0x00100000	/* Congestion State Change */
+#define QM_PIRQ_EQCI	0x00080000	/* Enqueue Command Committed */
+#define QM_PIRQ_EQRI	0x00040000	/* EQCR Ring (below threshold) */
+#define QM_PIRQ_DQRI	0x00020000	/* DQRR Ring (non-empty) */
+#define QM_PIRQ_MRI	0x00010000	/* MR Ring (non-empty) */
+/*
+ * This mask contains all the interrupt sources that need handling except DQRI,
+ * ie. that if present should trigger slow-path processing.
+ */
+#define QM_PIRQ_SLOW	(QM_PIRQ_CSCI | QM_PIRQ_EQCI | QM_PIRQ_EQRI | \
+			QM_PIRQ_MRI | QM_PIRQ_CCSCI)
+
+/* For qman_static_dequeue_*** APIs */
+#define QM_SDQCR_CHANNELS_POOL_MASK	0x00007fff
+/* for n in [1,15] */
+#define QM_SDQCR_CHANNELS_POOL(n)	(0x00008000 >> (n))
+/* for conversion from n of qm_channel */
+static inline u32 QM_SDQCR_CHANNELS_POOL_CONV(u16 channel)
+{
+	return QM_SDQCR_CHANNELS_POOL(channel + 1 - qm_channel_pool1);
+}
+
+/* For qman_volatile_dequeue(); Choose one PRECEDENCE. EXACT is optional. Use
+ * NUMFRAMES(n) (6-bit) or NUMFRAMES_TILLEMPTY to fill in the frame-count. Use
+ * FQID(n) to fill in the frame queue ID.
+ */
+#define QM_VDQCR_PRECEDENCE_VDQCR	0x0
+#define QM_VDQCR_PRECEDENCE_SDQCR	0x80000000
+#define QM_VDQCR_EXACT			0x40000000
+#define QM_VDQCR_NUMFRAMES_MASK		0x3f000000
+#define QM_VDQCR_NUMFRAMES_SET(n)	(((n) & 0x3f) << 24)
+#define QM_VDQCR_NUMFRAMES_GET(n)	(((n) >> 24) & 0x3f)
+#define QM_VDQCR_NUMFRAMES_TILLEMPTY	QM_VDQCR_NUMFRAMES_SET(0)
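+/*
+ * Illustrative only: a volatile-dequeue command requesting exactly 3 frames
+ * at SDQCR precedence would be composed as
+ *   u32 vdqcr = QM_VDQCR_PRECEDENCE_SDQCR | QM_VDQCR_EXACT |
+ *               QM_VDQCR_NUMFRAMES_SET(3);
+ * with the target frame queue filled in via the FQID() form noted above.
+ */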
+
+/* --- QMan data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct qm_eqcr_entry;	/* EQCR (EnQueue Command Ring) entries */
+struct qm_dqrr_entry;	/* DQRR (DeQueue Response Ring) entries */
+struct qm_mr_entry;	/* MR (Message Ring) entries */
+struct qm_mc_command;	/* MC (Management Command) command */
+struct qm_mc_result;	/* MC result */
+
+#define QM_FD_FORMAT_SG		0x4
+#define QM_FD_FORMAT_LONG	0x2
+#define QM_FD_FORMAT_COMPOUND	0x1
+enum qm_fd_format {
+	/*
+	 * 'contig' implies a contiguous buffer, whereas 'sg' implies a
+	 * scatter-gather table. 'big' implies a 29-bit length with no offset
+	 * field, otherwise length is 20-bit and offset is 9-bit. 'compound'
+	 * implies a s/g-like table, where each entry itself represents a frame
+	 * (contiguous or scatter-gather) and the 29-bit "length" is
+	 * interpreted purely for congestion calculations, ie. a "congestion
+	 * weight".
+	 */
+	qm_fd_contig = 0,
+	qm_fd_contig_big = QM_FD_FORMAT_LONG,
+	qm_fd_sg = QM_FD_FORMAT_SG,
+	qm_fd_sg_big = QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG,
+	qm_fd_compound = QM_FD_FORMAT_COMPOUND
+};
+
+/* Capitalised versions are un-typed but can be used in static expressions */
+#define QM_FD_CONTIG	0
+#define QM_FD_CONTIG_BIG QM_FD_FORMAT_LONG
+#define QM_FD_SG	QM_FD_FORMAT_SG
+#define QM_FD_SG_BIG	(QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG)
+#define QM_FD_COMPOUND	QM_FD_FORMAT_COMPOUND
+
+/* "Frame Descriptor (FD)" */
+struct qm_fd {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 dd:2;	/* dynamic debug */
+			u8 liodn_offset:6;
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 eliodn_offset:4;
+			u8 __reserved:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u8 liodn_offset:6;
+			u8 dd:2;	/* dynamic debug */
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 __reserved:4;
+			u8 eliodn_offset:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#endif
+		};
+		struct {
+			u64 __notaddress:24;
+			/* More efficient address accessor */
+			u64 addr:40;
+		};
+		u64 opaque_addr;
+	};
+	/* The 'format' field indicates the interpretation of the remaining 29
+	 * bits of the 32-bit word. For packing reasons, it is duplicated in the
+	 * other union elements. Note, union'd structs are difficult to use with
+	 * static initialisation under gcc, in which case use the "opaque" form
+	 * with one of the macros.
+	 */
+	union {
+		/* For easier/faster copying of this part of the fd (eg. from a
+		 * DQRR entry to an EQCR entry) copy 'opaque'
+		 */
+		u32 opaque;
+		/* If 'format' is _contig or _sg, 20b length and 9b offset */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format format:3;
+			u16 offset:9;
+			u32 length20:20;
+#else
+			u32 length20:20;
+			u16 offset:9;
+			enum qm_fd_format format:3;
+#endif
+		};
+		/* If 'format' is _contig_big or _sg_big, 29b length */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format1:3;
+			u32 length29:29;
+#else
+			u32 length29:29;
+			enum qm_fd_format _format1:3;
+#endif
+		};
+		/* If 'format' is _compound, 29b "congestion weight" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format2:3;
+			u32 cong_weight:29;
+#else
+			u32 cong_weight:29;
+			enum qm_fd_format _format2:3;
+#endif
+		};
+	};
+	union {
+		u32 cmd;
+		u32 status;
+	};
+} __attribute__((aligned(8)));
+#define QM_FD_DD_NULL		0x00
+#define QM_FD_PID_MASK		0x3f
+static inline u64 qm_fd_addr_get64(const struct qm_fd *fd)
+{
+	return fd->addr;
+}
+
+static inline dma_addr_t qm_fd_addr(const struct qm_fd *fd)
+{
+	return (dma_addr_t)fd->addr;
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_fd_addr_set64(fd, v) \
+	do { \
+		struct qm_fd *__fd931 = (fd); \
+		__fd931->addr = v; \
+	} while (0)
+
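+/*
+ * Illustrative only (hypothetical values): a contiguous frame of 'len' bytes
+ * starting 'off' bytes into the buffer at 'buf_addr' could be described as;
+ *
+ *   struct qm_fd fd;
+ *   memset(&fd, 0, sizeof(fd));
+ *   qm_fd_addr_set64(&fd, buf_addr);
+ *   fd.format = qm_fd_contig;
+ *   fd.offset = off;
+ *   fd.length20 = len;
+ */
+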
+/* Scatter/Gather table entry */
+struct qm_sg_entry {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1[3];
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[3];
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:24;
+			u64 addr:40;
+#else
+			u64 addr:40;
+			u64 __notaddress:24;
+#endif
+		};
+		u64 opaque;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 extension:1;	/* Extension bit */
+			u32 final:1;		/* Final bit */
+			u32 length:30;
+#else
+			u32 length:30;
+			u32 final:1;		/* Final bit */
+			u32 extension:1;	/* Extension bit */
+#endif
+		};
+		u32 val;
+	};
+	u8 __reserved2;
+	u8 bpid;
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved3:3;
+			u16 offset:13;
+#else
+			u16 offset:13;
+			u16 __reserved3:3;
+#endif
+		};
+		u16 val_off;
+	};
+} __packed;
+static inline u64 qm_sg_entry_get64(const struct qm_sg_entry *sg)
+{
+	return sg->addr;
+}
+
+static inline dma_addr_t qm_sg_addr(const struct qm_sg_entry *sg)
+{
+	return (dma_addr_t)sg->addr;
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_sg_entry_set64(sg, v) \
+	do { \
+		struct qm_sg_entry *__sg931 = (sg); \
+		__sg931->addr = v; \
+	} while (0)
+
+/* See 1.5.8.1: "Enqueue Command" */
+struct qm_eqcr_entry {
+	u8 __dont_write_directly__verb;
+	u8 dca;
+	u16 seqnum;
+	u32 orp;	/* 24-bit */
+	u32 fqid;	/* 24-bit */
+	u32 tag;
+	struct qm_fd fd;
+	u8 __reserved3[32];
+} __packed;
+
+/* "Frame Dequeue Response" */
+struct qm_dqrr_entry {
+	u8 verb;
+	u8 stat;
+	u16 seqnum;	/* 15-bit */
+	u8 tok;
+	u8 __reserved2[3];
+	u32 fqid;	/* 24-bit */
+	u32 contextB;
+	struct qm_fd fd;
+	u8 __reserved4[32];
+};
+
+#define QM_DQRR_VERB_VBIT		0x80
+#define QM_DQRR_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_DQRR_VERB_FRAME_DEQUEUE	0x60	/* "this format" */
+#define QM_DQRR_STAT_FQ_EMPTY		0x80	/* FQ empty */
+#define QM_DQRR_STAT_FQ_HELDACTIVE	0x40	/* FQ held active */
+#define QM_DQRR_STAT_FQ_FORCEELIGIBLE	0x20	/* FQ was force-eligible'd */
+#define QM_DQRR_STAT_FD_VALID		0x10	/* has a non-NULL FD */
+#define QM_DQRR_STAT_UNSCHEDULED	0x02	/* Unscheduled dequeue */
+#define QM_DQRR_STAT_DQCR_EXPIRED	0x01	/* VDQCR or PDQCR expired */
+
+/* "ERN Message Response" */
+/* "FQ State Change Notification" */
+struct qm_mr_entry {
+	u8 verb;
+	union {
+		struct {
+			u8 dca;
+			u16 seqnum;
+			u8 rc;		/* Rejection Code */
+			u32 orp:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed ern;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+			u8 __reserved1:4;
+			enum qm_dc_portal portal:2;
+#else
+			enum qm_dc_portal portal:2;
+			u8 __reserved1:4;
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+#endif
+			u16 __reserved2;
+			u8 rc;		/* Rejection Code */
+			u32 __reserved3:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed dcern;
+		struct {
+			u8 fqs;		/* Frame Queue Status */
+			u8 __reserved1[6];
+			u32 fqid;	/* 24-bit */
+			u32 contextB;
+			u8 __reserved2[16];
+		} __packed fq;		/* FQRN/FQRNI/FQRL/FQPN */
+	};
+	u8 __reserved2[32];
+} __packed;
+#define QM_MR_VERB_VBIT			0x80
+/*
+ * ERNs originating from direct-connect portals ("dcern") use 0x20 as a verb
+ * which would be invalid as a s/w enqueue verb. A s/w ERN can be distinguished
+ * from the other MR types by noting if the 0x20 bit is unset.
+ */
+#define QM_MR_VERB_TYPE_MASK		0x27
+#define QM_MR_VERB_DC_ERN		0x20
+#define QM_MR_VERB_FQRN			0x21
+#define QM_MR_VERB_FQRNI		0x22
+#define QM_MR_VERB_FQRL			0x23
+#define QM_MR_VERB_FQPN			0x24
+#define QM_MR_RC_MASK			0xf0	/* contains one of; */
+#define QM_MR_RC_CGR_TAILDROP		0x00
+#define QM_MR_RC_WRED			0x10
+#define QM_MR_RC_ERROR			0x20
+#define QM_MR_RC_ORPWINDOW_EARLY	0x30
+#define QM_MR_RC_ORPWINDOW_LATE		0x40
+#define QM_MR_RC_FQ_TAILDROP		0x50
+#define QM_MR_RC_ORPWINDOW_RETIRED	0x60
+#define QM_MR_RC_ORP_ZERO		0x70
+#define QM_MR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+#define QM_MR_DCERN_COLOUR_GREEN	0x00
+#define QM_MR_DCERN_COLOUR_YELLOW	0x01
+#define QM_MR_DCERN_COLOUR_RED		0x02
+#define QM_MR_DCERN_COLOUR_OVERRIDE	0x03
+/*
+ * An identical structure of FQD fields is present in the "Init FQ" command
+ * and the "Query FQ" result; it's suctioned out into the "struct qm_fqd"
+ * type. Within that, the 'stashing' and 'taildrop' pieces are also factored
+ * out; the latter has two inlines to assist with converting to/from the
+ * mant+exp representation.
+ */
+struct qm_fqd_stashing {
+	/* See QM_STASHING_EXCL_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 exclusive;
+	u8 __reserved1:2;
+	/* Numbers of cachelines */
+	u8 annotation_cl:2;
+	u8 data_cl:2;
+	u8 context_cl:2;
+#else
+	u8 context_cl:2;
+	u8 data_cl:2;
+	u8 annotation_cl:2;
+	u8 __reserved1:2;
+	u8 exclusive;
+#endif
+} __packed;
+struct qm_fqd_taildrop {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved1:3;
+	u16 mant:8;
+	u16 exp:5;
+#else
+	u16 exp:5;
+	u16 mant:8;
+	u16 __reserved1:3;
+#endif
+} __packed;
+struct qm_fqd_oac {
+	/* "Overhead Accounting Control", see QM_OAC_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 oac:2; /* "Overhead Accounting Control" */
+	u8 __reserved1:6;
+#else
+	u8 __reserved1:6;
+	u8 oac:2; /* "Overhead Accounting Control" */
+#endif
+	/* Two's-complement value (-128 to +127) */
+	signed char oal; /* "Overhead Accounting Length" */
+} __packed;
+struct qm_fqd {
+	union {
+		u8 orpc;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1:2;
+			u8 orprws:3;
+			u8 oa:1;
+			u8 olws:2;
+#else
+			u8 olws:2;
+			u8 oa:1;
+			u8 orprws:3;
+			u8 __reserved1:2;
+#endif
+		} __packed;
+	};
+	u8 cgid;
+	u16 fq_ctrl;	/* See QM_FQCTRL_<...> */
+	union {
+		u16 dest_wq;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 channel:13; /* qm_channel */
+			u16 wq:3;
+#else
+			u16 wq:3;
+			u16 channel:13; /* qm_channel */
+#endif
+		} __packed dest;
+	};
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved2:1;
+	u16 ics_cred:15;
+#else
+	u16 ics_cred:15;
+	u16 __reserved2:1;
+#endif
+	/*
+	 * For "Initialize Frame Queue" commands, the write-enable mask
+	 * determines whether 'td' or 'oac_init' is observed. For query
+	 * commands, this field is always 'td', and 'oac_query' (below) reflects
+	 * the Overhead ACcounting values.
+	 */
+	union {
+		u16 opaque_td;
+		struct qm_fqd_taildrop td;
+		struct qm_fqd_oac oac_init;
+	};
+	u32 context_b;
+	union {
+		/* Treat it as 64-bit opaque */
+		u64 opaque;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 hi;
+			u32 lo;
+#else
+			u32 lo;
+			u32 hi;
+#endif
+		};
+		/* Treat it as s/w portal stashing config */
+		/* see "FQD Context_A field used for [...]" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			struct qm_fqd_stashing stashing;
+			/*
+			 * 48-bit address of FQ context to
+			 * stash, must be cacheline-aligned
+			 */
+			u16 context_hi;
+			u32 context_lo;
+#else
+			u32 context_lo;
+			u16 context_hi;
+			struct qm_fqd_stashing stashing;
+#endif
+		} __packed;
+	} context_a;
+	struct qm_fqd_oac oac_query;
+} __packed;
+/* 64-bit converters for context_hi/lo */
+static inline u64 qm_fqd_stashing_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.context_hi << 32) |
+		(u64)fqd->context_a.context_lo;
+}
+
+static inline dma_addr_t qm_fqd_stashing_addr(const struct qm_fqd *fqd)
+{
+	return (dma_addr_t)qm_fqd_stashing_get64(fqd);
+}
+
+static inline u64 qm_fqd_context_a_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.hi << 32) |
+		(u64)fqd->context_a.lo;
+}
+
+static inline void qm_fqd_stashing_set64(struct qm_fqd *fqd, u64 addr)
+{
+	fqd->context_a.context_hi = upper_32_bits(addr);
+	fqd->context_a.context_lo = lower_32_bits(addr);
+}
+
+static inline void qm_fqd_context_a_set64(struct qm_fqd *fqd, u64 addr)
+{
+	fqd->context_a.hi = upper_32_bits(addr);
+	fqd->context_a.lo = lower_32_bits(addr);
+}
+
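+/*
+ * Illustrative only: to stash one cacheline each of annotation and data, plus
+ * a cacheline-aligned FQ context at 'ctx_addr' (caller-provided), one might
+ * do;
+ *
+ *   fqd->context_a.stashing.annotation_cl = 1;
+ *   fqd->context_a.stashing.data_cl = 1;
+ *   qm_fqd_stashing_set64(fqd, ctx_addr);
+ *
+ * remembering to set QM_FQCTRL_CTXASTASHING in 'fq_ctrl' (and the matching
+ * write-enable bits in the INITFQ command).
+ */
+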
+/* convert a threshold value into mant+exp representation */
+static inline int qm_fqd_taildrop_set(struct qm_fqd_taildrop *td, u32 val,
+				      int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	if (val > 0xe0000000)
+		return -ERANGE;
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	td->exp = e;
+	td->mant = val;
+	return 0;
+}
+
+/* and the other direction */
+static inline u32 qm_fqd_taildrop_get(const struct qm_fqd_taildrop *td)
+{
+	return (u32)td->mant << td->exp;
+}
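+
+/*
+ * For example (illustrative): qm_fqd_taildrop_set(td, 1000, 0) yields
+ * mant = 250, exp = 2 (250 << 2 == 1000 exactly), while a non-representable
+ * value such as 1001 with roundup = 1 yields mant = 251, exp = 2, which
+ * qm_fqd_taildrop_get() reads back as 1004 - the next representable value
+ * not below the request.
+ */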
+
+/* See "Frame Queue Descriptor (FQD)" */
+/* Frame Queue Descriptor (FQD) field 'fq_ctrl' uses these constants */
+#define QM_FQCTRL_MASK		0x07ff	/* 'fq_ctrl' flags; */
+#define QM_FQCTRL_CGE		0x0400	/* Congestion Group Enable */
+#define QM_FQCTRL_TDE		0x0200	/* Tail-Drop Enable */
+#define QM_FQCTRL_ORP		0x0100	/* ORP Enable */
+#define QM_FQCTRL_CTXASTASHING	0x0080	/* Context-A stashing */
+#define QM_FQCTRL_CPCSTASH	0x0040	/* CPC Stash Enable */
+#define QM_FQCTRL_FORCESFDR	0x0008	/* High-priority SFDRs */
+#define QM_FQCTRL_AVOIDBLOCK	0x0004	/* Don't block active */
+#define QM_FQCTRL_HOLDACTIVE	0x0002	/* Hold active in portal */
+#define QM_FQCTRL_PREFERINCACHE	0x0001	/* Aggressively cache FQD */
+#define QM_FQCTRL_LOCKINCACHE	QM_FQCTRL_PREFERINCACHE /* older naming */
+
+/* See "FQD Context_A field used for [...] */
+/* Frame Queue Descriptor (FQD) field 'CONTEXT_A' uses these constants */
+#define QM_STASHING_EXCL_ANNOTATION	0x04
+#define QM_STASHING_EXCL_DATA		0x02
+#define QM_STASHING_EXCL_CTX		0x01
+
+/* See "Intra Class Scheduling" */
+/* FQD field 'OAC' (Overhead ACcounting) uses these constants */
+#define QM_OAC_ICS		0x2 /* Accounting for Intra-Class Scheduling */
+#define QM_OAC_CG		0x1 /* Accounting for Congestion Groups */
+
+/*
+ * This struct represents the 32-bit "WR_PARM_[GYR]" parameters in CGR fields
+ * and associated commands/responses. The WRED parameters are calculated from
+ * these fields as follows;
+ *   MaxTH = MA * (2 ^ Mn)
+ *   Slope = SA / (2 ^ Sn)
+ *    MaxP = 4 * (Pn + 1)
+ */
+struct qm_cgr_wr_parm {
+	union {
+		u32 word;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 MA:8;
+			u32 Mn:5;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Sn:6;
+			u32 Pn:6;
+#else
+			u32 Pn:6;
+			u32 Sn:6;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Mn:5;
+			u32 MA:8;
+#endif
+		} __packed;
+	};
+} __packed;
+/*
+ * This struct represents the 13-bit "CS_THRES" CGR field. In the corresponding
+ * management commands, this is padded to a 16-bit structure field, so that's
+ * how we represent it here. The congestion state threshold is calculated from
+ * these fields as follows;
+ *   CS threshold = TA * (2 ^ Tn)
+ */
+struct qm_cgr_cs_thres {
+	union {
+		u16 hword;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved:3;
+			u16 TA:8;
+			u16 Tn:5;
+#else
+			u16 Tn:5;
+			u16 TA:8;
+			u16 __reserved:3;
+#endif
+		} __packed;
+	};
+} __packed;
+/*
+ * This identical structure of CGR fields is present in the "Init/Modify CGR"
+ * commands and the "Query CGR" result. It's suctioned out here into its own
+ * struct.
+ */
+struct __qm_mc_cgr {
+	struct qm_cgr_wr_parm wr_parm_g;
+	struct qm_cgr_wr_parm wr_parm_y;
+	struct qm_cgr_wr_parm wr_parm_r;
+	u8 wr_en_g;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_y;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_r;	/* boolean, use QM_CGR_EN */
+	u8 cscn_en;	/* boolean, use QM_CGR_EN */
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 cscn_targ_upd_ctrl; /* use QM_CSCN_TARG_UDP_ */
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+#else
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+			u16 cscn_targ_upd_ctrl; /* use QM_CSCN_TARG_UDP_ */
+#endif
+		};
+		u32 cscn_targ;	/* use QM_CGR_TARG_* */
+	};
+	u8 cstd_en;	/* boolean, use QM_CGR_EN */
+	u8 cs;		/* boolean, only used in query response */
+	union {
+		struct qm_cgr_cs_thres cs_thres;
+		/* use qm_cgr_cs_thres_set64() */
+		u16 __cs_thres;
+	};
+	u8 mode;	/* QMAN_CGR_MODE_FRAME not supported in rev1.0 */
+} __packed;
+#define QM_CGR_EN		0x01 /* For wr_en_*, cscn_en, cstd_en */
+#define QM_CGR_TARG_UDP_CTRL_WRITE_BIT	0x8000 /* value written to portal bit */
+#define QM_CGR_TARG_UDP_CTRL_DCP	0x4000 /* 0: SWP, 1: DCP */
+#define QM_CGR_TARG_PORTAL(n)	(0x80000000 >> (n)) /* s/w portal, 0-9 */
+#define QM_CGR_TARG_FMAN0	0x00200000 /* direct-connect portal: fman0 */
+#define QM_CGR_TARG_FMAN1	0x00100000 /*			   : fman1 */
+/* Convert CGR thresholds to/from "cs_thres" format */
+static inline u64 qm_cgr_cs_thres_get64(const struct qm_cgr_cs_thres *th)
+{
+	return (u64)th->TA << th->Tn;
+}
+
+static inline int qm_cgr_cs_thres_set64(struct qm_cgr_cs_thres *th, u64 val,
+					int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	th->Tn = e;
+	th->TA = val;
+	return 0;
+}
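+
+/*
+ * For example (illustrative): qm_cgr_cs_thres_set64(th, 49152, 0) gives
+ * TA = 192, Tn = 8, which qm_cgr_cs_thres_get64() reads back as exactly
+ * 49152 (192 << 8).
+ */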
+
+/* See 1.5.8.5.1: "Initialize FQ" */
+/* See 1.5.8.5.2: "Query FQ" */
+/* See 1.5.8.5.3: "Query FQ Non-Programmable Fields" */
+/* See 1.5.8.5.4: "Alter FQ State Commands " */
+/* See 1.5.8.6.1: "Initialize/Modify CGR" */
+/* See 1.5.8.6.2: "CGR Test Write" */
+/* See 1.5.8.6.3: "Query CGR" */
+/* See 1.5.8.6.4: "Query Congestion Group State" */
+struct qm_mcc_initfq {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	u32 fqid;	/* 24-bit */
+	u16 count;	/* Initialises 'count+1' FQDs */
+	struct qm_fqd fqd; /* the FQD fields go here */
+	u8 __reserved3[30];
+} __packed;
+struct qm_mcc_queryfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_queryfq_np {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_alterfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2;
+	u8 count;	/* number of consecutive FQID */
+	u8 __reserved3[10];
+	u32 context_b;	/* frame queue context b */
+	u8 __reserved4[40];
+} __packed;
+struct qm_mcc_initcgr {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	struct __qm_mc_cgr cgr;	/* CGR fields */
+	u8 __reserved2[2];
+	u8 cgid;
+	u8 __reserved4[32];
+} __packed;
+struct qm_mcc_cgrtestwrite {
+	u8 __reserved1[2];
+	u8 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u8 __reserved2[23];
+	u8 cgid;
+	u8 __reserved3[32];
+} __packed;
+struct qm_mcc_querycgr {
+	u8 __reserved1[30];
+	u8 cgid;
+	u8 __reserved2[32];
+} __packed;
+struct qm_mcc_querycongestion {
+	u8 __reserved[63];
+} __packed;
+struct qm_mcc_querywq {
+	u8 __reserved;
+	/* select channel if verb != QUERYWQ_DEDICATED */
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved1:3;
+#else
+			u16 __reserved1:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved2[60];
+} __packed;
+
+struct qm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct qm_mcc_initfq initfq;
+		struct qm_mcc_queryfq queryfq;
+		struct qm_mcc_queryfq_np queryfq_np;
+		struct qm_mcc_alterfq alterfq;
+		struct qm_mcc_initcgr initcgr;
+		struct qm_mcc_cgrtestwrite cgrtestwrite;
+		struct qm_mcc_querycgr querycgr;
+		struct qm_mcc_querycongestion querycongestion;
+		struct qm_mcc_querywq querywq;
+	};
+} __packed;
+
+/* INITFQ-specific flags */
+#define QM_INITFQ_WE_MASK		0x01ff	/* 'Write Enable' flags; */
+#define QM_INITFQ_WE_OAC		0x0100
+#define QM_INITFQ_WE_ORPC		0x0080
+#define QM_INITFQ_WE_CGID		0x0040
+#define QM_INITFQ_WE_FQCTRL		0x0020
+#define QM_INITFQ_WE_DESTWQ		0x0010
+#define QM_INITFQ_WE_ICSCRED		0x0008
+#define QM_INITFQ_WE_TDTHRESH		0x0004
+#define QM_INITFQ_WE_CONTEXTB		0x0002
+#define QM_INITFQ_WE_CONTEXTA		0x0001
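+/*
+ * Illustrative only: an INITFQ that programs the destination work queue, the
+ * FQ controls and context_a would enable exactly those fields;
+ *
+ *   mcc->initfq.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+ *                         QM_INITFQ_WE_CONTEXTA;
+ */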
+/* INITCGR/MODIFYCGR-specific flags */
+#define QM_CGR_WE_MASK			0x07ff	/* 'Write Enable Mask'; */
+#define QM_CGR_WE_WR_PARM_G		0x0400
+#define QM_CGR_WE_WR_PARM_Y		0x0200
+#define QM_CGR_WE_WR_PARM_R		0x0100
+#define QM_CGR_WE_WR_EN_G		0x0080
+#define QM_CGR_WE_WR_EN_Y		0x0040
+#define QM_CGR_WE_WR_EN_R		0x0020
+#define QM_CGR_WE_CSCN_EN		0x0010
+#define QM_CGR_WE_CSCN_TARG		0x0008
+#define QM_CGR_WE_CSTD_EN		0x0004
+#define QM_CGR_WE_CS_THRES		0x0002
+#define QM_CGR_WE_MODE			0x0001
+
+struct qm_mcr_initfq {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_queryfq {
+	u8 __reserved1[8];
+	struct qm_fqd fqd;	/* the FQD fields are here */
+	u8 __reserved2[30];
+} __packed;
+struct qm_mcr_queryfq_np {
+	u8 __reserved1;
+	u8 state;	/* QM_MCR_NP_STATE_*** */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 __reserved2;
+	u32 fqd_link:24;
+	u16 __reserved3:2;
+	u16 odp_seq:14;
+	u16 __reserved4:2;
+	u16 orp_nesn:14;
+	u16 __reserved5:1;
+	u16 orp_ea_hseq:15;
+	u16 __reserved6:1;
+	u16 orp_ea_tseq:15;
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+	u8 __reserved11[5];
+	u8 __reserved12:7;
+	u8 is:1;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#else
+	u8 __reserved2;
+	u32 fqd_link:24;
+
+	u16 odp_seq:14;
+	u16 __reserved3:2;
+
+	u16 orp_nesn:14;
+	u16 __reserved4:2;
+
+	u16 orp_ea_hseq:15;
+	u16 __reserved5:1;
+
+	u16 orp_ea_tseq:15;
+	u16 __reserved6:1;
+
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+
+	u8 __reserved11[5];
+	u8 is:1;
+	u8 __reserved12:7;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#endif
+} __packed;
+
+struct qm_mcr_alterfq {
+	u8 fqs;		/* Frame Queue Status */
+	u8 __reserved1[61];
+} __packed;
+struct qm_mcr_initcgr {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_cgrtestwrite {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	u32 __reserved3:24;
+	u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u32 __reserved4:24;
+	u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+	u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+	u16 lgt;	/* Last Group Tick */
+	u16 wr_prob_g;
+	u16 wr_prob_y;
+	u16 wr_prob_r;
+	u8 __reserved5[8];
+} __packed;
+struct qm_mcr_querycgr {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved3:24;
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 __reserved3:24;
+#endif
+		};
+		u64 i_bcnt;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved4:24;
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 __reserved4:24;
+#endif
+		};
+		u64 a_bcnt;
+	};
+	union {
+		u32 cscn_targ_swp[4];
+		u8 __reserved5[16];
+	};
+} __packed;
+
+struct __qm_mcr_querycongestion {
+	u32 state[8];
+};
+
+struct qm_mcr_querycongestion {
+	u8 __reserved[30];
+	/* Access this struct using QM_MCR_QUERYCONGESTION() */
+	struct __qm_mcr_querycongestion state;
+} __packed;
+struct qm_mcr_querywq {
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved:3;
+#else
+			u16 __reserved:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved[28];
+	u32 wq_len[8];
+} __packed;
+
+struct qm_mc_result {
+	u8 verb;
+	u8 result;
+	union {
+		struct qm_mcr_initfq initfq;
+		struct qm_mcr_queryfq queryfq;
+		struct qm_mcr_queryfq_np queryfq_np;
+		struct qm_mcr_alterfq alterfq;
+		struct qm_mcr_initcgr initcgr;
+		struct qm_mcr_cgrtestwrite cgrtestwrite;
+		struct qm_mcr_querycgr querycgr;
+		struct qm_mcr_querycongestion querycongestion;
+		struct qm_mcr_querywq querywq;
+	};
+} __packed;
+
+#define QM_MCR_VERB_RRID		0x80
+#define QM_MCR_VERB_MASK		QM_MCC_VERB_MASK
+#define QM_MCR_VERB_INITFQ_PARKED	QM_MCC_VERB_INITFQ_PARKED
+#define QM_MCR_VERB_INITFQ_SCHED	QM_MCC_VERB_INITFQ_SCHED
+#define QM_MCR_VERB_QUERYFQ		QM_MCC_VERB_QUERYFQ
+#define QM_MCR_VERB_QUERYFQ_NP		QM_MCC_VERB_QUERYFQ_NP
+#define QM_MCR_VERB_QUERYWQ		QM_MCC_VERB_QUERYWQ
+#define QM_MCR_VERB_QUERYWQ_DEDICATED	QM_MCC_VERB_QUERYWQ_DEDICATED
+#define QM_MCR_VERB_ALTER_SCHED		QM_MCC_VERB_ALTER_SCHED
+#define QM_MCR_VERB_ALTER_FE		QM_MCC_VERB_ALTER_FE
+#define QM_MCR_VERB_ALTER_RETIRE	QM_MCC_VERB_ALTER_RETIRE
+#define QM_MCR_VERB_ALTER_OOS		QM_MCC_VERB_ALTER_OOS
+#define QM_MCR_RESULT_NULL		0x00
+#define QM_MCR_RESULT_OK		0xf0
+#define QM_MCR_RESULT_ERR_FQID		0xf1
+#define QM_MCR_RESULT_ERR_FQSTATE	0xf2
+#define QM_MCR_RESULT_ERR_NOTEMPTY	0xf3	/* OOS fails if FQ is !empty */
+#define QM_MCR_RESULT_ERR_BADCHANNEL	0xf4
+#define QM_MCR_RESULT_PENDING		0xf8
+#define QM_MCR_RESULT_ERR_BADCOMMAND	0xff
+#define QM_MCR_NP_STATE_FE		0x10
+#define QM_MCR_NP_STATE_R		0x08
+#define QM_MCR_NP_STATE_MASK		0x07	/* Reads FQD::STATE; */
+#define QM_MCR_NP_STATE_OOS		0x00
+#define QM_MCR_NP_STATE_RETIRED		0x01
+#define QM_MCR_NP_STATE_TEN_SCHED	0x02
+#define QM_MCR_NP_STATE_TRU_SCHED	0x03
+#define QM_MCR_NP_STATE_PARKED		0x04
+#define QM_MCR_NP_STATE_ACTIVE		0x05
+#define QM_MCR_NP_PTR_MASK		0x07ff	/* for RA[12] & OD[123] */
+#define QM_MCR_NP_RA1_NRA(v)		(((v) >> 14) & 0x3)	/* FQD::NRA */
+#define QM_MCR_NP_RA2_IT(v)		(((v) >> 14) & 0x1)	/* FQD::IT */
+#define QM_MCR_NP_OD1_NOD(v)		(((v) >> 14) & 0x3)	/* FQD::NOD */
+#define QM_MCR_NP_OD3_NPC(v)		(((v) >> 14) & 0x3)	/* FQD::NPC */
+#define QM_MCR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MCR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+/* This extracts the state for congestion group 'n' from a query response.
+ * Eg.
+ *   u8 cgr = [...];
+ *   struct qm_mc_result *res = [...];
+ *   printf("congestion group %d congestion state: %d\n", cgr,
+ *       QM_MCR_QUERYCONGESTION(&res->querycongestion.state, cgr));
+ */
+#define __CGR_WORD(num)		((num) >> 5)
+#define __CGR_SHIFT(num)	((num) & 0x1f)
+#define __CGR_NUM		(sizeof(struct __qm_mcr_querycongestion) << 3)
+static inline int QM_MCR_QUERYCONGESTION(struct __qm_mcr_querycongestion *p,
+					 u8 cgr)
+{
+	return p->state[__CGR_WORD(cgr)] & (0x80000000 >> __CGR_SHIFT(cgr));
+}
+
+	/* Portal and Frame Queues */
+/* Represents a managed portal */
+struct qman_portal;
+
+/*
+ * This object type represents QMan frame queue descriptors (FQD), it is
+ * cacheline-aligned, and initialised by qman_create_fq(). The structure is
+ * defined further down.
+ */
+struct qman_fq;
+
+/*
+ * This object type represents a QMan congestion group, it is defined further
+ * down.
+ */
+struct qman_cgr;
+
+/*
+ * This enum, and the callback type that returns it, are used when handling
+ * dequeued frames via DQRR. Note that for "null" callbacks registered with the
+ * portal object (for handling dequeues that do not demux because context_b is
+ * NULL), the return value *MUST* be qman_cb_dqrr_consume.
+ */
+enum qman_cb_dqrr_result {
+	/* DQRR entry can be consumed */
+	qman_cb_dqrr_consume,
+	/* Like _consume, but requests parking - FQ must be held-active */
+	qman_cb_dqrr_park,
+	/* Does not consume, for DCA mode only. This allows out-of-order
+	 * consumes by explicit calls to qman_dca() and/or the use of implicit
+	 * DCA via EQCR entries.
+	 */
+	qman_cb_dqrr_defer,
+	/*
+	 * Stop processing without consuming this ring entry. Exits the current
+	 * qman_p_poll_dqrr() or interrupt-handling, as appropriate. If within
+	 * an interrupt handler, the callback would typically call
+	 * qman_irqsource_remove(QM_PIRQ_DQRI) before returning this value,
+	 * otherwise the interrupt will reassert immediately.
+	 */
+	qman_cb_dqrr_stop,
+	/* Like qman_cb_dqrr_stop, but consumes the current entry. */
+	qman_cb_dqrr_consume_stop
+};
+
+typedef enum qman_cb_dqrr_result (*qman_cb_dqrr)(struct qman_portal *qm,
+					struct qman_fq *fq,
+					const struct qm_dqrr_entry *dqrr);
+
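+/*
+ * Illustrative only: a minimal demux callback, assuming the frame can be
+ * processed inline and the ring entry consumed immediately ('process_fd' is
+ * a hypothetical helper);
+ *
+ *   static enum qman_cb_dqrr_result
+ *   my_dqrr_cb(struct qman_portal *qm, struct qman_fq *fq,
+ *              const struct qm_dqrr_entry *dqrr)
+ *   {
+ *       process_fd(fq, &dqrr->fd);
+ *       return qman_cb_dqrr_consume;
+ *   }
+ */
+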
+/*
+ * This callback type is used when handling ERNs, FQRNs and FQRLs via MR. They
+ * are always consumed after the callback returns.
+ */
+typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
+				const struct qm_mr_entry *msg);
+
+/* This callback type is used when handling DCP ERNs */
+typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
+				const struct qm_mr_entry *msg);
+/*
+ * s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
+ * held-active + held-suspended are just "sched". Things like "retired" will not
+ * be assumed until it is complete (ie. QMAN_FQ_STATE_CHANGING is set until
+ * then, to indicate it's completing and to gate attempts to retry the retire
+ * command). Note, park commands do not set QMAN_FQ_STATE_CHANGING because it's
+ * technically impossible in the case of enqueue DCAs (which refer to DQRR ring
+ * index rather than the FQ that ring entry corresponds to), so repeated park
+ * commands are allowed (if you're silly enough to try) but won't change FQ
+ * state, and the resulting park notifications move FQs from "sched" to
+ * "parked".
+ */
+enum qman_fq_state {
+	qman_fq_state_oos,
+	qman_fq_state_parked,
+	qman_fq_state_sched,
+	qman_fq_state_retired
+};
+
+/*
+ * Frame queue objects (struct qman_fq) are stored within memory passed to
+ * qman_create_fq(), as this allows stashing of caller-provided demux callback
+ * pointers at no extra cost to stashing of (driver-internal) FQ state. If the
+ * caller wishes to add per-FQ state and have it benefit from dequeue-stashing,
+ * they should;
+ *
+ * (a) extend the qman_fq structure with their state; eg.
+ *
+ *     // myfq is allocated and driver_fq callbacks filled in;
+ *     struct my_fq {
+ *	   struct qman_fq base;
+ *	   int an_extra_field;
+ *	   [ ... add other fields to be associated with each FQ ...]
+ *     } *myfq = some_my_fq_allocator();
+ *     struct qman_fq *fq = qman_create_fq(fqid, flags, &myfq->base);
+ *
+ *     // in a dequeue callback, access extra fields from 'fq' via a cast;
+ *     struct my_fq *myfq = (struct my_fq *)fq;
+ *     do_something_with(myfq->an_extra_field);
+ *     [...]
+ *
+ * (b) when and if configuring the FQ for context stashing, specify how ever
+ *     many cachelines are required to stash 'struct my_fq', to accelerate not
+ *     only the QMan driver but the callback as well.
+ */
+
+struct qman_fq_cb {
+	qman_cb_dqrr dqrr;	/* for dequeued frames */
+	qman_cb_mr ern;		/* for s/w ERNs */
+	qman_cb_mr fqs;		/* frame-queue state changes*/
+};
+
+struct qman_fq {
+	/* Caller of qman_create_fq() provides these demux callbacks */
+	struct qman_fq_cb cb;
+	/*
+	 * These are internal to the driver, don't touch. In particular, they
+	 * may change, be removed, or extended (so you shouldn't rely on
+	 * sizeof(qman_fq) being a constant).
+	 */
+	spinlock_t fqlock;
+	u32 fqid;
+	/* DPDK Interface */
+	void *dpaa_intf;
+
+	volatile unsigned long flags;
+	enum qman_fq_state state;
+	int cgr_groupid;
+	struct rb_node node;
+};
+
+/*
+ * This callback type is used when handling congestion group entry/exit.
+ * 'congested' is non-zero on congestion-entry, and zero on congestion-exit.
+ */
+typedef void (*qman_cb_cgr)(struct qman_portal *qm,
+			    struct qman_cgr *cgr, int congested);
+
+struct qman_cgr {
+	/* Set these prior to qman_create_cgr() */
+	u32 cgrid; /* 0..255, but u32 to allow specials like -1, 256, etc.*/
+	qman_cb_cgr cb;
+	/* These are private to the driver */
+	u16 chan; /* portal channel this object is created on */
+	struct list_head node;
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_QMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 4ff48c6..b0d953f 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -47,6 +47,10 @@
 extern "C" {
 #endif
 
+/* Thread-entry/exit hooks; */
+int qman_thread_init(void);
+int qman_thread_finish(void);
+
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
 /* Obtain and free raw (uninitialized) portals */
@@ -81,6 +85,15 @@ int qman_free_raw_portal(struct dpaa_raw_portal *portal);
 int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
+/* Post-process interrupts. NB, the kernel IRQ handler disables the interrupt
+ * line before notifying us, and this post-processing re-enables it once
+ * processing is complete. As such, it is essential to call this before going
+ * into another blocking read/select/poll.
+ */
+void qman_thread_irq(void);
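+
+/* Illustrative only (not part of this API): an interrupt-driven portal
+ * thread would typically run
+ *
+ *   qman_thread_init();
+ *   for (;;) {
+ *       block_on_portal_fd();   (hypothetical: poll()/select() on the fd)
+ *       ... process dequeues and notifications ...
+ *       qman_thread_irq();      (re-enables the interrupt line)
+ *   }
+ *   qman_thread_finish();
+ */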
+
+/* Global setup */
+int qman_global_init(void);
 #ifdef __cplusplus
 }
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 11/40] bus/dpaa: add QMan driver core routines
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (9 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
                       ` (29 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    2 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |   88 ++
 drivers/bus/dpaa/base/qbman/qman.c        | 2402 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h        |  888 +++++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c |   12 +
 drivers/bus/dpaa/include/fsl_qman.h       |  755 +++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |    1 +
 7 files changed, 4148 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 29f01df..ba87386 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -70,7 +70,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
+	base/qbman/dpaa_alloc.c \
 	base/qbman/dpaa_sys.c
 
 # Link Pthread
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
new file mode 100644
index 0000000..690576a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2009-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "dpaa_sys.h"
+#include <process.h>
+#include <fsl_qman.h>
+
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_fqid, result, count, align, partial);
+}
+
+void qman_release_fqid_range(u32 fqid, u32 count)
+{
+	process_release(dpaa_id_fqid, fqid, count);
+}
+
+int qman_reserve_fqid_range(u32 fqid, unsigned int count)
+{
+	return process_reserve(dpaa_id_fqid, fqid, count);
+}
+
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_qpool, result, count, align, partial);
+}
+
+void qman_release_pool_range(u32 pool, u32 count)
+{
+	process_release(dpaa_id_qpool, pool, count);
+}
+
+int qman_reserve_pool_range(u32 pool, u32 count)
+{
+	return process_reserve(dpaa_id_qpool, pool, count);
+}
+
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_cgrid, result, count, align, partial);
+}
+
+void qman_release_cgrid_range(u32 cgrid, u32 count)
+{
+	process_release(dpaa_id_cgrid, cgrid, count);
+}
+
+int qman_reserve_cgrid_range(u32 cgrid, u32 count)
+{
+	return process_reserve(dpaa_id_cgrid, cgrid, count);
+}
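+
+/*
+ * Illustrative usage sketch (not part of the API contract), assuming the
+ * allocator returns the number of IDs obtained on success and a negative
+ * value on failure;
+ *
+ *   u32 fqid;
+ *   if (qman_alloc_fqid_range(&fqid, 2, 2, 0) == 2) {
+ *       ... use fqid and fqid + 1 ...
+ *       qman_release_fqid_range(fqid, 2);
+ *   }
+ */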
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
new file mode 100644
index 0000000..494d54c
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -0,0 +1,2402 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define DQRR_MAXFILL	15
+#define EQCR_ITHRESH	4	/* if EQCR congests, interrupt threshold */
+#define IRQNAME		"QMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "QMan portal %d" */
+/* maximum number of DQRR entries to process in qman_poll() */
+#define FSL_QMAN_POLL_LIMIT 8
+
+/* Lock/unlock frame queues, subject to the "LOCKED" flag. This is about
+ * inter-processor locking only. Note, FQLOCK() is always called either under a
+ * local_irq_save() or from interrupt context - hence there's no need for irq
+ * protection (and indeed, attempting to nest irq-protection doesn't work, as
+ * the "irq en/disable" machinery isn't recursive...).
+ */
+#define FQLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_lock(&__fq478->fqlock); \
+	} while (0)
+#define FQUNLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_unlock(&__fq478->fqlock); \
+	} while (0)
+
+static inline void fq_set(struct qman_fq *fq, u32 mask)
+{
+	dpaa_set_bits(mask, &fq->flags);
+}
+
+static inline void fq_clear(struct qman_fq *fq, u32 mask)
+{
+	dpaa_clear_bits(mask, &fq->flags);
+}
+
+static inline int fq_isset(struct qman_fq *fq, u32 mask)
+{
+	return fq->flags & mask;
+}
+
+static inline int fq_isclear(struct qman_fq *fq, u32 mask)
+{
+	return !(fq->flags & mask);
+}
+
+struct qman_portal {
+	struct qm_portal p;
+	/* PORTAL_BITS_*** - dynamic, strictly internal */
+	unsigned long bits;
+	/* interrupt sources processed by portal_isr(), configurable */
+	unsigned long irq_sources;
+	u32 use_eqcr_ci_stashing;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* only 1 volatile dequeue at a time */
+	struct qman_fq *vdqcr_owned;
+	u32 sdqcr;
+	int dqrr_disable_ref;
+	/* A portal-specific handler for DCP ERNs. If this is NULL, the global
+	 * handler is called instead.
+	 */
+	qman_cb_dc_ern cb_dc_ern;
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct qm_portal_config *config;
+	struct dpa_rbtree retire_table;
+	char irqname[MAX_IRQNAME];
+	/* 2-element array. cgrs[0] is mask, cgrs[1] is snapshot. */
+	struct qman_cgrs *cgrs;
+	/* linked-list of CSCN handlers. */
+	struct list_head cgr_cbs;
+	/* list lock */
+	spinlock_t cgr_lock;
+	/* track if memory was allocated by the driver */
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	/* Keep a shadow copy of the DQRR on LE systems as the SW needs to
+	 * do byte swaps of DQRR read-only memory. The first entry must be
+	 * aligned to 2 ** 10 so that DQRR index calculations can be based on
+	 * the shadow copy address (6 bits for address shift + 4 bits for the
+	 * DQRR size).
+	 */
+	struct qm_dqrr_entry shadow_dqrr[QM_DQRR_SIZE]
+		    __attribute__((aligned(1024)));
+#endif
+};
+
+/* Global handler for DCP ERNs. Used when the portal receiving the message does
+ * not have a portal-specific handler.
+ */
+static qman_cb_dc_ern cb_dc_ern;
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static u16 affine_channels[NR_CPUS];
+static RTE_DEFINE_PER_LCORE(struct qman_portal, qman_affine_portal);
+
+static inline struct qman_portal *get_affine_portal(void)
+{
+	return &RTE_PER_LCORE(qman_affine_portal);
+}
+
+/* This gives a FQID->FQ lookup to cover the fact that we can't directly demux
+ * retirement notifications (the fact they are sometimes h/w-consumed means that
+ * contextB isn't always a s/w demux - and as we can't know which case it is
+ * when looking at the notification, we have to use the slow lookup for all of
+ * them). NB, it's possible to have multiple FQ objects refer to the same FQID
+ * (though at most one of them should be the consumer), so this table isn't for
+ * all FQs - FQs are added when retirement commands are issued, and removed when
+ * they complete, which also massively reduces the size of this table.
+ */
+IMPLEMENT_DPAA_RBTREE(fqtree, struct qman_fq, node, fqid);
+/*
+ * This is what everything can wait on, even if it migrates to a different cpu
+ * to the one whose affine portal it is waiting on.
+ */
+static DECLARE_WAIT_QUEUE_HEAD(affine_queue);
+
+static inline int table_push_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	int ret = fqtree_push(&p->retire_table, fq);
+
+	if (ret)
+		pr_err("ERROR: double FQ-retirement %d\n", fq->fqid);
+	return ret;
+}
+
+static inline void table_del_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	fqtree_del(&p->retire_table, fq);
+}
+
+static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
+{
+	return fqtree_find(&p->retire_table, fqid);
+}
+
+static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to HW format */
+	fqd->fq_ctrl = cpu_to_be16(fqd->fq_ctrl);
+	fqd->dest_wq = cpu_to_be16(fqd->dest_wq);
+	fqd->ics_cred = cpu_to_be16(fqd->ics_cred);
+	fqd->context_b = cpu_to_be32(fqd->context_b);
+	fqd->context_a.opaque = cpu_to_be64(fqd->context_a.opaque);
+	fqd->opaque_td = cpu_to_be16(fqd->opaque_td);
+}
+
+static inline void hw_fqd_to_cpu(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to CPU format */
+	fqd->fq_ctrl = be16_to_cpu(fqd->fq_ctrl);
+	fqd->dest_wq = be16_to_cpu(fqd->dest_wq);
+	fqd->ics_cred = be16_to_cpu(fqd->ics_cred);
+	fqd->context_b = be32_to_cpu(fqd->context_b);
+	fqd->context_a.opaque = be64_to_cpu(fqd->context_a.opaque);
+}
+
+static inline void cpu_to_hw_fd(struct qm_fd *fd)
+{
+	fd->addr = cpu_to_be40(fd->addr);
+	fd->status = cpu_to_be32(fd->status);
+	fd->opaque = cpu_to_be32(fd->opaque);
+}
+
+static inline void hw_fd_to_cpu(struct qm_fd *fd)
+{
+	fd->addr = be40_to_cpu(fd->addr);
+	fd->status = be32_to_cpu(fd->status);
+	fd->opaque = be32_to_cpu(fd->opaque);
+}
+
+/* In the case that slow- and fast-path handling are both done by qman_poll()
+ * (ie. because there is no interrupt handling), we ought to balance how often
+ * we do the fast-path poll versus the slow-path poll. We'll use two decrementer
+ * sources, so we call the fast poll 'n' times before calling the slow poll
+ * once. The idle decrementer constant is used when the last slow-poll detected
+ * no work to do, and the busy decrementer constant when the last slow-poll had
+ * work to do.
+ */
+#define SLOW_POLL_IDLE   1000
+#define SLOW_POLL_BUSY   10
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is);
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit);
+
+/* Portal interrupt handler */
+static irqreturn_t portal_isr(__always_unused int irq, void *ptr)
+{
+	struct qman_portal *p = ptr;
+	/*
+	 * The CSCI/CCSCI source is cleared inside __poll_portal_slow(), because
+	 * it could race against a Query Congestion State command also given
+	 * as part of the handling of this interrupt source. We mustn't
+	 * clear it a second time in this top-level function.
+	 */
+	u32 clear = QM_DQAVAIL_MASK | (p->irq_sources &
+		~(QM_PIRQ_CSCI | QM_PIRQ_CCSCI));
+	u32 is = qm_isr_status_read(&p->p) & p->irq_sources;
+	/* DQRR-handling if it's interrupt-driven */
+	if (is & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+	/* Handling of anything else that's interrupt-driven */
+	clear |= __poll_portal_slow(p, is);
+	qm_isr_status_clear(&p->p, clear);
+	return IRQ_HANDLED;
+}
+
+/* This inner version is used privately by qman_create_affine_portal(), as well
+ * as by the exported qman_stop_dequeues().
+ */
+static inline void qman_stop_dequeues_ex(struct qman_portal *p)
+{
+	if (!(p->dqrr_disable_ref++))
+		qm_dqrr_set_maxfill(&p->p, 0);
+}
+
+static int drain_mr_fqrni(struct qm_portal *p)
+{
+	const struct qm_mr_entry *msg;
+loop:
+	msg = qm_mr_current(p);
+	if (!msg) {
+		/*
+		 * if MR was full and h/w had other FQRNI entries to produce, we
+		 * need to allow it time to produce those entries once the
+		 * existing entries are consumed. A worst-case situation
+		 * (fully-loaded system) means h/w sequencers may have to do 3-4
+		 * other things before servicing the portal's MR pump, each of
+		 * which (if slow) may take ~50 qman cycles (which is ~200
+		 * processor cycles). So rounding up and then multiplying this
+		 * worst-case estimate by a factor of 10, just to be
+		 * ultra-paranoid, goes as high as 10,000 cycles. NB, we consume
+		 * one entry at a time, so h/w has an opportunity to produce new
+		 * entries well before the ring has been fully consumed, so
+		 * we're being *really* paranoid here.
+		 */
+		u64 now, then = mfatb();
+
+		do {
+			now = mfatb();
+		} while ((then + 10000) > now);
+		msg = qm_mr_current(p);
+		if (!msg)
+			return 0;
+	}
+	if ((msg->verb & QM_MR_VERB_TYPE_MASK) != QM_MR_VERB_FQRNI) {
+		/* We aren't draining anything but FQRNIs */
+		pr_err("Found verb 0x%x in MR\n", msg->verb);
+		return -1;
+	}
+	qm_mr_next(p);
+	qm_mr_cci_consume(p, 1);
+	goto loop;
+}
+
+static inline int qm_eqcr_init(struct qm_portal *portal,
+			       enum qm_eqcr_pmode pmode,
+			       unsigned int eq_stash_thresh,
+			       int eq_stash_prio)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u32 cfg;
+	u8 pi;
+
+	eqcr->ring = portal->addr.ce + QM_CL_EQCR;
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	eqcr->cursor = eqcr->ring + pi;
+	eqcr->vbit = (qm_in(EQCR_PI_CINH) & QM_EQCR_SIZE) ?
+			QM_EQCR_VERB_VBIT : 0;
+	eqcr->available = QM_EQCR_SIZE - 1 -
+			qm_cyc_diff(QM_EQCR_SIZE, eqcr->ci, pi);
+	eqcr->ithresh = qm_in(EQCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+	eqcr->pmode = pmode;
+#endif
+	cfg = (qm_in(CFG) & 0x00ffffff) |
+		(eq_stash_thresh << 28) | /* QCSP_CFG: EST */
+		(eq_stash_prio << 26)	| /* QCSP_CFG: EP */
+		((pmode & 0x3) << 24);	/* QCSP_CFG::EPM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_eqcr_finish(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 pi, ci;
+	u32 cfg;
+
+	/*
+	 * Disable EQCI stashing because the QMan only
+	 * presents the value it previously stashed to
+	 * maintain coherency.  Setting the stash threshold
+	 * to 1 then 0 ensures that QMan has resynchronized
+	 * its internal copy so that the portal is clean
+	 * when it is reinitialized in the future
+	 */
+	cfg = (qm_in(CFG) & 0x0fffffff) |
+		(1 << 28); /* QCSP_CFG: EST */
+	qm_out(CFG, cfg);
+	cfg &= 0x0fffffff; /* stash threshold = 0 */
+	qm_out(CFG, cfg);
+
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+
+	/* Refresh EQCR CI cache value */
+	qm_cl_invalidate(EQCR_CI);
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (pi != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("losing uncommitted EQCR entries\n");
+	if (ci != eqcr->ci)
+		pr_crit("missing existing EQCR completions\n");
+	if (eqcr->ci != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("EQCR destroyed unquiesced\n");
+}
+
+static inline int qm_dqrr_init(struct qm_portal *portal,
+			__maybe_unused const struct qm_portal_config *config,
+			enum qm_dqrr_dmode dmode,
+			__maybe_unused enum qm_dqrr_pmode pmode,
+			enum qm_dqrr_cmode cmode, u8 max_fill)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u32 cfg;
+
+	/* Make sure the DQRR will be idle when we enable */
+	qm_out(DQRR_SDQCR, 0);
+	qm_out(DQRR_VDQCR, 0);
+	qm_out(DQRR_PDQCR, 0);
+	dqrr->ring = portal->addr.ce + QM_CL_DQRR;
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->cursor = dqrr->ring + dqrr->ci;
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+	dqrr->vbit = (qm_in(DQRR_PI_CINH) & QM_DQRR_SIZE) ?
+			QM_DQRR_VERB_VBIT : 0;
+	dqrr->ithresh = qm_in(DQRR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	dqrr->dmode = dmode;
+	dqrr->pmode = pmode;
+	dqrr->cmode = cmode;
+#endif
+	/* Invalidate every ring entry before beginning */
+	for (cfg = 0; cfg < QM_DQRR_SIZE; cfg++)
+		dccivac(qm_cl(dqrr->ring, cfg));
+	cfg = (qm_in(CFG) & 0xff000f00) |
+		((max_fill & (QM_DQRR_SIZE - 1)) << 20) | /* DQRR_MF */
+		((dmode & 1) << 18) |			/* DP */
+		((cmode & 3) << 16) |			/* DCM */
+		0xa0 |					/* RE+SE */
+		(0 ? 0x40 : 0) |			/* Ignore RP */
+		(0 ? 0x10 : 0);				/* Ignore SP */
+	qm_out(CFG, cfg);
+	qm_dqrr_set_maxfill(portal, max_fill);
+	return 0;
+}
+
+static inline void qm_dqrr_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if ((dqrr->cmode != qm_dqrr_cdc) &&
+	    (dqrr->ci != DQRR_PTR2IDX(dqrr->cursor)))
+		pr_crit("Ignoring completed DQRR entries\n");
+#endif
+}
+
+static inline int qm_mr_init(struct qm_portal *portal,
+			     __maybe_unused enum qm_mr_pmode pmode,
+			     enum qm_mr_cmode cmode)
+{
+	register struct qm_mr *mr = &portal->mr;
+	u32 cfg;
+
+	mr->ring = portal->addr.ce + QM_CL_MR;
+	mr->pi = qm_in(MR_PI_CINH) & (QM_MR_SIZE - 1);
+	mr->ci = qm_in(MR_CI_CINH) & (QM_MR_SIZE - 1);
+	mr->cursor = mr->ring + mr->ci;
+	mr->fill = qm_cyc_diff(QM_MR_SIZE, mr->ci, mr->pi);
+	mr->vbit = (qm_in(MR_PI_CINH) & QM_MR_SIZE) ? QM_MR_VERB_VBIT : 0;
+	mr->ithresh = qm_in(MR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mr->pmode = pmode;
+	mr->cmode = cmode;
+#endif
+	cfg = (qm_in(CFG) & 0xfffff0ff) |
+		((cmode & 1) << 8);		/* QCSP_CFG:MM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+	const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+	DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+		mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+		if (!mr->pi)
+			mr->vbit ^= QM_MR_VERB_VBIT;
+		mr->fill++;
+		res = MR_INC(res);
+	}
+	dcbit_ro(res);
+}
+
+static inline
+struct qman_portal *qman_create_portal(
+			struct qman_portal *portal,
+			      const struct qm_portal_config *c,
+			      const struct qman_cgrs *cgrs)
+{
+	struct qm_portal *p;
+	char buf[16];
+	int ret;
+	u32 isdr;
+
+	p = &portal->p;
+
+	portal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+	/*
+	 * prep the low-level portal struct with the mapped addresses from the
+	 * config, everything that follows depends on it and "config" is more
+	 * for (de)reference
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	/*
+	 * If CI-stashing is used, the current defaults use a threshold of 3,
+	 * and stash with higher-than-DQRR priority.
+	 */
+	if (qm_eqcr_init(p, qm_eqcr_pvb,
+			 portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {
+		pr_err("Qman EQCR initialisation failed\n");
+		goto fail_eqcr;
+	}
+	if (qm_dqrr_init(p, c, qm_dqrr_dpush, qm_dqrr_pvb,
+			 qm_dqrr_cdc, DQRR_MAXFILL)) {
+		pr_err("Qman DQRR initialisation failed\n");
+		goto fail_dqrr;
+	}
+	if (qm_mr_init(p, qm_mr_pvb, qm_mr_cci)) {
+		pr_err("Qman MR initialisation failed\n");
+		goto fail_mr;
+	}
+	if (qm_mc_init(p)) {
+		pr_err("Qman MC initialisation failed\n");
+		goto fail_mc;
+	}
+
+	/* static interrupt-gating controls */
+	qm_dqrr_set_ithresh(p, 0);
+	qm_mr_set_ithresh(p, 0);
+	qm_isr_set_iperiod(p, 0);
+	portal->cgrs = kmalloc(2 * sizeof(*cgrs), GFP_KERNEL);
+	if (!portal->cgrs)
+		goto fail_cgrs;
+	/* initial snapshot is no-depletion */
+	qman_cgrs_init(&portal->cgrs[1]);
+	if (cgrs)
+		portal->cgrs[0] = *cgrs;
+	else
+		/* if the given mask is NULL, assume all CGRs can be seen */
+		qman_cgrs_fill(&portal->cgrs[0]);
+	INIT_LIST_HEAD(&portal->cgr_cbs);
+	spin_lock_init(&portal->cgr_lock);
+	portal->bits = 0;
+	portal->slowpoll = 0;
+	portal->sdqcr = QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |
+			QM_SDQCR_DEDICATED_PRECEDENCE | QM_SDQCR_TYPE_PRIO_QOS |
+			QM_SDQCR_TOKEN_SET(0xab) | QM_SDQCR_CHANNELS_DEDICATED;
+	portal->dqrr_disable_ref = 0;
+	portal->cb_dc_ern = NULL;
+	sprintf(buf, "qportal-%d", c->channel);
+	dpa_rbtree_init(&portal->retire_table);
+	isdr = 0xffffffff;
+	qm_isr_disable_write(p, isdr);
+	portal->irq_sources = 0;
+	qm_isr_enable_write(p, portal->irq_sources);
+	qm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, portal_isr, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need EQCR to be empty before continuing */
+	isdr &= ~QM_PIRQ_EQCI;
+	qm_isr_disable_write(p, isdr);
+	ret = qm_eqcr_get_fill(p);
+	if (ret) {
+		pr_err("Qman EQCR unclean\n");
+		goto fail_eqcr_empty;
+	}
+	isdr &= ~(QM_PIRQ_DQRI | QM_PIRQ_MRI);
+	qm_isr_disable_write(p, isdr);
+	if (qm_dqrr_current(p)) {
+		pr_err("Qman DQRR unclean\n");
+		qm_dqrr_cdc_consume_n(p, 0xffff);
+	}
+	if (qm_mr_current(p)) {
+		/* special handling, drain just in case it's a few FQRNIs */
+		if (drain_mr_fqrni(p))
+			goto fail_dqrr_mr_empty;
+	}
+	/* Success */
+	portal->config = c;
+	qm_isr_disable_write(p, 0);
+	qm_isr_uninhibit(p);
+	/* Write a sane SDQCR */
+	qm_dqrr_sdqcr_set(p, portal->sdqcr);
+	return portal;
+fail_dqrr_mr_empty:
+fail_eqcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->cgrs);
+	spin_lock_destroy(&portal->cgr_lock);
+fail_cgrs:
+	qm_mc_finish(p);
+fail_mc:
+	qm_mr_finish(p);
+fail_mr:
+	qm_dqrr_finish(p);
+fail_dqrr:
+	qm_eqcr_finish(p);
+fail_eqcr:
+	return NULL;
+}
+
+struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
+					      const struct qman_cgrs *cgrs)
+{
+	struct qman_portal *res;
+	struct qman_portal *portal = get_affine_portal();
+	/* A criterion for calling this function (from qman_driver.c) is that
+	 * we're already affine to the cpu and won't schedule onto another cpu.
+	 */
+
+	res = qman_create_portal(portal, c, cgrs);
+	if (res) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		affine_channels[c->cpu] = c->channel;
+		spin_unlock(&affine_mask_lock);
+	}
+	return res;
+}
+
+static inline
+void qman_destroy_portal(struct qman_portal *qm)
+{
+	const struct qm_portal_config *pcfg;
+
+	/* Stop dequeues on the portal */
+	qm_dqrr_sdqcr_set(&qm->p, 0);
+
+	/*
+	 * NB we do this to "quiesce" EQCR. If we add enqueue-completions or
+	 * something related to QM_PIRQ_EQCI, this may need fixing.
+	 * Also, due to the prefetching model used for CI updates in the enqueue
+	 * path, this update will only invalidate the CI cacheline *after*
+	 * working on it, so we need to call this twice to ensure a full update
+	 * irrespective of where the enqueue processing was at when the teardown
+	 * began.
+	 */
+	qm_eqcr_cce_update(&qm->p);
+	qm_eqcr_cce_update(&qm->p);
+	pcfg = qm->config;
+
+	free_irq(pcfg->irq, qm);
+
+	kfree(qm->cgrs);
+	qm_mc_finish(&qm->p);
+	qm_mr_finish(&qm->p);
+	qm_dqrr_finish(&qm->p);
+	qm_eqcr_finish(&qm->p);
+
+	qm->config = NULL;
+
+	spin_lock_destroy(&qm->cgr_lock);
+}
+
+const struct qm_portal_config *qman_destroy_affine_portal(void)
+{
+	/* We don't want to redirect if we're a slave, use "raw" */
+	struct qman_portal *qm = get_affine_portal();
+	const struct qm_portal_config *pcfg;
+	int cpu;
+
+	pcfg = qm->config;
+	cpu = pcfg->cpu;
+
+	qman_destroy_portal(qm);
+
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int qman_get_portal_index(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	return p->config->index;
+}
+
+/* Inline helper to reduce nesting in __poll_portal_slow() */
+static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
+				   const struct qm_mr_entry *msg, u8 verb)
+{
+	FQLOCK(fq);
+	switch (verb) {
+	case QM_MR_VERB_FQRL:
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_ORL));
+		fq_clear(fq, QMAN_FQ_STATE_ORL);
+		table_del_fq(p, fq);
+		break;
+	case QM_MR_VERB_FQRN:
+		DPAA_ASSERT((fq->state == qman_fq_state_parked) ||
+			    (fq->state == qman_fq_state_sched));
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_CHANGING));
+		fq_clear(fq, QMAN_FQ_STATE_CHANGING);
+		if (msg->fq.fqs & QM_MR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (msg->fq.fqs & QM_MR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		fq->state = qman_fq_state_retired;
+		break;
+	case QM_MR_VERB_FQPN:
+		DPAA_ASSERT(fq->state == qman_fq_state_sched);
+		DPAA_ASSERT(fq_isclear(fq, QMAN_FQ_STATE_CHANGING));
+		fq->state = qman_fq_state_parked;
+	}
+	FQUNLOCK(fq);
+}
+
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
+{
+	const struct qm_mr_entry *msg;
+	struct qm_mr_entry swapped_msg;
+
+	if (is & QM_PIRQ_CSCI) {
+		struct qman_cgrs rr, c;
+		struct qm_mc_result *mcr;
+		struct qman_cgr *cgr;
+
+		spin_lock(&p->cgr_lock);
+		/*
+		 * The CSCI bit must be cleared _before_ issuing the
+		 * Query Congestion State command, to ensure that a long
+		 * CGR State Change callback cannot miss an intervening
+		 * state change.
+		 */
+		qm_isr_status_clear(&p->p, QM_PIRQ_CSCI);
+		qm_mc_start(&p->p);
+		qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+		while (!(mcr = qm_mc_result(&p->p)))
+			cpu_relax();
+		/* mask out the ones I'm not interested in */
+		qman_cgrs_and(&rr, (const struct qman_cgrs *)
+			&mcr->querycongestion.state, &p->cgrs[0]);
+		/* check previous snapshot for delta, enter/exit congestion */
+		qman_cgrs_xor(&c, &rr, &p->cgrs[1]);
+		/* update snapshot */
+		qman_cgrs_cp(&p->cgrs[1], &rr);
+		/* Invoke callback */
+		list_for_each_entry(cgr, &p->cgr_cbs, node)
+			if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid))
+				cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));
+		spin_unlock(&p->cgr_lock);
+	}
+
+	if (is & QM_PIRQ_EQRI) {
+		qm_eqcr_cce_update(&p->p);
+		qm_eqcr_set_ithresh(&p->p, 0);
+		wake_up(&affine_queue);
+	}
+
+	if (is & QM_PIRQ_MRI) {
+		struct qman_fq *fq;
+		u8 verb, num = 0;
+mr_loop:
+		qm_mr_pvb_update(&p->p);
+		msg = qm_mr_current(&p->p);
+		if (!msg)
+			goto mr_done;
+		swapped_msg = *msg;
+		hw_fd_to_cpu(&swapped_msg.ern.fd);
+		verb = msg->verb & QM_MR_VERB_TYPE_MASK;
+		/* The message is a software ERN iff the 0x20 bit is clear */
+		if (verb & 0x20) {
+			switch (verb) {
+			case QM_MR_VERB_FQRNI:
+				/* nada, we drop FQRNIs on the floor */
+				break;
+			case QM_MR_VERB_FQRN:
+			case QM_MR_VERB_FQRL:
+				/* Lookup in the retirement table */
+				fq = table_find_fq(p,
+						   be32_to_cpu(msg->fq.fqid));
+				DPAA_BUG_ON(!fq);
+				fq_state_change(p, fq, &swapped_msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_FQPN:
+				/* Parked */
+				fq = (void *)(uintptr_t)
+					be32_to_cpu(msg->fq.contextB);
+				fq_state_change(p, fq, msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_DC_ERN:
+				/* DCP ERN */
+				if (p->cb_dc_ern)
+					p->cb_dc_ern(p, msg);
+				else if (cb_dc_ern)
+					cb_dc_ern(p, msg);
+				else {
+					static int warn_once;
+
+					if (!warn_once) {
+						pr_crit("Leaking DCP ERNs!\n");
+						warn_once = 1;
+					}
+				}
+				break;
+			default:
+				pr_crit("Invalid MR verb 0x%02x\n", verb);
+			}
+		} else {
+			/* It's a software ERN */
+			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+			fq->cb.ern(p, fq, &swapped_msg);
+		}
+		num++;
+		qm_mr_next(&p->p);
+		goto mr_loop;
+mr_done:
+		qm_mr_cci_consume(&p->p, num);
+	}
+	/*
+	 * QM_PIRQ_CSCI/CCSCI has already been cleared, as part of its specific
+	 * processing. If that interrupt source has meanwhile been re-asserted,
+	 * we mustn't clear it here (or in the top-level interrupt handler).
+	 */
+	return is & (QM_PIRQ_EQCI | QM_PIRQ_EQRI | QM_PIRQ_MRI);
+}
+
+/*
+ * remove some slowish-path stuff from the "fast path" and make sure it isn't
+ * inlined.
+ */
+static noinline void clear_vdqcr(struct qman_portal *p, struct qman_fq *fq)
+{
+	p->vdqcr_owned = NULL;
+	FQLOCK(fq);
+	fq_clear(fq, QMAN_FQ_STATE_VDQCR);
+	FQUNLOCK(fq);
+	wake_up(&affine_queue);
+}
+
+/*
+ * The only states that would conflict with other things if they ran at the
+ * same time on the same cpu are:
+ *
+ *   (i) setting/clearing vdqcr_owned, and
+ *  (ii) clearing the NE (Not Empty) flag.
+ *
+ * Both are safe because:
+ *
+ *   (i) this clearing can only occur after qman_set_vdq() has set the
+ *	 vdqcr_owned field (which it does before setting VDQCR), and
+ *	 qman_volatile_dequeue() blocks interrupts and preemption while this is
+ *	 done so that we can't interfere.
+ *  (ii) the NE flag is only cleared after qman_retire_fq() has set it, and as
+ *	 with (i) that API prevents us from interfering until it's safe.
+ *
+ * The good thing is that qman_set_vdq() and qman_retire_fq() run far
+ * less frequently (ie. per-FQ) than __poll_portal_fast() does, so the net
+ * advantage comes from this function not having to "lock" anything at all.
+ *
+ * Note also that the callbacks are invoked at points which are safe against the
+ * above potential conflicts, but that this function itself is not re-entrant
+ * (this is because the function tracks one end of each FIFO in the portal and
+ * we do *not* want to lock that). So the consequence is that it is safe for
+ * user callbacks to call into any QMan API.
+ */
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit)
+{
+	const struct qm_dqrr_entry *dq;
+	struct qman_fq *fq;
+	enum qman_cb_dqrr_result res;
+	unsigned int limit = 0;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+	do {
+		qm_dqrr_pvb_update(&p->p);
+		dq = qm_dqrr_current(&p->p);
+		if (!dq)
+			break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+		/* If running on a little-endian system, the fields of the
+		 * dequeue entry must be byte-swapped. Because the QMan
+		 * hardware ignores writes, the DQRR entry is copied and the
+		 * index stored within the copy.
+		 */
+		shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+		*shadow = *dq;
+		dq = shadow;
+		shadow->fqid = be32_to_cpu(shadow->fqid);
+		shadow->contextB = be32_to_cpu(shadow->contextB);
+		shadow->seqnum = be16_to_cpu(shadow->seqnum);
+		hw_fd_to_cpu(&shadow->fd);
+#endif
+
+		if (dq->stat & QM_DQRR_STAT_UNSCHEDULED) {
+			/*
+			 * VDQCR: don't trust context_b as the FQ may have
+			 * been configured for h/w consumption and we're
+			 * draining it post-retirement.
+			 */
+			fq = p->vdqcr_owned;
+			/*
+			 * We only set QMAN_FQ_STATE_NE when retiring, so we
+			 * only need to check for clearing it when doing
+			 * volatile dequeues.  It's one less thing to check
+			 * in the critical path (SDQCR).
+			 */
+			if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+				fq_clear(fq, QMAN_FQ_STATE_NE);
+			/*
+			 * This is duplicated from the SDQCR code, but we
+			 * have stuff to do before *and* after this callback,
+			 * and we don't want multiple if()s in the critical
+			 * path (SDQCR).
+			 */
+			res = fq->cb.dqrr(p, fq, dq);
+			if (res == qman_cb_dqrr_stop)
+				break;
+			/* Check for VDQCR completion */
+			if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+				clear_vdqcr(p, fq);
+		} else {
+			/* SDQCR: context_b points to the FQ */
+			fq = (void *)(uintptr_t)dq->contextB;
+			/* Now let the callback do its stuff */
+			res = fq->cb.dqrr(p, fq, dq);
+			/*
+			 * The callback can request that we exit without
+			 * consuming this entry or advancing.
+			 */
+			if (res == qman_cb_dqrr_stop)
+				break;
+		}
+		/* Interpret 'dq' from a driver perspective. */
+		/*
+		 * Parking isn't possible unless HELDACTIVE was set. NB,
+		 * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+		 * check for HELDACTIVE to cover both.
+		 */
+		DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+			    (res != qman_cb_dqrr_park));
+		/* just means "skip it, I'll consume it myself later on" */
+		if (res != qman_cb_dqrr_defer)
+			qm_dqrr_cdc_consume_1ptr(&p->p, dq,
+						 res == qman_cb_dqrr_park);
+		/* Move forward */
+		qm_dqrr_next(&p->p);
+		/*
+		 * Entry processed and consumed, increment our counter.  The
+		 * callback can request that we exit after consuming the
+		 * entry, and we also exit if we reach our processing limit,
+		 * so loop back only if neither of these conditions is met.
+		 */
+	} while (++limit < poll_limit && res != qman_cb_dqrr_consume_stop);
+
+	return limit;
+}
+
+u16 qman_affine_channel(int cpu)
+{
+	if (cpu < 0) {
+		struct qman_portal *portal = get_affine_portal();
+
+		cpu = portal->config->cpu;
+	}
+	DPAA_BUG_ON(!CPU_ISSET(cpu, &affine_mask));
+	return affine_channels[cpu];
+}
+
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
+{
+	struct qman_portal *p = get_affine_portal();
+	const struct qm_dqrr_entry *dq;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+
+	qm_dqrr_pvb_update(&p->p);
+	dq = qm_dqrr_current(&p->p);
+	if (!dq)
+		return NULL;
+
+	if (!(dq->stat & QM_DQRR_STAT_FD_VALID)) {
+		/* Invalid DQRR entry - consume it and return NULL to the
+		 * user, as no packet is seen.
+		 */
+		qman_dqrr_consume(fq, (struct qm_dqrr_entry *)dq);
+		return NULL;
+	}
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+	*shadow = *dq;
+	dq = shadow;
+	shadow->fqid = be32_to_cpu(shadow->fqid);
+	shadow->contextB = be32_to_cpu(shadow->contextB);
+	shadow->seqnum = be16_to_cpu(shadow->seqnum);
+	hw_fd_to_cpu(&shadow->fd);
+#endif
+
+	if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+		fq_clear(fq, QMAN_FQ_STATE_NE);
+
+	return (struct qm_dqrr_entry *)dq;
+}
+
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+		clear_vdqcr(p, fq);
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, 0);
+	qm_dqrr_next(&p->p);
+}
+
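+/*
+ * A typical run-to-completion RX loop (a sketch; it assumes the calling
+ * thread has already been made portal-affine by the framework):
+ *
+ *	while (!stop)
+ *		qman_poll_dqrr(16);
+ *
+ * Each processed DQRR entry invokes the owning FQ's cb.dqrr callback.
+ */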
+int qman_poll_dqrr(unsigned int limit)
+{
+	struct qman_portal *p = get_affine_portal();
+	int ret;
+
+	ret = __poll_portal_fast(p, limit);
+	return ret;
+}
+
+void qman_poll(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if ((~p->irq_sources) & QM_PIRQ_SLOW) {
+		if (!(p->slowpoll--)) {
+			u32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;
+			u32 active = __poll_portal_slow(p, is);
+
+			if (active) {
+				qm_isr_status_clear(&p->p, active);
+				p->slowpoll = SLOW_POLL_BUSY;
+			} else
+				p->slowpoll = SLOW_POLL_IDLE;
+		}
+	}
+	if ((~p->irq_sources) & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+}
+
+void qman_stop_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qman_stop_dequeues_ex(p);
+}
+
+void qman_start_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	DPAA_ASSERT(p->dqrr_disable_ref > 0);
+	if (!(--p->dqrr_disable_ref))
+		qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
+}
+
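+/*
+ * Add dequeue sources to the portal's static dequeue command (SDQCR). A
+ * sketch of enabling a pool channel, using the same conversion helper the
+ * shutdown path below uses:
+ *
+ *	qman_static_dequeue_add(QM_SDQCR_CHANNELS_POOL_CONV(pool_channel));
+ */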
+void qman_static_dequeue_add(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr |= pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+void qman_static_dequeue_del(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr &= ~pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+u32 qman_static_dequeue_get(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	return p->sdqcr;
+}
+
+void qman_dca(struct qm_dqrr_entry *dq, int park_request)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);
+}
+
+/* Frame queue API */
+static const char *mcr_result_str(u8 result)
+{
+	switch (result) {
+	case QM_MCR_RESULT_NULL:
+		return "QM_MCR_RESULT_NULL";
+	case QM_MCR_RESULT_OK:
+		return "QM_MCR_RESULT_OK";
+	case QM_MCR_RESULT_ERR_FQID:
+		return "QM_MCR_RESULT_ERR_FQID";
+	case QM_MCR_RESULT_ERR_FQSTATE:
+		return "QM_MCR_RESULT_ERR_FQSTATE";
+	case QM_MCR_RESULT_ERR_NOTEMPTY:
+		return "QM_MCR_RESULT_ERR_NOTEMPTY";
+	case QM_MCR_RESULT_PENDING:
+		return "QM_MCR_RESULT_PENDING";
+	case QM_MCR_RESULT_ERR_BADCOMMAND:
+		return "QM_MCR_RESULT_ERR_BADCOMMAND";
+	}
+	return "<unknown MCR result>";
+}
+
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
+{
+	struct qm_fqd fqd;
+	struct qm_mcr_queryfq_np np;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID) {
+		int ret = qman_alloc_fqid(&fqid);
+
+		if (ret)
+			return ret;
+	}
+	spin_lock_init(&fq->fqlock);
+	fq->fqid = fqid;
+	fq->flags = flags;
+	fq->state = qman_fq_state_oos;
+	fq->cgr_groupid = 0;
+
+	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
+		return 0;
+	/* Everything else is AS_IS support */
+	p = get_affine_portal();
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(&fqd);
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ_NP);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ_NP failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	np = mcr->queryfq_np;
+	/* Phew, have queryfq and queryfq_np results, stitch together
+	 * the FQ object from those.
+	 */
+	fq->cgr_groupid = fqd.cgid;
+	switch (np.state & QM_MCR_NP_STATE_MASK) {
+	case QM_MCR_NP_STATE_OOS:
+		break;
+	case QM_MCR_NP_STATE_RETIRED:
+		fq->state = qman_fq_state_retired;
+		if (np.frm_cnt)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		break;
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+		fq->state = qman_fq_state_sched;
+		if (np.state & QM_MCR_NP_STATE_R)
+			fq_set(fq, QMAN_FQ_STATE_CHANGING);
+		break;
+	case QM_MCR_NP_STATE_PARKED:
+		fq->state = qman_fq_state_parked;
+		break;
+	default:
+		DPAA_ASSERT(NULL == "invalid FQ state");
+	}
+	if (fqd.fq_ctrl & QM_FQCTRL_CGE)
+		fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+	return 0;
+err:
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID)
+		qman_release_fqid(fqid);
+	return -EIO;
+}
+
+void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
+{
+	/*
+	 * We don't need to lock the FQ as it is a pre-condition that the FQ be
+	 * quiesced. Instead, run some checks.
+	 */
+	switch (fq->state) {
+	case qman_fq_state_parked:
+		DPAA_ASSERT(flags & QMAN_FQ_DESTROY_PARKED);
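+		/* fall through - a parked FQ is released just like an OOS FQ */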
+	case qman_fq_state_oos:
+		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
+			qman_release_fqid(fq->fqid);
+
+		return;
+	default:
+		break;
+	}
+	DPAA_ASSERT(NULL == "qman_destroy_fq() on unquiesced FQ!");
+}
+
+u32 qman_fq_fqid(struct qman_fq *fq)
+{
+	return fq->fqid;
+}
+
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags)
+{
+	if (state)
+		*state = fq->state;
+	if (flags)
+		*flags = fq->flags;
+}
+
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	u8 res, myverb = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		QM_MCC_VERB_INITFQ_SCHED : QM_MCC_VERB_INITFQ_PARKED;
+
+	if ((fq->state != qman_fq_state_oos) &&
+	    (fq->state != qman_fq_state_parked))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	if (opts && (opts->we_mask & QM_INITFQ_WE_OAC)) {
+		/* OAC can't be set at the same time as TDTHRESH */
+		if (opts->we_mask & QM_INITFQ_WE_TDTHRESH)
+			return -EINVAL;
+	}
+	/* Issue an INITFQ_[PARKED|SCHED] management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     ((fq->state != qman_fq_state_oos) &&
+				(fq->state != qman_fq_state_parked)))) {
+		FQUNLOCK(fq);
+		return -EBUSY;
+	}
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initfq = *opts;
+	mcc->initfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->initfq.count = 0;
+	/*
+	 * If the FQ does *not* have the TO_DCPORTAL flag, context_b is set as a
+	 * demux pointer. Otherwise, the caller-provided value is allowed to
+	 * stand, don't overwrite it.
+	 */
+	if (fq_isclear(fq, QMAN_FQ_FLAG_TO_DCPORTAL)) {
+		dma_addr_t phys_fq;
+
+		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+		/*
+		 * Set the physical address as well. NB: if the user wasn't
+		 * trying to set CONTEXTA, clear the stashing settings.
+		 */
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_CONTEXTA)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			memset(&mcc->initfq.fqd.context_a, 0,
+			       sizeof(mcc->initfq.fqd.context_a));
+		} else {
+			phys_fq = rte_mem_virt2phy(fq);
+			qm_fqd_stashing_set64(&mcc->initfq.fqd, phys_fq);
+		}
+	}
+	if (flags & QMAN_INITFQ_FLAG_LOCAL) {
+		mcc->initfq.fqd.dest.channel = p->config->channel;
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_DESTWQ)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_DESTWQ;
+			mcc->initfq.fqd.dest.wq = 4;
+		}
+	}
+	mcc->initfq.we_mask = cpu_to_be16(mcc->initfq.we_mask);
+	cpu_to_hw_fqd(&mcc->initfq.fqd);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		FQUNLOCK(fq);
+		return -EIO;
+	}
+	if (opts) {
+		if (opts->we_mask & QM_INITFQ_WE_FQCTRL) {
+			if (opts->fqd.fq_ctrl & QM_FQCTRL_CGE)
+				fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+			else
+				fq_clear(fq, QMAN_FQ_STATE_CGR_EN);
+		}
+		if (opts->we_mask & QM_INITFQ_WE_CGID)
+			fq->cgr_groupid = opts->fqd.cgid;
+	}
+	fq->state = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		qman_fq_state_sched : qman_fq_state_parked;
+	FQUNLOCK(fq);
+	return 0;
+}
+
+int qman_schedule_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_parked)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue an ALTERFQ_SCHED management command */
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state != qman_fq_state_parked))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_sched;
+out:
+	FQUNLOCK(fq);
+
+	return ret;
+}
+
+int qman_retire_fq(struct qman_fq *fq, u32 *flags)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int rval;
+	u8 res;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_sched))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_retired) ||
+				(fq->state == qman_fq_state_oos))) {
+		rval = -EBUSY;
+		goto out;
+	}
+	rval = table_push_fq(p, fq);
+	if (rval)
+		goto out;
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_RETIRE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_RETIRE);
+	res = mcr->result;
+	/*
+	 * "Elegant" would be to treat OK/PENDING the same way; set CHANGING,
+	 * and defer the flags until FQRNI or FQRN (respectively) show up. But
+	 * "Friendly" is to process OK immediately, and not set CHANGING. We do
+	 * friendly, otherwise the caller doesn't necessarily have a fully
+	 * "retired" FQ on return even if the retirement was immediate. However
+	 * this does mean some code duplication between here and
+	 * fq_state_change().
+	 */
+	if (likely(res == QM_MCR_RESULT_OK)) {
+		rval = 0;
+		/* Process 'fq' right away, we'll ignore FQRNI */
+		if (mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		if (flags)
+			*flags = fq->flags;
+		fq->state = qman_fq_state_retired;
+		if (fq->cb.fqs) {
+			/*
+			 * Another issue with supporting "immediate" retirement
+			 * is that we're forced to drop FQRNIs, because by the
+			 * time they're seen it may already be "too late" (the
+			 * fq may have been OOS'd and free()'d already). But if
+			 * the upper layer wants a callback whether it's
+			 * immediate or not, we have to fake a "MR" entry to
+			 * look like an FQRNI...
+			 */
+			struct qm_mr_entry msg;
+
+			msg.verb = QM_MR_VERB_FQRNI;
+			msg.fq.fqs = mcr->alterfq.fqs;
+			msg.fq.fqid = fq->fqid;
+			msg.fq.contextB = (u32)(uintptr_t)fq;
+			fq->cb.fqs(p, fq, &msg);
+		}
+	} else if (res == QM_MCR_RESULT_PENDING) {
+		rval = 1;
+		fq_set(fq, QMAN_FQ_STATE_CHANGING);
+	} else {
+		rval = -EIO;
+		table_del_fq(p, fq);
+	}
+out:
+	FQUNLOCK(fq);
+	return rval;
+}
+
+int qman_oos_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_retired)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_BLOCKOOS)) ||
+		     (fq->state != qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_OOS);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_OOS);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_oos;
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_fq_flow_control(struct qman_fq *fq, int xon)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+	u8 myverb;
+
+	if ((fq->state == qman_fq_state_oos) ||
+	    (fq->state == qman_fq_state_retired) ||
+		(fq->state == qman_fq_state_parked))
+		return -EINVAL;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue an ALTER_FQXON or ALTER_FQXOFF management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_parked) ||
+			(fq->state == qman_fq_state_oos) ||
+			(fq->state == qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = fq->fqid;
+	mcc->alterfq.count = 0;
+	myverb = xon ? QM_MCC_VERB_ALTER_FQXON : QM_MCC_VERB_ALTER_FQXOFF;
+
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	*fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(fqd);
+	return 0;
+}
+
+int qman_query_fq_has_pkts(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	int ret = 0;
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		ret = !!mcr->queryfq_np.frm_cnt;
+	return ret;
+}
+
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		*np = mcr->queryfq_np;
+		np->fqd_link = be24_to_cpu(np->fqd_link);
+		np->odp_seq = be16_to_cpu(np->odp_seq);
+		np->orp_nesn = be16_to_cpu(np->orp_nesn);
+		np->orp_ea_hseq  = be16_to_cpu(np->orp_ea_hseq);
+		np->orp_ea_tseq  = be16_to_cpu(np->orp_ea_tseq);
+		np->orp_ea_hptr = be24_to_cpu(np->orp_ea_hptr);
+		np->orp_ea_tptr = be24_to_cpu(np->orp_ea_tptr);
+		np->pfdr_hptr = be24_to_cpu(np->pfdr_hptr);
+		np->pfdr_tptr = be24_to_cpu(np->pfdr_tptr);
+		np->ics_surp = be16_to_cpu(np->ics_surp);
+		np->byte_cnt = be32_to_cpu(np->byte_cnt);
+		np->frm_cnt = be24_to_cpu(np->frm_cnt);
+		np->ra1_sfdr = be16_to_cpu(np->ra1_sfdr);
+		np->ra2_sfdr = be16_to_cpu(np->ra2_sfdr);
+		np->od1_sfdr = be16_to_cpu(np->od1_sfdr);
+		np->od2_sfdr = be16_to_cpu(np->od2_sfdr);
+		np->od3_sfdr = be16_to_cpu(np->od3_sfdr);
+	}
+	if (res == QM_MCR_RESULT_ERR_FQID)
+		return -ERANGE;
+	else if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	return 0;
+}
+
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res, myverb;
+
+	myverb = (query_dedicated) ? QM_MCR_VERB_QUERYWQ_DEDICATED :
+				 QM_MCR_VERB_QUERYWQ;
+	mcc = qm_mc_start(&p->p);
+	mcc->querywq.channel.id = cpu_to_be16(wq->channel.id);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		int i, array_len;
+
+		wq->channel.id = be16_to_cpu(mcr->querywq.channel.id);
+		array_len = ARRAY_SIZE(mcr->querywq.wq_len);
+		for (i = 0; i < array_len; i++)
+			wq->wq_len[i] = be32_to_cpu(mcr->querywq.wq_len[i]);
+	}
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERYWQ failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->cgrtestwrite.cgid = cgr->cgrid;
+	mcc->cgrtestwrite.i_bcnt_hi = (u8)(i_bcnt >> 32);
+	mcc->cgrtestwrite.i_bcnt_lo = (u32)i_bcnt;
+	qm_mc_commit(&p->p, QM_MCC_VERB_CGRTESTWRITE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_CGRTESTWRITE);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*result = mcr->cgrtestwrite;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("CGR TEST WRITE failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->querycgr.cgid = cgr->cgrid;
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCGR);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYCGR);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*cgrd = mcr->querycgr;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CGR failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	cgrd->cgr.wr_parm_g.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_g.word);
+	cgrd->cgr.wr_parm_y.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_y.word);
+	cgrd->cgr.wr_parm_r.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_r.word);
+	cgrd->cgr.cscn_targ =  be32_to_cpu(cgrd->cgr.cscn_targ);
+	cgrd->cgr.__cs_thres = be16_to_cpu(cgrd->cgr.__cs_thres);
+	for (i = 0; i < ARRAY_SIZE(cgrd->cscn_targ_swp); i++)
+		cgrd->cscn_targ_swp[i] =
+			be32_to_cpu(cgrd->cscn_targ_swp[i]);
+	return 0;
+}
+
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion)
+{
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	qm_mc_start(&p->p);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			QM_MCC_VERB_QUERYCONGESTION);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*congestion = mcr->querycongestion;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CONGESTION failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	for (i = 0; i < ARRAY_SIZE(congestion->state.state); i++)
+		congestion->state.state[i] =
+			be32_to_cpu(congestion->state.state[i]);
+	return 0;
+}
+
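+/*
+ * Volatile dequeues pull an exact number of frames from a parked or
+ * retired FQ via VDQCR, bypassing the scheduler. Only one volatile
+ * dequeue may be outstanding per portal, hence the vdqcr_owned checks.
+ */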
+int qman_set_vdq(struct qman_fq *fq, u16 num)
+{
+	struct qman_portal *p = get_affine_portal();
+	uint32_t vdqcr;
+	int ret = -EBUSY;
+
+	vdqcr = QM_VDQCR_EXACT;
+	vdqcr |= QM_VDQCR_NUMFRAMES_SET(num);
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired)) {
+		ret = -EINVAL;
+		goto out;
+	}
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+		ret = -EBUSY;
+		goto out;
+	}
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (!ret)
+		qm_dqrr_vdqcr_set(&p->p, vdqcr);
+
+out:
+	return ret;
+}
+
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
+			  u32 vdqcr)
+{
+	struct qman_portal *p;
+	int ret = -EBUSY;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired))
+		return -EINVAL;
+	if (vdqcr & QM_VDQCR_FQID_MASK)
+		return -EINVAL;
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+		return -EBUSY;
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	p = get_affine_portal();
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (ret)
+		return ret;
+
+	/* VDQCR is set */
+	qm_dqrr_vdqcr_set(&p->p, vdqcr);
+	return 0;
+}
+
+static noinline void update_eqcr_ci(struct qman_portal *p, u8 avail)
+{
+	if (avail)
+		qm_eqcr_cce_prefetch(&p->p);
+	else
+		qm_eqcr_cce_update(&p->p);
+}
+
+int qman_eqcr_is_empty(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	u8 avail;
+
+	update_eqcr_ci(p, 0);
+	avail = qm_eqcr_get_fill(&p->p);
+	return (avail == 0);
+}
+
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine)
+{
+	if (affine) {
+		struct qman_portal *p = get_affine_portal();
+
+		p->cb_dc_ern = handler;
+	} else
+		cb_dc_ern = handler;
+}
+
+static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
+					struct qman_fq *fq,
+					const struct qm_fd *fd,
+					u32 flags)
+{
+	struct qm_eqcr_entry *eq;
+	u8 avail;
+
+	if (p->use_eqcr_ci_stashing) {
+		/*
+		 * The stashing case is easy, only update if we need to in
+		 * order to try and liberate ring entries.
+		 */
+		eq = qm_eqcr_start_stash(&p->p);
+	} else {
+		/*
+		 * The non-stashing case is harder, need to prefetch ahead of
+		 * time.
+		 */
+		avail = qm_eqcr_get_avail(&p->p);
+		if (avail < 2)
+			update_eqcr_ci(p, avail);
+		eq = qm_eqcr_start_no_stash(&p->p);
+	}
+
+	if (unlikely(!eq))
+		return NULL;
+
+	if (flags & QMAN_ENQUEUE_FLAG_DCA)
+		eq->dca = QM_EQCR_DCA_ENABLE |
+			((flags & QMAN_ENQUEUE_FLAG_DCA_PARK) ?
+					QM_EQCR_DCA_PARK : 0) |
+			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
+	eq->fqid = cpu_to_be32(fq->fqid);
+	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+	eq->fd = *fd;
+	cpu_to_hw_fd(&eq->fd);
+	return eq;
+}
+
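+/*
+ * Enqueue a single frame descriptor on 'fq'. A sketch of typical TX usage
+ * from a portal-affine thread, retrying while the EQCR is full:
+ *
+ *	while (qman_enqueue(fq, &fd, 0) == -EBUSY)
+ *		cpu_relax();
+ */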
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_CMD_ENQUEUE |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+	/* Factor the below out, it's used from qman_enqueue_orp() too */
+	return 0;
+}
+
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_portal *portal = &p->p;
+
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+
+	u8 i, diff, old_ci, sent = 0;
+
+	/* Update the available entries if no entry is free */
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return 0;
+	}
+
+	/* try to send as many frames as possible */
+	while (eqcr->available && frames_to_send--) {
+		eq->fqid = cpu_to_be32(fq->fqid);
+		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+		eq->fd.opaque_addr = fd->opaque_addr;
+		eq->fd.addr = cpu_to_be40(fd->addr);
+		eq->fd.status = cpu_to_be32(fd->status);
+		eq->fd.opaque = cpu_to_be32(fd->opaque);
+
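+		/* Advance the write cursor; entries are 64B cachelines, and
+		 * the EQCR sits at the base of the aligned cache-enabled
+		 * portal region, so masking off the size bit wraps the
+		 * cursor back to the start of the ring.
+		 */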
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		eqcr->available--;
+		sent++;
+		fd++;
+	}
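+	/* Order the FD stores above before the verb writes that follow */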
+	lwsync();
+
+	/* So that the flushes complete faster, write the verb for all the
+	 * recorded entries first, then flush the lines in a second pass.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		eq->__dont_write_directly__verb =
+			QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
+		prev_eq = eq;
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		if (unlikely((prev_eq + 1) != eq))
+			eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+	}
+
+	/* We need to flush all the lines, but without load/store operations
+	 * between them.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		dcbf(eq);
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+	}
+	/* Update cursor for the next call */
+	eqcr->cursor = eq;
+	return sent;
+}
+
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum)
+{
+	struct qman_portal *p  = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Process ORP-specifics here */
+	if (flags & QMAN_ENQUEUE_FLAG_NLIS)
+		orp_seqnum |= QM_EQCR_SEQNUM_NLIS;
+	else {
+		orp_seqnum &= ~QM_EQCR_SEQNUM_NLIS;
+		if (flags & QMAN_ENQUEUE_FLAG_NESN)
+			orp_seqnum |= QM_EQCR_SEQNUM_NESN;
+		else
+			/* No need to check for QMAN_ENQUEUE_FLAG_HOLE */
+			orp_seqnum &= ~QM_EQCR_SEQNUM_NESN;
+	}
+	eq->seqnum = cpu_to_be16(orp_seqnum);
+	eq->orp = cpu_to_be32(orp->fqid);
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_ORP |
+		((flags & (QMAN_ENQUEUE_FLAG_HOLE | QMAN_ENQUEUE_FLAG_NESN)) ?
+				0 : QM_EQCR_VERB_CMD_ENQUEUE) |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+
+	return 0;
+}
+
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+	u8 verb = QM_MCC_VERB_MODIFYCGR;
+
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initcgr = *opts;
+	mcc->initcgr.we_mask = cpu_to_be16(mcc->initcgr.we_mask);
+	mcc->initcgr.cgr.wr_parm_g.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_g.word);
+	mcc->initcgr.cgr.wr_parm_y.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_y.word);
+	mcc->initcgr.cgr.wr_parm_r.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_r.word);
+	mcc->initcgr.cgr.cscn_targ =  cpu_to_be32(mcc->initcgr.cgr.cscn_targ);
+	mcc->initcgr.cgr.__cs_thres = cpu_to_be16(mcc->initcgr.cgr.__cs_thres);
+
+	mcc->initcgr.cgid = cgr->cgrid;
+	if (flags & QMAN_CGR_FLAG_USE_INIT)
+		verb = QM_MCC_VERB_INITCGR;
+	qm_mc_commit(&p->p, verb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == verb);
+	res = mcr->result;
+	return (res == QM_MCR_RESULT_OK) ? 0 : -EIO;
+}
+
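+/* CSCN_TARG is an MSB-first bitmask: software portal n maps to bit
+ * (31 - n), and DCP portal bits follow at an offset of 10 from the MSB.
+ */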
+#define TARG_MASK(n) (0x80000000 >> (n->config->channel - \
+					QM_CHANNEL_SWPORTAL0))
+#define TARG_DCP_MASK(n) (0x80000000 >> (10 + n))
+#define PORTAL_IDX(n) (n->config->channel - QM_CHANNEL_SWPORTAL0)
+
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret;
+	struct qman_portal *p;
+
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	p = get_affine_portal();
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	cgr->chan = p->config->channel;
+	spin_lock(&p->cgr_lock);
+
+	/* if no opts specified, just add it to the list */
+	if (!opts)
+		goto add_list;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		goto release_lock;
+	local_opts = *opts;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+			QM_CGR_TARG_UDP_CTRL_WRITE_BIT | PORTAL_IDX(p);
+	else
+		/* Overwrite TARG */
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+							TARG_MASK(p);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (flags & QMAN_CGR_FLAG_USE_INIT)
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT, &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		goto release_lock;
+add_list:
+	list_add(&cgr->node, &p->cgr_cbs);
+
+	/* Determine if the newly added object requires its callback to be
+	 * invoked immediately, i.e. if the CGR is already congested.
+	 */
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret) {
+		/* we can't go back, so proceed and return success, but scream
+		 * and wail to the log file.
+		 */
+		pr_crit("CGR HW state partially modified\n");
+		ret = 0;
+		goto release_lock;
+	}
+	if (cgr->cb && cgr_state.cgr.cscn_en && qman_cgrs_get(&p->cgrs[1],
+							      cgr->cgrid))
+		cgr->cb(p, cgr, 1);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+	return ret;
+}
+
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcc_initcgr local_opts;
+	struct qm_mcr_querycgr cgr_state;
+	int ret;
+
+	if ((qman_ip_rev & 0xFF00) < QMAN_REV30) {
+		pr_warn("QMan version doesn't support CSCN => DCP portal\n");
+		return -EINVAL;
+	}
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		return ret;
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	if (opts)
+		local_opts = *opts;
+
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+				QM_CGR_TARG_UDP_CTRL_WRITE_BIT |
+				QM_CGR_TARG_UDP_CTRL_DCP | dcp_portal;
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+					TARG_DCP_MASK(dcp_portal);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT,
+				      &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+
+	return ret;
+}
+
+int qman_delete_cgr(struct qman_cgr *cgr)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret = 0;
+	struct qman_cgr *i;
+	struct qman_portal *p = get_affine_portal();
+
+	if (cgr->chan != p->config->channel) {
+		pr_crit("Attempting to delete cgr from different portal than"
+			" it was create: create 0x%x, delete 0x%x\n",
+			cgr->chan, p->config->channel);
+		ret = -EINVAL;
+		goto put_portal;
+	}
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	spin_lock(&p->cgr_lock);
+	list_del(&cgr->node);
+	/*
+	 * If there are no other CGR objects for this CGRID in the list,
+	 * update CSCN_TARG accordingly
+	 */
+	list_for_each_entry(i, &p->cgr_cbs, node)
+		if ((i->cgrid == cgr->cgrid) && i->cb)
+			goto release_lock;
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)  {
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+		goto release_lock;
+	}
+	/* Overwrite TARG */
+	local_opts.we_mask = QM_CGR_WE_CSCN_TARG;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl = PORTAL_IDX(p);
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ &
+							 ~(TARG_MASK(p));
+	ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+put_portal:
+	return ret;
+}
+
+int qman_shutdown_fq(u32 fqid)
+{
+	struct qman_portal *p;
+	struct qm_portal *low_p;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	u8 state;
+	int orl_empty, fq_empty, drain = 0;
+	u32 result;
+	u32 channel, wq;
+	u16 dest_wq;
+
+	p = get_affine_portal();
+	low_p = &p->p;
+
+	/* Determine the state of the FQID */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	state = mcr->queryfq_np.state & QM_MCR_NP_STATE_MASK;
+	if (state == QM_MCR_NP_STATE_OOS)
+		return 0; /* Already OOS, no need to do any more checks */
+
+	/* Query which channel the FQ is using */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+
+	/* Need to store these since the MCR gets reused */
+	dest_wq = be16_to_cpu(mcr->queryfq.fqd.dest_wq);
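+	/* DEST_WQ packs the channel in the upper bits and the work queue
+	 * in the low 3 bits
+	 */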
+	channel = dest_wq >> 3;
+	wq = dest_wq & 0x7;
+
+	switch (state) {
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+	case QM_MCR_NP_STATE_PARKED:
+		orl_empty = 0;
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_RETIRE);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_RETIRE);
+		result = mcr->result; /* Make a copy as we reuse MCR below */
+
+		if (result == QM_MCR_RESULT_PENDING) {
+			/* Need to wait for the FQRN in the message ring, which
+			 * will only occur once the FQ has been drained.  In
+			 * order for the FQ to drain the portal needs to be set
+			 * to dequeue from the channel the FQ is scheduled on
+			 */
+			const struct qm_mr_entry *msg;
+			const struct qm_dqrr_entry *dqrr = NULL;
+			int found_fqrn = 0;
+			__maybe_unused u16 dequeue_wq = 0;
+
+			/* Flag that we need to drain FQ */
+			drain = 1;
+
+			if (channel >= qm_channel_pool1 &&
+			    channel < (u16)(qm_channel_pool1 + 15)) {
+				/* Pool channel, enable the bit in the portal */
+				dequeue_wq = (channel -
+					      qm_channel_pool1 + 1) << 4 | wq;
+			} else if (channel < qm_channel_pool1) {
+				/* Dedicated channel */
+				dequeue_wq = wq;
+			} else {
+				pr_info("Cannot recover FQ 0x%x,"
+					" it is scheduled on channel 0x%x",
+					fqid, channel);
+				return -EBUSY;
+			}
+			/* Set the sdqcr to drain this channel */
+			if (channel < qm_channel_pool1)
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+					  QM_SDQCR_CHANNELS_DEDICATED);
+			else
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+						  QM_SDQCR_CHANNELS_POOL_CONV
+						  (channel));
+			while (!found_fqrn) {
+				/* Keep draining DQRR while checking the MR */
+				qm_dqrr_pvb_update(low_p);
+				dqrr = qm_dqrr_current(low_p);
+				while (dqrr) {
+					qm_dqrr_cdc_consume_1ptr(
+						low_p, dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+				/* Process message ring too */
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+				while (msg) {
+					if ((msg->verb &
+					     QM_MR_VERB_TYPE_MASK)
+					    == QM_MR_VERB_FQRN)
+						found_fqrn = 1;
+					qm_mr_next(low_p);
+					qm_mr_cci_consume_to_current(low_p);
+					qm_mr_pvb_update(low_p);
+					msg = qm_mr_current(low_p);
+				}
+				cpu_relax();
+			}
+		}
+		if (result != QM_MCR_RESULT_OK &&
+		    result !=  QM_MCR_RESULT_PENDING) {
+			/* error */
+			pr_err("qman_retire_fq failed on FQ 0x%x,"
+			       " result=0x%x\n", fqid, result);
+			return -1;
+		}
+		if (!(mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)) {
+			/* ORL had no entries, no need to wait until the
+			 * ERNs come in.
+			 */
+			orl_empty = 1;
+		}
+		/* Retirement succeeded, check to see if FQ needs
+		 * to be drained.
+		 */
+		if (drain || mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY) {
+			/* FQ is Not Empty, drain using volatile DQ commands */
+			fq_empty = 0;
+			do {
+				const struct qm_dqrr_entry *dqrr = NULL;
+				u32 vdqcr = fqid | QM_VDQCR_NUMFRAMES_SET(3);
+
+				qm_dqrr_vdqcr_set(low_p, vdqcr);
+
+				/* Wait for a dequeue to occur */
+				while (dqrr == NULL) {
+					qm_dqrr_pvb_update(low_p);
+					dqrr = qm_dqrr_current(low_p);
+					if (!dqrr)
+						cpu_relax();
+				}
+				/* Process the dequeues, making sure to
+				 * empty the ring completely.
+				 */
+				while (dqrr) {
+					if (dqrr->fqid == fqid &&
+					    dqrr->stat & QM_DQRR_STAT_FQ_EMPTY)
+						fq_empty = 1;
+					qm_dqrr_cdc_consume_1ptr(low_p,
+								 dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+			} while (fq_empty == 0);
+		}
+		qm_dqrr_sdqcr_set(low_p, 0);
+
+		/* Wait for the ORL to have been completely drained */
+		while (orl_empty == 0) {
+			const struct qm_mr_entry *msg;
+
+			qm_mr_pvb_update(low_p);
+			msg = qm_mr_current(low_p);
+			while (msg) {
+				if ((msg->verb & QM_MR_VERB_TYPE_MASK) ==
+				    QM_MR_VERB_FQRL)
+					orl_empty = 1;
+				qm_mr_next(low_p);
+				qm_mr_cci_consume_to_current(low_p);
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+			}
+			cpu_relax();
+		}
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result != QM_MCR_RESULT_OK) {
+			pr_err(
+			"OOS after drain Failed on FQID 0x%x, result 0x%x\n",
+			       fqid, mcr->result);
+			return -1;
+		}
+		return 0;
+
+	case QM_MCR_NP_STATE_RETIRED:
+		/* Send OOS Command */
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result) {
+			pr_err("OOS Failed on FQID 0x%x\n", fqid);
+			return -1;
+		}
+		return 0;
+
+	}
+	return -1;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h
new file mode 100644
index 0000000..ee78d31
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.h
@@ -0,0 +1,888 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman_priv.h"
+
+/***************************/
+/* Portal register assists */
+/***************************/
+#define QM_REG_EQCR_PI_CINH	0x3000
+#define QM_REG_EQCR_CI_CINH	0x3040
+#define QM_REG_EQCR_ITR		0x3080
+#define QM_REG_DQRR_PI_CINH	0x3100
+#define QM_REG_DQRR_CI_CINH	0x3140
+#define QM_REG_DQRR_ITR		0x3180
+#define QM_REG_DQRR_DCAP	0x31C0
+#define QM_REG_DQRR_SDQCR	0x3200
+#define QM_REG_DQRR_VDQCR	0x3240
+#define QM_REG_DQRR_PDQCR	0x3280
+#define QM_REG_MR_PI_CINH	0x3300
+#define QM_REG_MR_CI_CINH	0x3340
+#define QM_REG_MR_ITR		0x3380
+#define QM_REG_CFG		0x3500
+#define QM_REG_ISR		0x3600
+#define QM_REG_IIR              0x36C0
+#define QM_REG_ITPR		0x3740
+
+/* Cache-enabled register offsets */
+#define QM_CL_EQCR		0x0000
+#define QM_CL_DQRR		0x1000
+#define QM_CL_MR		0x2000
+#define QM_CL_EQCR_PI_CENA	0x3000
+#define QM_CL_EQCR_CI_CENA	0x3040
+#define QM_CL_DQRR_PI_CENA	0x3100
+#define QM_CL_DQRR_CI_CENA	0x3140
+#define QM_CL_MR_PI_CENA	0x3300
+#define QM_CL_MR_CI_CENA	0x3340
+#define QM_CL_CR		0x3800
+#define QM_CL_RR0		0x3900
+#define QM_CL_RR1		0x3940
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrade performance. Hence the use of the __raw_*() interfaces, which
+ * simply ensure that the compiler treats the portal registers as volatile (ie.
+ * non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __qm_in(qm, o)		be32_to_cpu(__raw_readl((qm)->ci  + (o)))
+#define __qm_out(qm, o, val)	__raw_writel((cpu_to_be32(val)), \
+					     (qm)->ci + (o))
+#define qm_in(reg)		__qm_in(&portal->addr, QM_REG_##reg)
+#define qm_out(reg, val)	__qm_out(&portal->addr, QM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __qm_cl_touch_ro(qm, o) dcbt_ro((qm)->ce + (o))
+#define __qm_cl_touch_rw(qm, o) dcbt_rw((qm)->ce + (o))
+#define __qm_cl_in(qm, o)	be32_to_cpu(__raw_readl((qm)->ce + (o)))
+#define __qm_cl_out(qm, o, val) \
+	do { \
+		u32 *__tmpclout = (qm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __qm_cl_invalidate(qm, o) dccivac((qm)->ce + (o))
+#define qm_cl_touch_ro(reg) __qm_cl_touch_ro(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_touch_rw(reg) __qm_cl_touch_rw(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_in(reg)	    __qm_cl_in(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_out(reg, val) __qm_cl_out(&portal->addr, QM_CL_##reg##_CENA, val)
+#define qm_cl_invalidate(reg)\
+	__qm_cl_invalidate(&portal->addr, QM_CL_##reg##_CENA)
+
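To make the two-layer macro scheme concrete, here is a hedged, stand-alone
sketch (not part of the patch) of how qm_out()/qm_in() compose: the inner
macro does offset arithmetic on the mapped region, the outer one pastes the
register name into an offset token. Plain memory stands in for the real
cache-inhibited mapping, and the be32 byte-swapping is omitted:

#include <stdint.h>
#include <stdio.h>

struct demo_addr { uint8_t *ce; uint8_t *ci; };

#define DEMO_REG_EQCR_ITR 0x3080
#define __demo_in(a, o)     (*(volatile uint32_t *)((a)->ci + (o)))
#define __demo_out(a, o, v) (*(volatile uint32_t *)((a)->ci + (o)) = (v))
#define demo_in(reg)        __demo_in(&addr, DEMO_REG_##reg)
#define demo_out(reg, v)    __demo_out(&addr, DEMO_REG_##reg, v)

int main(void)
{
	static uint8_t window[0x4000];	/* stand-in for the CINH mapping */
	struct demo_addr addr = { window, window };

	demo_out(EQCR_ITR, 4);		/* like qm_out(EQCR_ITR, 4) */
	printf("ITR = %u\n", demo_in(EQCR_ITR));
	return 0;
}
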
+/* Cache-enabled ring access */
+#define qm_cl(base, idx)	((void *)base + ((idx) << 6))
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grained perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 qm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
+
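A quick stand-alone sanity check of the helper (illustrative only, not part
of the patch): with an 8-entry ring, the wrapped case is counted correctly.

#include <assert.h>

typedef unsigned char u8;

static u8 qm_cyc_diff(u8 ringsize, u8 first, u8 last)
{
	if (first <= last)
		return last - first;
	return ringsize + last - first;
}

int main(void)
{
	assert(qm_cyc_diff(8, 2, 6) == 4);	/* plain: entries 2,3,4,5 */
	assert(qm_cyc_diff(8, 6, 2) == 4);	/* wrapped: entries 6,7,0,1 */
	assert(qm_cyc_diff(8, 5, 5) == 0);	/* empty ring */
	return 0;
}
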
+/* Portal modes.
+ *   Enum types:
+ *     pmode == production mode
+ *     cmode == consumption mode
+ *     dmode == h/w dequeue mode.
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate:
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ *     dc == DCA (Discrete Consumption Acknowledgment), DQRR-only
+ *   As for "enum qm_dqrr_dmode", it should be self-explanatory.
+ */
+enum qm_eqcr_pmode {		/* matches QCSP_CFG::EPM */
+	qm_eqcr_pci = 0,	/* PI index, cache-inhibited */
+	qm_eqcr_pce = 1,	/* PI index, cache-enabled */
+	qm_eqcr_pvb = 2		/* valid-bit */
+};
+
+enum qm_dqrr_dmode {		/* matches QCSP_CFG::DP */
+	qm_dqrr_dpush = 0,	/* SDQCR  + VDQCR */
+	qm_dqrr_dpull = 1	/* PDQCR */
+};
+
+enum qm_dqrr_pmode {		/* s/w-only */
+	qm_dqrr_pci,		/* reads DQRR_PI_CINH */
+	qm_dqrr_pce,		/* reads DQRR_PI_CENA */
+	qm_dqrr_pvb		/* reads valid-bit */
+};
+
+enum qm_dqrr_cmode {		/* matches QCSP_CFG::DCM */
+	qm_dqrr_cci = 0,	/* CI index, cache-inhibited */
+	qm_dqrr_cce = 1,	/* CI index, cache-enabled */
+	qm_dqrr_cdc = 2		/* Discrete Consumption Acknowledgment */
+};
+
+enum qm_mr_pmode {		/* s/w-only */
+	qm_mr_pci,		/* reads MR_PI_CINH */
+	qm_mr_pce,		/* reads MR_PI_CENA */
+	qm_mr_pvb		/* reads valid-bit */
+};
+
+enum qm_mr_cmode {		/* matches QCSP_CFG::MM */
+	qm_mr_cci = 0,		/* CI index, cache-inhibited */
+	qm_mr_cce = 1		/* CI index, cache-enabled */
+};
+
+/* ------------------------- */
+/* --- Portal structures --- */
+
+#define QM_EQCR_SIZE		8
+#define QM_DQRR_SIZE		16
+#define QM_MR_SIZE		8
+
+struct qm_eqcr {
+	struct qm_eqcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	u32 busy;
+	enum qm_eqcr_pmode pmode;
+#endif
+};
+
+struct qm_dqrr {
+	const struct qm_dqrr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum qm_dqrr_dmode dmode;
+	enum qm_dqrr_pmode pmode;
+	enum qm_dqrr_cmode cmode;
+#endif
+};
+
+struct qm_mr {
+	const struct qm_mr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum qm_mr_pmode pmode;
+	enum qm_mr_cmode cmode;
+#endif
+};
+
+struct qm_mc {
+	struct qm_mc_command *cr;
+	struct qm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum {
+		/* Can be _mc_start()ed */
+		qman_mc_idle,
+		/* Can be _mc_commit()ed or _mc_abort()ed */
+		qman_mc_user,
+		/* Can only be _mc_retry()ed */
+		qman_mc_hw
+	} state;
+#endif
+};
+
+#define QM_PORTAL_ALIGNMENT ____cacheline_aligned
+
+struct qm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct qm_portal {
+	struct qm_addr addr;
+	struct qm_eqcr eqcr;
+	struct qm_dqrr dqrr;
+	struct qm_mr mr;
+	struct qm_mc mc;
+} QM_PORTAL_ALIGNMENT;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define EQCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_EQCR_SIZE << 6)))
+
+extern dma_addr_t rte_mem_virt2phy(const void *addr);
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 EQCR_PTR2IDX(struct qm_eqcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_EQCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void EQCR_INC(struct qm_eqcr *eqcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates fast
+	 * code with essentially no branching overheads. We increment to the
+	 * next EQCR pointer and handle overflow and 'vbit'.
+	 */
+	struct qm_eqcr_entry *partial = eqcr->cursor + 1;
+
+	eqcr->cursor = EQCR_CARRYCLEAR(partial);
+	if (partial != eqcr->cursor)
+		eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+}
+
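These helpers are pure address arithmetic: entries are 64 bytes, so with an
8-entry ring the value QM_EQCR_SIZE << 6 (512) acts as the carry bit that
EQCR_CARRYCLEAR() masks off when the cursor steps past the last entry. A
hedged stand-alone model (not part of the patch; the buffer is over-aligned
so the carry bit of the base address is clear, as it is for the real
page-aligned portal mapping):

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE   8	/* QM_EQCR_SIZE */
#define ENTRY_SHIFT 6	/* 64-byte ring entries */

static uintptr_t carryclear(uintptr_t p)
{
	return p & ~(uintptr_t)(RING_SIZE << ENTRY_SHIFT);
}

static unsigned int ptr2idx(uintptr_t p)
{
	return (p >> ENTRY_SHIFT) & (RING_SIZE - 1);
}

int main(void)
{
	_Alignas(1024) static uint8_t ring[RING_SIZE * 64];
	uintptr_t cursor = (uintptr_t)ring + 7 * 64;	/* last entry */

	printf("index before: %u\n", ptr2idx(cursor));	/* prints 7 */
	cursor = carryclear(cursor + 64);		/* step and wrap */
	printf("index after:  %u\n", ptr2idx(cursor));	/* prints 0 */
	return 0;
}
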
+static inline struct qm_eqcr_entry *qm_eqcr_start_no_stash(struct qm_portal
+								 *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available)
+		return NULL;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 1;
+#endif
+
+	return eqcr->cursor;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_stash(struct qm_portal
+								*portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return NULL;
+	}
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 1;
+#endif
+	return eqcr->cursor;
+}
+
+static inline void qm_eqcr_abort(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_pend_and_next(
+					struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+	DPAA_ASSERT(eqcr->pmode != qm_eqcr_pvb);
+	if (eqcr->available == 1)
+		return NULL;
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcr->cursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	return eqcr->cursor;
+}
+
+#define EQCR_COMMIT_CHECKS(eqcr) \
+do { \
+	DPAA_ASSERT(eqcr->busy); \
+	DPAA_ASSERT(eqcr->cursor->orp == (eqcr->cursor->orp & 0x00ffffff)); \
+	DPAA_ASSERT(eqcr->cursor->fqid == (eqcr->cursor->fqid & 0x00ffffff)); \
+} while (0)
+
+static inline void qm_eqcr_pci_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pci);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	hwsync();
+	qm_out(EQCR_PI_CINH, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	qm_cl_invalidate(EQCR_PI);
+	qm_cl_touch_rw(EQCR_PI);
+}
+
+static inline void qm_eqcr_pce_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	lwsync();
+	qm_cl_out(EQCR_PI, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pvb_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eqcursor;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pvb);
+	lwsync();
+	eqcursor = eqcr->cursor;
+	eqcursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline u8 qm_eqcr_cci_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline void qm_eqcr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	qm_cl_touch_ro(EQCR_CI);
+}
+
+static inline u8 qm_eqcr_cce_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline u8 qm_eqcr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->ithresh;
+}
+
+static inline void qm_eqcr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	eqcr->ithresh = ithresh;
+	qm_out(EQCR_ITR, ithresh);
+}
+
+static inline u8 qm_eqcr_get_avail(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->available;
+}
+
+static inline u8 qm_eqcr_get_fill(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return QM_EQCR_SIZE - 1 - eqcr->available;
+}
+
+#define DQRR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_DQRR_SIZE << 6)))
+
+static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);
+}
+
+static inline const struct qm_dqrr_entry *DQRR_INC(
+						const struct qm_dqrr_entry *e)
+{
+	return DQRR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_dqrr_set_maxfill(struct qm_portal *portal, u8 mf)
+{
+	qm_out(CFG, (qm_in(CFG) & 0xff0fffff) |
+		((mf & (QM_DQRR_SIZE - 1)) << 20));
+}
+
+static inline const struct qm_dqrr_entry *qm_dqrr_current(
+						struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	if (!dqrr->fill)
+		return NULL;
+	return dqrr->cursor;
+}
+
+static inline u8 qm_dqrr_cursor(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return DQRR_PTR2IDX(dqrr->cursor);
+}
+
+static inline u8 qm_dqrr_next(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->fill);
+	dqrr->cursor = DQRR_INC(dqrr->cursor);
+	return --dqrr->fill;
+}
+
+static inline u8 qm_dqrr_pci_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pci);
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	qm_cl_invalidate(DQRR_PI);
+	qm_cl_touch_ro(DQRR_PI);
+}
+
+static inline u8 qm_dqrr_pce_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	dqrr->pi = qm_cl_in(DQRR_PI) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	const struct qm_dqrr_entry *res = qm_cl(dqrr->ring, dqrr->pi);
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
+		dqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1);
+		if (!dqrr->pi)
+			dqrr->vbit ^= QM_DQRR_VERB_VBIT;
+		dqrr->fill++;
+	}
+}
+
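A hedged software model of the valid-bit protocol this function relies on
(not part of the patch; produce() stands in for QMan writing DQRR entries,
toggling its phase each time the producer index wraps):

#include <stdint.h>
#include <stdio.h>

#define DQRR_SIZE 16
#define VBIT 0x80

static uint8_t verbs[DQRR_SIZE];
static uint8_t hw_pi, hw_vbit = VBIT;

static void produce(int n)	/* the "hardware" side */
{
	while (n--) {
		verbs[hw_pi] = hw_vbit | 0x01;	/* 0x01: dummy verb bits */
		hw_pi = (hw_pi + 1) & (DQRR_SIZE - 1);
		if (!hw_pi)
			hw_vbit ^= VBIT;
	}
}

int main(void)
{
	uint8_t pi = 0, vbit = VBIT, fill = 0;

	produce(12);
	/* Software polls exactly as qm_dqrr_pvb_update() does. */
	while ((verbs[pi] & VBIT) == vbit) {
		pi = (pi + 1) & (DQRR_SIZE - 1);
		if (!pi)
			vbit ^= VBIT;
		fill++;
	}
	produce(8);	/* wraps: the last 4 entries use the toggled phase */
	while ((verbs[pi] & VBIT) == vbit) {
		pi = (pi + 1) & (DQRR_SIZE - 1);
		if (!pi)
			vbit ^= VBIT;
		fill++;
	}
	printf("saw %u entries in total\n", fill);	/* prints 20 */
	return 0;
}
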
+static inline void qm_dqrr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_rw(DQRR_CI);
+}
+
+static inline void qm_dqrr_cce_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cdc_consume_1(struct qm_portal *portal, u8 idx,
+					 int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |	/* S */
+		((park ? 1 : 0) << 6) |	/* PK */
+		idx);			/* DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_1ptr(struct qm_portal *portal,
+					    const struct qm_dqrr_entry *dq,
+					int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 idx = DQRR_PTR2IDX(dq);
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* DQRR_DCAP::S */
+		((park ? 1 : 0) << 6) |		/* DQRR_DCAP::PK */
+		idx);				/* DQRR_DCAP::DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_n(struct qm_portal *portal, u16 bitmask)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (1 << 8) |		/* DQRR_DCAP::S */
+		((u32)bitmask << 16));		/* DQRR_DCAP::DCAP_CI */
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+}
+
+static inline u8 qm_dqrr_cdc_cci(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+}
+
+static inline void qm_dqrr_cdc_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_ro(DQRR_CI);
+}
+
+static inline u8 qm_dqrr_cdc_cce(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_cl_in(DQRR_CI) & (QM_DQRR_SIZE - 1);
+}
+
+static inline u8 qm_dqrr_get_ci(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	return dqrr->ci;
+}
+
+static inline void qm_dqrr_park(struct qm_portal *portal, u8 idx)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		(idx & (QM_DQRR_SIZE - 1)));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_park_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		DQRR_PTR2IDX(dqrr->cursor));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_sdqcr_set(struct qm_portal *portal, u32 sdqcr)
+{
+	qm_out(DQRR_SDQCR, sdqcr);
+}
+
+static inline u32 qm_dqrr_sdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_SDQCR);
+}
+
+static inline void qm_dqrr_vdqcr_set(struct qm_portal *portal, u32 vdqcr)
+{
+	qm_out(DQRR_VDQCR, vdqcr);
+}
+
+static inline u32 qm_dqrr_vdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_VDQCR);
+}
+
+static inline u8 qm_dqrr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return dqrr->ithresh;
+}
+
+static inline void qm_dqrr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(DQRR_ITR, ithresh);
+}
+
+static inline u8 qm_dqrr_get_maxfill(struct qm_portal *portal)
+{
+	return (qm_in(CFG) & 0x00f00000) >> 20;
+}
+
+/* -------------- */
+/* --- MR API --- */
+
+#define MR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_MR_SIZE << 6)))
+
+static inline u8 MR_PTR2IDX(const struct qm_mr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_MR_SIZE - 1);
+}
+
+static inline const struct qm_mr_entry *MR_INC(const struct qm_mr_entry *e)
+{
+	return MR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_mr_finish(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (mr->ci != MR_PTR2IDX(mr->cursor))
+		pr_crit("Ignoring completed MR entries\n");
+}
+
+static inline const struct qm_mr_entry *qm_mr_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (!mr->fill)
+		return NULL;
+	return mr->cursor;
+}
+
+static inline u8 qm_mr_next(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->fill);
+	mr->cursor = MR_INC(mr->cursor);
+	return --mr->fill;
+}
+
+static inline void qm_mr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = (mr->ci + num) & (QM_MR_SIZE - 1);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = MR_PTR2IDX(mr->cursor);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(MR_ITR, ithresh);
+}
+
+/* ------------------------------ */
+/* --- Management command API --- */
+static inline int qm_mc_init(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + QM_CL_CR;
+	mc->rr = portal->addr.ce + QM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			QM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? QM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_idle;
+#endif
+	return 0;
+}
+
+static inline void qm_mc_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (mc->state != qman_mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct qm_mc_command *qm_mc_start(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void qm_mc_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_hw;
+#endif
+}
+
+static inline struct qm_mc_result *qm_mc_result(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= QM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_idle;
+#endif
+	return rr;
+}
+
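Putting the state machine together, the intended calling sequence is
qm_mc_start() -> fill in the command -> qm_mc_commit() -> poll
qm_mc_result(). A hedged sketch (not part of the patch; QUERYFQ is used as
an example verb, the union field name is illustrative, and endianness
handling is elided):

/* Issue a management command against portal 'p' and busy-wait for it. */
static struct qm_mc_result *mc_query_fq_example(struct qm_portal *p, u32 fqid)
{
	struct qm_mc_command *mcc;
	struct qm_mc_result *mcr;

	mcc = qm_mc_start(p);			/* idle -> user: CR slot */
	mcc->queryfq.fqid = fqid;		/* fill in the command */
	qm_mc_commit(p, QM_MCC_VERB_QUERYFQ);	/* user -> hw: go */
	while (!(mcr = qm_mc_result(p)))	/* poll the RR verb byte */
		;
	return mcr;				/* hw -> idle: RR valid */
}
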
+/* Portal interrupt register API */
+static inline void qm_isr_set_iperiod(struct qm_portal *portal, u16 iperiod)
+{
+	qm_out(ITPR, iperiod);
+}
+
+static inline u32 __qm_isr_read(struct qm_portal *portal, enum qm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 6));
+#else
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __qm_isr_write(struct qm_portal *portal, enum qm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 6), val);
+#else
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 2), val);
+#endif
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 80dde20..90fb130 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -66,6 +66,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_qman_portal_init(uint32_t index, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct qman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -116,6 +117,14 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 	pcfg.node = NULL;
 	pcfg.irq = fd;
 
+	portal = qman_create_affine_portal(&pcfg, NULL);
+	if (!portal) {
+		pr_err("Qman portal initialisation failed (%d)\n",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	irq_map.type = dpaa_portal_qman;
 	irq_map.portal_cinh = map.addr.cinh;
 	process_portal_irq_map(fd, &irq_map);
@@ -124,10 +133,13 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 
 static int fsl_qman_portal_finish(void)
 {
+	__maybe_unused const struct qm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = qman_destroy_affine_portal();
+	DPAA_BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 784fe60..85ae13b 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1246,6 +1246,761 @@ struct qman_cgr {
 	struct list_head node;
 };
 
+/* Flags to qman_create_fq() */
+#define QMAN_FQ_FLAG_NO_ENQUEUE      0x00000001 /* can't enqueue */
+#define QMAN_FQ_FLAG_NO_MODIFY       0x00000002 /* can only enqueue */
+#define QMAN_FQ_FLAG_TO_DCPORTAL     0x00000004 /* consumed by CAAM/PME/Fman */
+#define QMAN_FQ_FLAG_LOCKED          0x00000008 /* multi-core locking */
+#define QMAN_FQ_FLAG_AS_IS           0x00000010 /* query h/w state */
+#define QMAN_FQ_FLAG_DYNAMIC_FQID    0x00000020 /* (de)allocate fqid */
+
+/* Flags to qman_destroy_fq() */
+#define QMAN_FQ_DESTROY_PARKED       0x00000001 /* FQ can be parked or OOS */
+
+/* Flags from qman_fq_state() */
+#define QMAN_FQ_STATE_CHANGING       0x80000000 /* 'state' is changing */
+#define QMAN_FQ_STATE_NE             0x40000000 /* retired FQ isn't empty */
+#define QMAN_FQ_STATE_ORL            0x20000000 /* retired FQ has ORL */
+#define QMAN_FQ_STATE_BLOCKOOS       0xe0000000 /* if any are set, no OOS */
+#define QMAN_FQ_STATE_CGR_EN         0x10000000 /* CGR enabled */
+#define QMAN_FQ_STATE_VDQCR          0x08000000 /* being volatile dequeued */
+
+/* Flags to qman_init_fq() */
+#define QMAN_INITFQ_FLAG_SCHED       0x00000001 /* schedule rather than park */
+#define QMAN_INITFQ_FLAG_LOCAL       0x00000004 /* set dest portal */
+
+/* Flags to qman_enqueue(). NB, the strange numbering is to align with hardware,
+ * bit-wise. (NB: the PME API is sensitive to these precise numberings too, so
+ * any change here should be audited in PME.)
+ */
+#define QMAN_ENQUEUE_FLAG_WATCH_CGR  0x00080000 /* watch congestion state */
+#define QMAN_ENQUEUE_FLAG_DCA        0x00008000 /* perform enqueue-DCA */
+#define QMAN_ENQUEUE_FLAG_DCA_PARK   0x00004000 /* If DCA, requests park */
+#define QMAN_ENQUEUE_FLAG_DCA_PTR(p)		/* If DCA, p is DQRR entry */ \
+		(((u32)(p) << 2) & 0x00000f00)
+#define QMAN_ENQUEUE_FLAG_C_GREEN    0x00000000 /* choose one C_*** flag */
+#define QMAN_ENQUEUE_FLAG_C_YELLOW   0x00000008
+#define QMAN_ENQUEUE_FLAG_C_RED      0x00000010
+#define QMAN_ENQUEUE_FLAG_C_OVERRIDE 0x00000018
+/* For the ORP-specific qman_enqueue_orp() variant;
+ * - this flag indicates "Not Last In Sequence", ie. all but the final fragment
+ *   of a frame.
+ */
+#define QMAN_ENQUEUE_FLAG_NLIS       0x01000000
+/* - this flag performs no enqueue but fills in an ORP sequence number that
+ *   would otherwise block it (eg. if a frame has been dropped).
+ */
+#define QMAN_ENQUEUE_FLAG_HOLE       0x02000000
+/* - this flag performs no enqueue but advances NESN to the given sequence
+ *   number.
+ */
+#define QMAN_ENQUEUE_FLAG_NESN       0x04000000
+
+/* Flags to qman_modify_cgr() */
+#define QMAN_CGR_FLAG_USE_INIT       0x00000001
+#define QMAN_CGR_MODE_FRAME          0x00000001
+
+/**
+ * qman_get_portal_index - get portal configuration index
+ */
+int qman_get_portal_index(void);
+
+/**
+ * qman_affine_channel - return the channel ID of a portal
+ * @cpu: the cpu whose affine portal is the subject of the query
+ *
+ * If @cpu is -1, the affine portal for the current CPU will be used. It is a
+ * bug to call this function for any value of @cpu (other than -1) that is not a
+ * member of the cpu mask.
+ */
+u16 qman_affine_channel(int cpu);
+
+/**
+ * qman_set_vdq - Issue a volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @num: Number of Frames requested for volatile dequeue
+ *
+ * This function will issue a volatile dequeue command to the QMAN.
+ */
+int qman_set_vdq(struct qman_fq *fq, u16 num);
+
+/**
+ * qman_dequeue - Get the DQRR entry after volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ *
+ * This function will return the DQRR entry after a volatile dequeue command
+ * is issued. It will return NULL once there are no more packets available on
+ * the DQRR.
+ */
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
+
+/**
+ * qman_dqrr_consume - Consume the DQRR entry after volatile dequeue
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @dq: DQRR entry to consume. This is the one which is provided by the
+ *    'qman_dequeue' command.
+ *
+ * This will consume the DQRR entry and make it available for the next
+ * volatile dequeue.
+ */
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq);
+
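Taken together with qman_set_vdq(), a typical drain loop looks like the
following sketch (not part of the patch; frame processing and error handling
elided):

static void vdq_drain_example(struct qman_fq *fq, u16 count)
{
	struct qm_dqrr_entry *dq;

	if (qman_set_vdq(fq, count))		/* issue the VDQCR */
		return;
	while ((dq = qman_dequeue(fq)) != NULL) {
		/* ... process dq->fd here ... */
		qman_dqrr_consume(fq, dq);	/* free the DQRR slot */
	}
}
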
+/**
+ * qman_poll_dqrr - process DQRR (fast-path) entries
+ * @limit: the maximum number of DQRR entries to process
+ *
+ * Use of this function requires that DQRR processing not be interrupt-driven.
+ * Ie. the value returned by qman_irqsource_get() should not include
+ * QM_PIRQ_DQRI. If the current CPU is sharing a portal hosted on another CPU,
+ * this function will return -EINVAL, otherwise the return value is >=0 and
+ * represents the number of DQRR entries processed.
+ */
+int qman_poll_dqrr(unsigned int limit);
+
+/**
+ * qman_poll
+ *
+ * Dispatcher logic on a cpu can use this to trigger any maintenance of the
+ * affine portal. There are two classes of portal processing in question;
+ * fast-path (which involves demuxing dequeue ring (DQRR) entries and tracking
+ * enqueue ring (EQCR) consumption), and slow-path (which involves EQCR
+ * thresholds, congestion state changes, etc). This function does whatever
+ * processing is not triggered by interrupts.
+ *
+ * Note, if DQRR and some slow-path processing are poll-driven (rather than
+ * interrupt-driven) then this function uses a heuristic to determine how often
+ * to run slow-path processing - as slow-path processing introduces at least a
+ * minimum latency each time it is run, whereas fast-path (DQRR) processing is
+ * close to zero-cost if there is no work to be done.
+ */
+void qman_poll(void);
+
+/**
+ * qman_stop_dequeues - Stop h/w dequeuing to the s/w portal
+ *
+ * Disables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_stop_dequeues(void);
+
+/**
+ * qman_start_dequeues - (Re)start h/w dequeuing to the s/w portal
+ *
+ * Enables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_start_dequeues(void);
+
+/**
+ * qman_static_dequeue_add - Add pool channels to the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Adds a set of pool channels to the portal's static dequeue command register
+ * (SDQCR). The requested pools are limited to those the portal has dequeue
+ * access to.
+ */
+void qman_static_dequeue_add(u32 pools);
+
+/**
+ * qman_static_dequeue_del - Remove pool channels from the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Removes a set of pool channels from the portal's static dequeue command
+ * register (SDQCR). The requested pools are limited to those the portal has
+ * dequeue access to.
+ */
+void qman_static_dequeue_del(u32 pools);
+
+/**
+ * qman_static_dequeue_get - return the portal's current SDQCR
+ *
+ * Returns the portal's current static dequeue command register (SDQCR). The
+ * entire register is returned, so if only the currently-enabled pool channels
+ * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
+ */
+u32 qman_static_dequeue_get(void);
+
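For example, to start servicing pool channel 2 from the calling core's
portal and verify the result (a hedged sketch, not part of the patch; the
pool number is arbitrary):

static void sdqcr_example(void)
{
	qman_static_dequeue_add(QM_SDQCR_CHANNELS_POOL(2));
	if (!(qman_static_dequeue_get() & QM_SDQCR_CHANNELS_POOL(2)))
		pr_err("pool channel 2 not present in SDQCR\n");
}
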
+/**
+ * qman_dca - Perform a Discrete Consumption Acknowledgment
+ * @dq: the DQRR entry to be consumed
+ * @park_request: indicates whether the held-active FQ should be parked
+ *
+ * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had
+ * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this
+ * does not take a 'portal' argument but implies the core affine portal from the
+ * cpu that is currently executing the function. For reasons of locking, this
+ * function must be called from the same CPU as that which processed the DQRR
+ * entry in the first place.
+ */
+void qman_dca(struct qm_dqrr_entry *dq, int park_request);
+
+/**
+ * qman_eqcr_is_empty - Determine if portal's EQCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * enqueues for the local portal have been processed by Qman but can't use the
+ * QMAN_ENQUEUE_FLAG_WAIT_SYNC flag to do this from the final qman_enqueue().
+ * The function forces tracking of EQCR consumption (which normally doesn't
+ * happen until enqueue processing needs to find space to put new enqueue
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int qman_eqcr_is_empty(void);
+
+/**
+ * qman_set_dc_ern - Set the handler for DCP enqueue rejection notifications
+ * @handler: callback for processing DCP ERNs
+ * @affine: whether this handler is specific to the locally affine portal
+ *
+ * If a hardware block's interface to Qman (ie. its direct-connect portal, or
+ * DCP) is configured not to receive enqueue rejections, then any enqueues
+ * through that DCP that are rejected will be sent to a given software portal.
+ * If @affine is non-zero, then this handler will only be used for DCP ERNs
+ * received on the portal affine to the current CPU. If multiple CPUs share a
+ * portal and they all call this function, they will be setting the handler for
+ * the same portal! If @affine is zero, then this handler will be global to all
+ * portals handled by this instance of the driver. Only those portals that do
+ * not have their own affine handler will use the global handler.
+ */
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
+
+	/* FQ management */
+	/* ------------- */
+/**
+ * qman_create_fq - Allocates a FQ
+ * @fqid: the index of the FQD to encapsulate, must be "Out of Service"
+ * @flags: bit-mask of QMAN_FQ_FLAG_*** options
+ * @fq: memory for storing the 'fq', with callbacks filled in
+ *
+ * Creates a frame queue object for the given @fqid, unless the
+ * QMAN_FQ_FLAG_DYNAMIC_FQID flag is set in @flags, in which case a FQID is
+ * dynamically allocated (or the function fails if none are available). Once
+ * created, the caller should not touch the memory at 'fq' except as extended to
+ * adjacent memory for user-defined fields (see the definition of "struct
+ * qman_fq" for more info). NO_MODIFY is only intended for enqueuing to
+ * pre-existing frame-queues that aren't to be otherwise interfered with; it
+ * prevents all other modifications to the frame queue. The TO_DCPORTAL flag
+ * causes the driver to honour any contextB modifications requested in the
+ * qm_init_fq() API, as this indicates the frame queue will be consumed by a
+ * direct-connect portal (PME, CAAM, or Fman). When frame queues are consumed by
+ * software portals, the contextB field is controlled by the driver and can't be
+ * modified by the caller. If the AS_IS flag is specified, management commands
+ * will be used on the affine portal to query state for frame queue @fqid and
+ * construct a frame queue object based on that, rather than assuming/requiring
+ * that it be Out of Service.
+ */
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
+
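As a usage sketch (not part of the patch; the callback is hypothetical and
its signature follows the qman_cb_dqrr typedef declared earlier in this
header):

static enum qman_cb_dqrr_result rx_cb(struct qman_portal *qm,
				      struct qman_fq *fq,
				      const struct qm_dqrr_entry *dq);

static struct qman_fq rx_fq;	/* must outlive the frame queue */

static int create_fq_example(void)
{
	rx_fq.cb.dqrr = rx_cb;	/* hypothetical dequeue handler */
	/* fqid 0 is ignored when DYNAMIC_FQID allocates one for us */
	return qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, &rx_fq);
}
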
+/**
+ * qman_destroy_fq - Deallocates a FQ
+ * @fq: the frame queue object to release
+ * @flags: bit-mask of QMAN_FQ_FREE_*** options
+ *
+ * The memory for this frame queue object ('fq' provided in qman_create_fq()) is
+ * not deallocated but the caller regains ownership, to do with as desired. The
+ * FQ must be in the 'out-of-service' state unless the QMAN_FQ_FREE_PARKED flag
+ * is specified, in which case it may also be in the 'parked' state.
+ */
+void qman_destroy_fq(struct qman_fq *fq, u32 flags);
+
+/**
+ * qman_fq_fqid - Queries the frame queue ID of a FQ object
+ * @fq: the frame queue object to query
+ */
+u32 qman_fq_fqid(struct qman_fq *fq);
+
+/**
+ * qman_fq_state - Queries the state of a FQ object
+ * @fq: the frame queue object to query
+ * @state: pointer to state enum to return the FQ scheduling state
+ * @flags: pointer to state flags to receive QMAN_FQ_STATE_*** bitmask
+ *
+ * Queries the state of the FQ object, without performing any h/w commands.
+ * This captures the state, as seen by the driver, at the time the function
+ * executes.
+ */
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
+
+/**
+ * qman_init_fq - Initialises FQ fields, leaves the FQ "parked" or "scheduled"
+ * @fq: the frame queue object to modify, must be 'parked' or new.
+ * @flags: bit-mask of QMAN_INITFQ_FLAG_*** options
+ * @opts: the FQ-modification settings, as defined in the low-level API
+ *
+ * The @opts parameter comes from the low-level portal API. Select
+ * QMAN_INITFQ_FLAG_SCHED in @flags to cause the frame queue to be scheduled
+ * rather than parked. NB, @opts can be NULL.
+ *
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver;
+ * 1. the 'count' and 'fqid' fields are always ignored (this operation only
+ * affects one frame queue: @fq).
+ * 2. the QM_INITFQ_WE_CONTEXTB option of the 'we_mask' field and the associated
+ * 'fqd' structure's 'context_b' field are sometimes overwritten;
+ *   - if @fq was not created with QMAN_FQ_FLAG_TO_DCPORTAL, then context_b is
+ *     initialised to a value used by the driver for demux.
+ *   - if context_b is initialised for demux, so is context_a in case stashing
+ *     is requested (see item 4).
+ * (So caller control of context_b is only possible for TO_DCPORTAL frame queue
+ * objects.)
+ * 3. if @flags contains QMAN_INITFQ_FLAG_LOCAL, the 'fqd' structure's
+ * 'dest::channel' field will be overwritten to match the portal used to issue
+ * the command. If the WE_DESTWQ write-enable bit had already been set by the
+ * caller, the channel workqueue will be left as-is, otherwise the write-enable
+ * bit is set and the workqueue is set to a default of 4. If the "LOCAL" flag
+ * isn't set, the destination channel/workqueue fields and the write-enable bit
+ * are left as-is.
+ * 4. if the driver overwrites context_a/b for demux, then if
+ * QM_INITFQ_WE_CONTEXTA is set, the driver will only overwrite
+ * context_a.address fields and will leave the stashing fields provided by the
+ * user alone, otherwise it will zero out the context_a.stashing fields.
+ */
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
+
+/**
+ * qman_schedule_fq - Schedules a FQ
+ * @fq: the frame queue object to schedule, must be 'parked'
+ *
+ * Schedules the frame queue, which must be Parked, which takes it to
+ * Tentatively-Scheduled or Truly-Scheduled depending on its fill-level.
+ */
+int qman_schedule_fq(struct qman_fq *fq);
+
+/**
+ * qman_retire_fq - Retires a FQ
+ * @fq: the frame queue object to retire
+ * @flags: FQ flags (as per qman_fq_state) if retirement completes immediately
+ *
+ * Retires the frame queue. This returns zero if it succeeds immediately, +1 if
+ * the retirement was started asynchronously, otherwise it returns negative for
+ * failure. When this function returns zero, @flags is set to indicate whether
+ * the retired FQ is empty and/or whether it has any ORL fragments (to show up
+ * as ERNs). Otherwise the corresponding flags will be known when a subsequent
+ * FQRN message shows up on the portal's message ring.
+ *
+ * NB, if the retirement is asynchronous (the FQ was in the Truly Scheduled or
+ * Active state), the completion will be via the message ring as a FQRN - but
+ * the corresponding callback may occur before this function returns!! Ie. the
+ * caller should be prepared to accept the callback as the function is called,
+ * not only once it has returned.
+ */
+int qman_retire_fq(struct qman_fq *fq, u32 *flags);
+
+/**
+ * qman_oos_fq - Puts a FQ "out of service"
+ * @fq: the frame queue object to be put out-of-service, must be 'retired'
+ *
+ * The frame queue must be retired and empty, and if any order restoration list
+ * was released as ERNs at the time of retirement, they must all be consumed.
+ */
+int qman_oos_fq(struct qman_fq *fq);
+
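Chaining the lifecycle calls gives the teardown sketch below (not part of
the patch; it bails out on asynchronous retirement, which a production
caller must instead complete via the FQRN message-ring callback):

static int fq_teardown_example(struct qman_fq *fq)
{
	u32 flags;
	int ret;

	ret = qman_retire_fq(fq, &flags);
	if (ret)
		return ret;	/* +1 == async retirement, <0 == error */
	ret = qman_oos_fq(fq);	/* requires retired and empty */
	if (ret)
		return ret;
	qman_destroy_fq(fq, 0);	/* caller regains ownership of *fq */
	return 0;
}
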
+/**
+ * qman_fq_flow_control - Set the XON/XOFF state of a FQ
+ * @fq: the frame queue object to be set to XON/XOFF state; must not be in
+ * the 'oos', 'retired' or 'parked' state
+ * @xon: boolean to set fq in XON or XOFF state
+ *
+ * The frame queue should be in the Tentatively Scheduled or Truly Scheduled
+ * state, otherwise the IFSI interrupt will be asserted.
+ */
+int qman_fq_flow_control(struct qman_fq *fq, int xon);
+
+/**
+ * qman_query_fq - Queries FQD fields (via h/w query command)
+ * @fq: the frame queue object to be queried
+ * @fqd: storage for the queried FQD fields
+ */
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd);
+
+/**
+ * qman_query_fq_has_pkts - Queries non-programmable FQD fields and returns
+ * '1' if there are packets in the frame queue, or '0' if the frame queue is
+ * empty.
+ * @fq: the frame queue object to be queried
+ */
+int qman_query_fq_has_pkts(struct qman_fq *fq);
+
+/**
+ * qman_query_fq_np - Queries non-programmable FQD fields
+ * @fq: the frame queue object to be queried
+ * @np: storage for the queried FQD fields
+ */
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
+
+/**
+ * qman_query_wq - Queries work queue lengths
+ * @query_dedicated: If non-zero, query the length of WQs in the channel
+ *		dedicated to this software portal. Otherwise, query the length
+ *		of WQs in the channel specified in @wq.
+ * @wq: storage for the queried WQ lengths. Also specifies the channel to
+ *	query if @query_dedicated is zero.
+ */
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
+
+/**
+ * qman_volatile_dequeue - Issue a volatile dequeue command
+ * @fq: the frame queue object to dequeue from
+ * @flags: a bit-mask of QMAN_VOLATILE_FLAG_*** options
+ * @vdqcr: bit mask of QM_VDQCR_*** options, as per qm_dqrr_vdqcr_set()
+ *
+ * Attempts to lock access to the portal's VDQCR volatile dequeue functionality.
+ * The function will block and sleep if QMAN_VOLATILE_FLAG_WAIT is specified and
+ * the VDQCR is already in use, otherwise returns non-zero for failure. If
+ * QMAN_VOLATILE_FLAG_FINISH is specified, the function will only return once
+ * the VDQCR command has finished executing (ie. once the callback for the last
+ * DQRR entry resulting from the VDQCR command has been called). If not using
+ * the FINISH flag, completion can be determined either by detecting the
+ * presence of the QM_DQRR_STAT_UNSCHEDULED and QM_DQRR_STAT_DQCR_EXPIRED bits
+ * in the "stat" field of the "struct qm_dqrr_entry" passed to the FQ's dequeue
+ * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
+ * "flags" retrieved from qman_fq_state().
+ */
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
+
+/**
+ * qman_enqueue - Enqueue a frame to a frame queue
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ *
+ * Fills an entry in the EQCR of the calling cpu's affine portal to enqueue
+ * the frame described by @fd. The descriptor details are copied from @fd to
+ * the EQCR entry; the 'pid'
+ * field is ignored. The return value is non-zero on error, such as ring full
+ * (and FLAG_WAIT not specified), congestion avoidance (FLAG_WATCH_CGR
+ * specified), etc. If the ring is full and FLAG_WAIT is specified, this
+ * function will block. If FLAG_INTERRUPT is set, the EQCI bit of the portal
+ * interrupt will assert when Qman consumes the EQCR entry (subject to "status
+ * disable", "enable", and "inhibit" registers). If FLAG_DCA is set, Qman will
+ * perform an implied "discrete consumption acknowledgment" on the dequeue
+ * ring's (DQRR) entry, at the ring index specified by the FLAG_DCA_IDX(x)
+ * macro. (As an alternative to issuing explicit DCA actions on DQRR entries,
+ * this implicit DCA can delay the release of a "held active" frame queue
+ * corresponding to a DQRR entry until Qman consumes the EQCR entry - providing
+ * order-preservation semantics in packet-forwarding scenarios.) If FLAG_DCA is
+ * set, then FLAG_DCA_PARK can also be set to imply that the DQRR consumption
+ * acknowledgment should "park request" the "held active" frame queue. Ie.
+ * when the portal eventually releases that frame queue, it will be left in the
+ * Parked state rather than Tentatively Scheduled or Truly Scheduled. If the
+ * portal is watching congestion groups, the QMAN_ENQUEUE_FLAG_WATCH_CGR flag
+ * is requested, and the FQ is a member of a congestion group, then this
+ * function returns -EAGAIN if the congestion group is currently congested.
+ * Note, this does not eliminate ERNs, as the async interface means we can be
+ * sending enqueue commands to an un-congested FQ that becomes congested before
+ * the enqueue commands are processed, but it does minimise needless thrashing
+ * of an already busy hardware resource by throttling many of the to-be-dropped
+ * enqueues "at the source".
+ */
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
+
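A minimal transmit path built on this contract might look like the sketch
below (not part of the patch; the retry-on-congestion policy is
illustrative, not prescribed by the API):

static int tx_example(struct qman_fq *fq, const struct qm_fd *fd)
{
	int ret;

	do {
		ret = qman_enqueue(fq, fd, QMAN_ENQUEUE_FLAG_WATCH_CGR);
	} while (ret == -EAGAIN);	/* CGR congested: retry until clear */
	return ret;			/* 0 on success */
}
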
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send);
+
+typedef int (*qman_cb_precommit) (void *arg);
+
+/**
+ * qman_enqueue_orp - Enqueue a frame to a frame queue using an ORP
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ * @orp: the frame queue object used as an order restoration point.
+ * @orp_seqnum: the sequence number of this frame in the order restoration path
+ *
+ * Similar to qman_enqueue(), but with the addition of an Order Restoration
+ * Point (@orp) and corresponding sequence number (@orp_seqnum) for this
+ * enqueue operation to employ order restoration. Each frame queue object acts
+ * as an Order Definition Point (ODP) by providing each frame dequeued from it
+ * with an incrementing sequence number, this value is generally ignored unless
+ * that sequence of dequeued frames will need order restoration later. Each
+ * frame queue object also encapsulates an Order Restoration Point (ORP), which
+ * is a re-assembly context for re-ordering frames relative to their sequence
+ * numbers as they are enqueued. The ORP does not have to be within the frame
+ * queue that receives the enqueued frame, in fact it is usually the frame
+ * queue from which the frames were originally dequeued. For the purposes of
+ * order restoration, multiple frames (or "fragments") can be enqueued for a
+ * single sequence number by setting the QMAN_ENQUEUE_FLAG_NLIS flag for all
+ * enqueues except the final fragment of a given sequence number. Ordering
+ * between sequence numbers is guaranteed, even if fragments of different
+ * sequence numbers are interlaced with one another. Fragments of the same
+ * sequence number will retain the order in which they are enqueued. If no
+ * enqueue is to be performed, QMAN_ENQUEUE_FLAG_HOLE indicates that the given
+ * sequence number is to be "skipped" by the ORP logic (eg. if a frame has been
+ * dropped from a sequence), or QMAN_ENQUEUE_FLAG_NESN indicates that the given
+ * sequence number should become the ORP's "Next Expected Sequence Number".
+ *
+ * Side note: a frame queue object can be used purely as an ORP, without
+ * carrying any frames at all. Care should be taken not to deallocate a frame
+ * queue object that is being actively used as an ORP, as a future allocation
+ * of the frame queue object may start using the internal ORP before the
+ * previous use has finished.
+ */
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum);
+
+/**
+ * qman_alloc_fqid_range - Allocate a contiguous range of FQIDs
+ * @result: is set by the API to the base FQID of the allocated range
+ * @count: the number of FQIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count FQIDs
+ *
+ * Returns the number of frame queues allocated, or a negative error code. If
+ * @partial is non zero, the allocation request may return a smaller range of
+ * FQs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_fqid(u32 *result)
+{
+	int ret = qman_alloc_fqid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_fqid_range - Release the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to deallocate
+ * @count: the number of FQIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of FQIDs
+ * that it can subsequently allocate from.
+ */
+void qman_release_fqid_range(u32 fqid, unsigned int count);
+static inline void qman_release_fqid(u32 fqid)
+{
+	qman_release_fqid_range(fqid, 1);
+}
+
+void qman_seed_fqid_range(u32 fqid, unsigned int count);
+
+int qman_shutdown_fq(u32 fqid);
+
+/**
+ * qman_reserve_fqid_range - Reserve the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to reserve
+ * @count: the number of FQIDs in the range
+ */
+int qman_reserve_fqid_range(u32 fqid, unsigned int count);
+static inline int qman_reserve_fqid(u32 fqid)
+{
+	return qman_reserve_fqid_range(fqid, 1);
+}
+
+/* Pool-channel management */
+/**
+ * qman_alloc_pool_range - Allocate a contiguous range of pool-channel IDs
+ * @result: is set by the API to the base pool-channel ID of the allocated range
+ * @count: the number of pool-channel IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of pool-channel IDs allocated, or a negative error code.
+ * If @partial is non zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_pool(u32 *result)
+{
+	int ret = qman_alloc_pool_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_pool_range - Release the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to deallocate
+ * @count: the number of pool-channel IDs in the range
+ */
+void qman_release_pool_range(u32 id, unsigned int count);
+static inline void qman_release_pool(u32 id)
+{
+	qman_release_pool_range(id, 1);
+}
+
+/**
+ * qman_reserve_pool_range - Reserve the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to reserve
+ * @count: the number of pool-channel IDs in the range
+ */
+int qman_reserve_pool_range(u32 id, unsigned int count);
+static inline int qman_reserve_pool(u32 id)
+{
+	return qman_reserve_pool_range(id, 1);
+}
+
+void qman_seed_pool_range(u32 id, unsigned int count);
+
+	/* CGR management */
+	/* -------------- */
+/**
+ * qman_create_cgr - Register a congestion group object
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: optional state of CGR settings
+ *
+ * Registers this object to receive congestion entry/exit callbacks on the
+ * portal affine to the cpu on which this API is executed. If opts is
+ * NULL then only the callback (cgr->cb) function is registered. If @flags
+ * contains QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset
+ * any unspecified parameters) will be used rather than a modify hw command
+ * (which only modifies the specified parameters).
+ */
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_create_cgr_to_dcp - Register a congestion group object to DCP portal
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @dcp_portal: the DCP portal to which the cgr object is registered.
+ * @opts: optional state of CGR settings
+ *
+ */
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_delete_cgr - Deregisters a congestion group object
+ * @cgr: the 'cgr' object to deregister
+ *
+ * "Unplugs" this CGR object from the portal affine to the cpu on which this API
+ * is executed. This must be executed on the same affine portal on which it was
+ * created.
+ */
+int qman_delete_cgr(struct qman_cgr *cgr);
+
+/**
+ * qman_modify_cgr - Modify CGR fields
+ * @cgr: the 'cgr' object to modify
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: the CGR-modification settings
+ *
+ * The @opts parameter comes from the low-level portal API, and can be NULL.
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver, in particular the 'cgrid' field is ignored (this operation
+ * only affects the given CGR object). If @flags contains
+ * QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset any
+ * unspecified parameters) will be used rather than a modify hw command (which
+ * only modifies the specified parameters).
+ */
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_query_cgr - Queries CGR fields
+ * @cgr: the 'cgr' object to query
+ * @result: storage for the queried congestion group record
+ */
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *result);
+
+/**
+ * qman_query_congestion - Queries the state of all congestion groups
+ * @congestion: storage for the queried state of all congestion groups
+ */
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
+
+/**
+ * qman_alloc_cgrid_range - Allocate a contiguous range of CGR IDs
+ * @result: is set by the API to the base CGR ID of the allocated range
+ * @count: the number of CGR IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of CGR IDs allocated, or a negative error code.
+ * If @partial is non zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_cgrid(u32 *result)
+{
+	int ret = qman_alloc_cgrid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_cgrid_range - Release the specified range of CGR IDs
+ * @id: the base CGR ID of the range to deallocate
+ * @count: the number of CGR IDs in the range
+ */
+void qman_release_cgrid_range(u32 id, unsigned int count);
+static inline void qman_release_cgrid(u32 id)
+{
+	qman_release_cgrid_range(id, 1);
+}
+
+/**
+ * qman_reserve_cgrid_range - Reserve the specified range of CGR IDs
+ * @id: the base CGR ID of the range to reserve
+ * @count: the number of CGR IDs in the range
+ */
+int qman_reserve_cgrid_range(u32 id, unsigned int count);
+static inline int qman_reserve_cgrid(u32 id)
+{
+	return qman_reserve_cgrid_range(id, 1);
+}
+
+void qman_seed_cgrid_range(u32 id, unsigned int count);
+
+	/* Helpers */
+	/* ------- */
+/**
+ * qman_poll_fq_for_init - Check if an FQ has been initialised from OOS
+ * @fqid: the FQID that will be initialised by other s/w
+ *
+ * In many situations, a FQID is provided for communication between s/w
+ * entities, and whilst the consumer is responsible for initialising and
+ * scheduling the FQ, the producer(s) generally create a wrapper FQ object
+ * using QMAN_FQ_FLAG_NO_MODIFY and only call qman_enqueue() (no FQ
+ * initialisation, scheduling, etc). Ie:
+ *     qman_create_fq(..., QMAN_FQ_FLAG_NO_MODIFY, ...);
+ * However, data can not be enqueued to the FQ until it is initialised out of
+ * the OOS state - this function polls for that condition. It is particularly
+ * useful for users of IPC functions - each endpoint's Rx FQ is the other
+ * endpoint's Tx FQ, so each side can initialise and schedule their Rx FQ object
+ * and then use this API on the (NO_MODIFY) Tx FQ object in order to
+ * synchronise. The function returns zero for success, +1 if the FQ is still in
+ * the OOS state, or negative if there was an error.
+ */
+static inline int qman_poll_fq_for_init(struct qman_fq *fq)
+{
+	struct qm_mcr_queryfq_np np;
+	int err;
+
+	err = qman_query_fq_np(fq, &np);
+	if (err)
+		return err;
+	if ((np.state & QM_MCR_NP_STATE_MASK) == QM_MCR_NP_STATE_OOS)
+		return 1;
+	return 0;
+}
+
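For instance, the producer side of an IPC channel can gate its first
enqueue on the peer's initialisation like this (sketch, not part of the
patch; the poll interval is arbitrary and usleep() comes from <unistd.h>):

static int wait_for_peer_fq(struct qman_fq *tx_fq)
{
	int ret;

	while ((ret = qman_poll_fq_for_init(tx_fq)) == 1)
		usleep(1000);	/* still OOS: poll again shortly */
	return ret;		/* 0 == ready, negative == error */
}
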
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define cpu_to_hw_sg(x) (x)
+#define hw_sg_to_cpu(x) (x)
+#else
+#define cpu_to_hw_sg(x)  __cpu_to_hw_sg(x)
+#define hw_sg_to_cpu(x)  __hw_sg_to_cpu(x)
+
+static inline void __cpu_to_hw_sg(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = cpu_to_be64(sgentry->opaque);
+	sgentry->val = cpu_to_be32(sgentry->val);
+	sgentry->val_off = cpu_to_be16(sgentry->val_off);
+}
+
+static inline void __hw_sg_to_cpu(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = be64_to_cpu(sgentry->opaque);
+	sgentry->val = be32_to_cpu(sgentry->val);
+	sgentry->val_off = be16_to_cpu(sgentry->val_off);
+}
+#endif
 
 #ifdef __cplusplus
 }
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index b0d953f..a4897b0 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -42,6 +42,7 @@
 #define __FSL_USD_H
 
 #include <compat.h>
+#include <fsl_qman.h>
 
 #ifdef __cplusplus
 extern "C" {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 12/40] bus/dpaa: add BMAN driver core
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (10 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 13/40] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
                       ` (28 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Buffer Manager (BMan) is a hardware buffer pool management block that
allows software and accelerators on the datapath to acquire and release
buffers in order to build frames.

This patch adds the core routines.
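
As a rough usage sketch (illustrative only; exact flags and error handling
depend on the application), a pool created over these interfaces is driven
through acquire/release pairs:

    struct bman_pool_params params = {
        .flags = BMAN_POOL_FLAG_DYNAMIC_BPID,
    };
    struct bman_pool *pool = bman_new_pool(&params);
    struct bm_buffer buf;

    if (pool && bman_acquire(pool, &buf, 1, 0) == 1) {
        /* ... use the buffer ... */
        bman_release(pool, &buf, 1, 0);
    }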

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman_driver.c | 311 +++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_priv.h   | 125 ++++++++++
 drivers/bus/dpaa/include/fsl_bman.h       | 375 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   5 +
 5 files changed, 817 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index ba87386..2d626b2 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -70,6 +70,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
 	base/qbman/dpaa_alloc.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
new file mode 100644
index 0000000..fb3c50e
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -0,0 +1,311 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_branch_prediction.h>
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "bman_priv.h"
+#include <sys/ioctl.h>
+
+/*
+ * Global variables for the max portal/pool numbers supported by this
+ * BMan version
+ */
+u16 bman_ip_rev;
+u16 bman_pool_max;
+void *bman_ccsr_map;
+
+/*****************/
+/* Portal driver */
+/*****************/
+
+static __thread int fd = -1;
+static __thread struct bm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_bman
+};
+
+static int fsl_bman_portal_init(uint32_t idx, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!");
+		return -EINVAL;
+	}
+	/* Allocate and map a bman portal */
+	map.index = idx;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+	pcfg.is_shared = is_shared;
+	pcfg.index = map.index;
+	bman_depletion_fill(&pcfg.mask);
+
+	fd = open(BMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("BMan irq init failed");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+	/* Use the IRQ FD as a unique IRQ number */
+	pcfg.irq = fd;
+
+	/* Set the IRQ number */
+	irq_map.type = dpaa_portal_bman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_bman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int bman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_bman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int bman_thread_finish(void)
+{
+	return fsl_bman_portal_finish();
+}
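+
+/*
+ * Illustrative sketch (not part of the API): fsl_bman_portal_init() requires
+ * the calling thread to be affine to exactly one CPU, so a caller would
+ * typically pin itself before initialising ('lcore' is hypothetical):
+ *
+ *	cpu_set_t set;
+ *
+ *	CPU_ZERO(&set);
+ *	CPU_SET(lcore, &set);
+ *	if (!pthread_setaffinity_np(pthread_self(), sizeof(set), &set) &&
+ *	    !bman_thread_init()) {
+ *		... per-thread BMan work ...
+ *		bman_thread_finish();
+ *	}
+ */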
+
+void bman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int bman_init_ccsr(const struct device_node *node)
+{
+	static int ccsr_map_fd;
+	uint64_t phys_addr;
+	const uint32_t *bman_addr;
+	uint64_t regs_size;
+
+	bman_addr = of_get_address(node, 0, &regs_size, NULL);
+	if (!bman_addr) {
+		pr_err("of_get_address cannot return BMan address");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(node, bman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open(BMAN_CCSR_MAP, O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Can not open /dev/mem for BMan CCSR map");
+		return ccsr_map_fd;
+	}
+
+	bman_ccsr_map = mmap(NULL, regs_size, PROT_READ |
+			     PROT_WRITE, MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (bman_ccsr_map == MAP_FAILED) {
+		pr_err("Cannot map BMan CCSR base: "
+		       "0x%x Phys: 0x%lx size 0x%lx",
+		       *bman_addr, phys_addr, regs_size);
+		close(ccsr_map_fd);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int bman_global_init(void)
+{
+	const struct device_node *dt_node;
+	static int done;
+
+	if (done)
+		return -EBUSY;
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,bman-portal");
+	if (!dt_node) {
+		pr_err("No bman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,bman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,bman-portal-1.0.0")) {
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.0.8")) {
+		bman_ip_rev = BMAN_REV20;
+		bman_pool_max = 8;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.3")) {
+		bman_ip_rev = BMAN_REV21;
+		bman_pool_max = 64;
+	} else {
+		pr_warn("unknown BMan version in portal node,default "
+			"to rev1.0");
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	}
+
+	if (!bman_ip_rev) {
+		pr_err("Unknown bman portal version\n");
+		return -ENODEV;
+	}
+	{
+		const struct device_node *dn = of_find_compatible_node(NULL,
+							NULL, "fsl,bman");
+		if (!dn)
+			pr_err("No bman device node available");
+		else if (bman_init_ccsr(dn))
+			pr_err("BMan CCSR map failed.");
+	}
+
+	done = 1;
+	return 0;
+}
+
+#define BMAN_POOL_CONTENT(n) (0x0600 + ((n) * 0x04))
+u32 bm_pool_free_buffers(u32 bpid)
+{
+	return in_be32(bman_ccsr_map + BMAN_POOL_CONTENT(bpid));
+}
+
+static u32 __generate_thresh(u32 val, int roundup)
+{
+	u32 e = 0;      /* exponent; 'val' is reduced to the coefficient */
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	DPAA_ASSERT(e < 0x10);
+	return (val | (e << 8));
+}
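+
+/*
+ * Worked example (illustrative): __generate_thresh() packs a threshold into
+ * an 8-bit coefficient and 4-bit exponent, i.e. roughly val ~= coeff << exp.
+ * For val = 4660 (0x1234): rounding down gives 145 << 5 = 4640, encoded as
+ * 0x591; rounding up gives 146 << 5 = 4672, encoded as 0x592.
+ */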
+
+#define POOL_SWDET(n)       (0x0000 + ((n) * 0x04))
+#define POOL_HWDET(n)       (0x0100 + ((n) * 0x04))
+#define POOL_SWDXT(n)       (0x0200 + ((n) * 0x04))
+#define POOL_HWDXT(n)       (0x0300 + ((n) * 0x04))
+int bm_pool_set(u32 bpid, const u32 *thresholds)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	out_be32(bman_ccsr_map + POOL_SWDET(bpid),
+		 __generate_thresh(thresholds[0], 0));
+	out_be32(bman_ccsr_map + POOL_SWDXT(bpid),
+		 __generate_thresh(thresholds[1], 1));
+	out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+		 __generate_thresh(thresholds[2], 0));
+	out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+		 __generate_thresh(thresholds[3], 1));
+	return 0;
+}
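+
+/*
+ * Illustrative sketch (not part of the API): the thresholds array follows
+ * the register-write order above, i.e. {SWDET, SWDXT, HWDET, HWDXT}:
+ *
+ *	const u32 thresholds[4] = {64, 128, 32, 64};
+ *
+ *	if (bm_pool_set(bpid, thresholds))
+ *		... handle error ...
+ */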
+
+#define BMAN_LOW_DEFAULT_THRESH		0x40
+#define BMAN_HIGH_DEFAULT_THRESH	0x80
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	if (low_thresh && high_thresh) {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(low_thresh, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(high_thresh, 1));
+	} else {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(BMAN_LOW_DEFAULT_THRESH, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(BMAN_HIGH_DEFAULT_THRESH, 1));
+	}
+	return 0;
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman_priv.h b/drivers/bus/dpaa/base/qbman/bman_priv.h
new file mode 100644
index 0000000..07d9cec
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_priv.h
@@ -0,0 +1,125 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_PRIV_H
+#define __BMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_bman.h>
+
+/* Revision info (for errata and feature handling) */
+#define BMAN_REV10 0x0100
+#define BMAN_REV20 0x0200
+#define BMAN_REV21 0x0201
+
+#define BMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+#define BMAN_CCSR_MAP "/dev/mem"
+
+/* This mask contains all the "irqsource" bits visible to API users */
+#define BM_PIRQ_VISIBLE	(BM_PIRQ_RCRI | BM_PIRQ_BSCN)
+
+/* These are bm_<reg>_<verb>(). So for example, bm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define bm_isr_status_read(bm)		__bm_isr_read(bm, bm_isr_status)
+#define bm_isr_status_clear(bm, m)	__bm_isr_write(bm, bm_isr_status, m)
+#define bm_isr_enable_read(bm)		__bm_isr_read(bm, bm_isr_enable)
+#define bm_isr_enable_write(bm, v)	__bm_isr_write(bm, bm_isr_enable, v)
+#define bm_isr_disable_read(bm)		__bm_isr_read(bm, bm_isr_disable)
+#define bm_isr_disable_write(bm, v)	__bm_isr_write(bm, bm_isr_disable, v)
+#define bm_isr_inhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 1)
+#define bm_isr_uninhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 0)
+
+/*
+ * Global variables for the max portal/pool numbers supported by this
+ * BMan version
+ */
+extern u16 bman_pool_max;
+
+/* used by CCSR and portal interrupt code */
+enum bm_isr_reg {
+	bm_isr_status = 0,
+	bm_isr_enable = 1,
+	bm_isr_disable = 2,
+	bm_isr_inhibit = 3
+};
+
+struct bm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* This is used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.)
+	 */
+	int is_shared;
+	/* These are the buffer pool IDs that may be used via this portal. */
+	struct bman_depletion mask;
+};
+
+int bman_init_ccsr(const struct device_node *node);
+
+struct bman_portal *bman_create_affine_portal(
+			const struct bm_portal_config *config);
+const struct bm_portal_config *bman_destroy_affine_portal(void);
+
+/* Set depletion thresholds associated with a buffer pool. Requires that the
+ * operating system have access to Bman CCSR (ie. compiled in support and
+ * run-time access courtesy of the device-tree).
+ */
+int bm_pool_set(u32 bpid, const u32 *thresholds);
+
+/* Read the free buffer count for a given buffer pool */
+u32 bm_pool_free_buffers(u32 bpid);
+
+#endif /* __BMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
new file mode 100644
index 0000000..383106b
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -0,0 +1,375 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_BMAN_H
+#define __FSL_BMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* This wrapper represents a bit-array for the depletion state of the 64 Bman
+ * buffer pools.
+ */
+struct bman_depletion {
+	u32 state[2];
+};
+
+static inline void bman_depletion_init(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = 0;
+}
+
+static inline void bman_depletion_fill(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = ~0;
+}
+
+/* --- Bman data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct bm_rcr_entry;	/* RCR (Release Command Ring) entries */
+struct bm_mc_command;	/* MC (Management Command) command */
+struct bm_mc_result;	/* MC result */
+
+/* Code-reduction, define a wrapper for 48-bit buffers. In cases where a buffer
+ * pool id specific to this buffer is needed (BM_RCR_VERB_CMD_BPID_MULTI,
+ * BM_MCC_VERB_ACQUIRE), the 'bpid' field is used.
+ */
+struct bm_buffer {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1;
+			u8 bpid;
+			u16 hi; /* High 16-bits of 48-bit address */
+			u32 lo; /* Low 32-bits of 48-bit address */
+#else
+			u32 lo;
+			u16 hi;
+			u8 bpid;
+			u8 __reserved;
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:16;
+			u64 addr:48;
+#else
+			u64 addr:48;
+			u64 __notaddress:16;
+#endif
+		};
+		u64 opaque;
+	};
+} __attribute__((aligned(8)));
+static inline u64 bm_buffer_get64(const struct bm_buffer *buf)
+{
+	return buf->addr;
+}
+
+static inline dma_addr_t bm_buf_addr(const struct bm_buffer *buf)
+{
+	return (dma_addr_t)buf->addr;
+}
+
+#define bm_buffer_set64(buf, v) \
+	do { \
+		struct bm_buffer *__buf931 = (buf); \
+		__buf931->hi = upper_32_bits(v); \
+		__buf931->lo = lower_32_bits(v); \
+	} while (0)
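+
+/*
+ * Illustrative sketch (not part of the API): stamping a 48-bit DMA address
+ * into a buffer descriptor and reading it back ('phys' is hypothetical):
+ *
+ *	struct bm_buffer buf;
+ *
+ *	bm_buffer_set64(&buf, phys);
+ *	DPAA_ASSERT(bm_buf_addr(&buf) == (phys & 0xffffffffffffULL));
+ */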
+
+/* See 1.5.3.5.4: "Release Command" */
+struct bm_rcr_entry {
+	union {
+		struct {
+			u8 __dont_write_directly__verb;
+			u8 bpid; /* used with BM_RCR_VERB_CMD_BPID_SINGLE */
+			u8 __reserved1[62];
+		};
+		struct bm_buffer bufs[8];
+	};
+} __packed;
+#define BM_RCR_VERB_VBIT		0x80
+#define BM_RCR_VERB_CMD_MASK		0x70	/* one of two values; */
+#define BM_RCR_VERB_CMD_BPID_SINGLE	0x20
+#define BM_RCR_VERB_CMD_BPID_MULTI	0x30
+#define BM_RCR_VERB_BUFCOUNT_MASK	0x0f	/* values 1..8 */
+
+/* See 1.5.3.1: "Acquire Command" */
+/* See 1.5.3.2: "Query Command" */
+struct bm_mcc_acquire {
+	u8 bpid;
+	u8 __reserved1[62];
+} __packed;
+struct bm_mcc_query {
+	u8 __reserved2[63];
+} __packed;
+struct bm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct bm_mcc_acquire acquire;
+		struct bm_mcc_query query;
+	};
+} __packed;
+#define BM_MCC_VERB_VBIT		0x80
+#define BM_MCC_VERB_CMD_MASK		0x70	/* where the verb contains; */
+#define BM_MCC_VERB_CMD_ACQUIRE		0x10
+#define BM_MCC_VERB_CMD_QUERY		0x40
+#define BM_MCC_VERB_ACQUIRE_BUFCOUNT	0x0f	/* values 1..8 go here */
+
+/* See 1.5.3.3: "Acquire Response" */
+/* See 1.5.3.4: "Query Response" */
+struct bm_pool_state {
+	u8 __reserved1[32];
+	/* "availability state" and "depletion state" */
+	struct {
+		u8 __reserved1[8];
+		/* Access using bman_depletion_***() */
+		struct bman_depletion state;
+	} as, ds;
+};
+
+struct bm_mc_result {
+	union {
+		struct {
+			u8 verb;
+			u8 __reserved1[63];
+		};
+		union {
+			struct {
+				u8 __reserved1;
+				u8 bpid;
+				u8 __reserved2[62];
+			};
+			struct bm_buffer bufs[8];
+		} acquire;
+		struct bm_pool_state query;
+	};
+} __packed;
+#define BM_MCR_VERB_VBIT		0x80
+#define BM_MCR_VERB_CMD_MASK		BM_MCC_VERB_CMD_MASK
+#define BM_MCR_VERB_CMD_ACQUIRE		BM_MCC_VERB_CMD_ACQUIRE
+#define BM_MCR_VERB_CMD_QUERY		BM_MCC_VERB_CMD_QUERY
+#define BM_MCR_VERB_CMD_ERR_INVALID	0x60
+#define BM_MCR_VERB_CMD_ERR_ECC		0x70
+#define BM_MCR_VERB_ACQUIRE_BUFCOUNT	BM_MCC_VERB_ACQUIRE_BUFCOUNT /* 0..8 */
+
+/* Portal and Buffer Pools */
+/* Represents a managed portal */
+struct bman_portal;
+
+/* This object type represents Bman buffer pools. */
+struct bman_pool;
+
+/* This struct specifies parameters for a bman_pool object. */
+struct bman_pool_params {
+	/* index of the buffer pool to encapsulate (0-63), ignored if
+	 * BMAN_POOL_FLAG_DYNAMIC_BPID is set.
+	 */
+	u32 bpid;
+	/* bit-mask of BMAN_POOL_FLAG_*** options */
+	u32 flags;
+	/* depletion-entry/exit thresholds, if BMAN_POOL_FLAG_THRESH is set. NB:
+	 * this is only allowed if BMAN_POOL_FLAG_DYNAMIC_BPID is used *and*
+	 * when run in the control plane (which controls Bman CCSR). This array
+	 * matches the definition of bm_pool_set().
+	 */
+	u32 thresholds[4];
+};
+
+/* Flags to bman_new_pool() */
+#define BMAN_POOL_FLAG_NO_RELEASE    0x00000001 /* can't release to pool */
+#define BMAN_POOL_FLAG_ONLY_RELEASE  0x00000002 /* can only release to pool */
+#define BMAN_POOL_FLAG_DYNAMIC_BPID  0x00000008 /* (de)allocate bpid */
+#define BMAN_POOL_FLAG_THRESH        0x00000010 /* set depletion thresholds */
+
+/* Flags to bman_release() */
+#define BMAN_RELEASE_FLAG_NOW        0x00000008 /* issue immediate release */
+
+
+/**
+ * bman_get_portal_index - get portal configuration index
+ */
+int bman_get_portal_index(void);
+
+/**
+ * bman_rcr_is_empty - Determine if portal's RCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * releases for the local portal have been processed by Bman but can't use the
+ * BMAN_RELEASE_FLAG_WAIT_SYNC flag to do this from the final bman_release().
+ * The function forces tracking of RCR consumption (which normally doesn't
+ * happen until release processing needs to find space to put new release
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int bman_rcr_is_empty(void);
+
+/**
+ * bman_alloc_bpid_range - Allocate a contiguous range of BPIDs
+ * @result: is set by the API to the base BPID of the allocated range
+ * @count: the number of BPIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count BPIDs
+ *
+ * Returns the number of BPIDs allocated, or a negative error code. If
+ * @partial is non-zero, the allocation request may return a smaller range of
+ * BPIDs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int bman_alloc_bpid(u32 *result)
+{
+	int ret = bman_alloc_bpid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * bman_release_bpid_range - Release the specified range of buffer pool IDs
+ * @bpid: the base BPID of the range to deallocate
+ * @count: the number of BPIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of BPIDs
+ * that it can subsequently allocate from.
+ */
+void bman_release_bpid_range(u32 bpid, unsigned int count);
+static inline void bman_release_bpid(u32 bpid)
+{
+	bman_release_bpid_range(bpid, 1);
+}
+
+int bman_reserve_bpid_range(u32 bpid, unsigned int count);
+static inline int bman_reserve_bpid(u32 bpid)
+{
+	return bman_reserve_bpid_range(bpid, 1);
+}
+
+void bman_seed_bpid_range(u32 bpid, unsigned int count);
+
+int bman_shutdown_pool(u32 bpid);
+
+/**
+ * bman_new_pool - Allocates a Buffer Pool object
+ * @params: parameters specifying the buffer pool ID and behaviour
+ *
+ * Creates a pool object for the given @params. NB, the fields from @params
+ * are copied into the new pool object, so the structure provided by the
+ * caller can be released or reused after the function returns.
+ */
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
+
+/**
+ * bman_free_pool - Deallocates a Buffer Pool object
+ * @pool: the pool object to release
+ */
+void bman_free_pool(struct bman_pool *pool);
+
+/**
+ * bman_get_params - Returns a pool object's parameters.
+ * @pool: the pool object
+ *
+ * The returned pointer refers to state within the pool object so must not be
+ * modified and can no longer be read once the pool object is destroyed.
+ */
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
+
+/**
+ * bman_release - Release buffer(s) to the buffer pool
+ * @pool: the buffer pool object to release to
+ * @bufs: an array of buffers to release
+ * @num: the number of buffers in @bufs (1-8)
+ * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
+ *
+ * Returns zero on success, or a negative error code.
+ */
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
+/**
+ * bman_acquire - Acquire buffer(s) from a buffer pool
+ * @pool: the buffer pool object to acquire from
+ * @bufs: array for storing the acquired buffers
+ * @num: the number of buffers desired (@bufs is at least this big)
+ *
+ * Issues an "Acquire" command via the portal's management command interface.
+ * The return value will be the number of buffers obtained from the pool, or a
+ * negative error code if a h/w error or pool starvation was encountered.
+ */
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
+/**
+ * bman_query_pools - Query all buffer pool states
+ * @state: storage for the queried availability and depletion states
+ */
+int bman_query_pools(struct bm_pool_state *state);
+
+/**
+ * bman_query_free_buffers - Query how many free buffers are in a buffer pool
+ * @pool: the buffer pool object to query
+ *
+ * Returns the number of free buffers.
+ */
+u32 bman_query_free_buffers(struct bman_pool *pool);
+
+/**
+ * bman_update_pool_thresholds - Change the buffer pool's depletion thresholds
+ * @pool: the buffer pool object to which the thresholds will be set
+ * @thresholds: the new thresholds
+ */
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds);
+
+/**
+ * bm_pool_set_hw_threshold - Change the buffer pool's thresholds
+ * @pool: Pool id
+ * @low_thresh: low threshold
+ * @high_thresh: high threshold
+ */
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_BMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index a4897b0..a3243af 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -50,7 +50,9 @@ extern "C" {
 
 /* Thread-entry/exit hooks; */
 int qman_thread_init(void);
+int bman_thread_init(void);
 int qman_thread_finish(void);
+int bman_thread_finish(void);
 
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
@@ -92,9 +94,12 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal);
  * into another blocking read/select/poll.
  */
 void qman_thread_irq(void);
+void bman_thread_irq(void);
 
 /* Global setup */
 int qman_global_init(void);
+int bman_global_init(void);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 13/40] bus/dpaa: add support for FMAN frame queue lookup
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (11 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
                       ` (27 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal
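
On 64-bit systems a 'struct qman_fq' pointer no longer fits in the 32-bit
context_b field of the FQD, so this patch adds a lookup table mapping a
32-bit key to the FQ object; the key, rather than the raw pointer, is
stored in context_b. A minimal sketch of the indirection (simplified from
the diff below):

    /* enqueue side: publish the key, not the pointer */
    eq->tag = cpu_to_be32(fq->key);

    /* dequeue side: map the key back to the FQ object */
    fq = get_fq_table_entry(dq->contextB);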

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/qbman/qman.c        | 99 ++++++++++++++++++++++++++++++-
 drivers/bus/dpaa/base/qbman/qman_driver.c |  7 ++-
 drivers/bus/dpaa/base/qbman/qman_priv.h   | 11 ++++
 drivers/bus/dpaa/include/fsl_qman.h       | 12 ++++
 4 files changed, 126 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 494d54c..837e46c 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -176,6 +176,65 @@ static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
 	return fqtree_find(&p->retire_table, fqid);
 }
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+static void **qman_fq_lookup_table;
+static size_t qman_fq_lookup_table_size;
+
+int qman_setup_fq_lookup_table(size_t num_entries)
+{
+	/* Allocate 1 more entry since the first entry is not used */
+	num_entries++;
+	qman_fq_lookup_table = vmalloc((num_entries * sizeof(void *)));
+	if (!qman_fq_lookup_table) {
+		pr_err("QMan: Could not allocate fq lookup table\n");
+		return -ENOMEM;
+	}
+	memset(qman_fq_lookup_table, 0, num_entries * sizeof(void *));
+	qman_fq_lookup_table_size = num_entries;
+	pr_debug("QMan: Allocated lookup table at %p, entry count %lu\n",
+		qman_fq_lookup_table,
+			(unsigned long)qman_fq_lookup_table_size);
+	return 0;
+}
+
+/* global structure that maintains fq object mapping */
+static DEFINE_SPINLOCK(fq_hash_table_lock);
+
+static int find_empty_fq_table_entry(u32 *entry, struct qman_fq *fq)
+{
+	u32 i;
+
+	spin_lock(&fq_hash_table_lock);
+	/* Can't use index zero because this has special meaning
+	 * in context_b field.
+	 */
+	for (i = 1; i < qman_fq_lookup_table_size; i++) {
+		if (qman_fq_lookup_table[i] == NULL) {
+			*entry = i;
+			qman_fq_lookup_table[i] = fq;
+			spin_unlock(&fq_hash_table_lock);
+			return 0;
+		}
+	}
+	spin_unlock(&fq_hash_table_lock);
+	return -ENOMEM;
+}
+
+static void clear_fq_table_entry(u32 entry)
+{
+	spin_lock(&fq_hash_table_lock);
+	DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
+	qman_fq_lookup_table[entry] = NULL;
+	spin_unlock(&fq_hash_table_lock);
+}
+
+static inline struct qman_fq *get_fq_table_entry(u32 entry)
+{
+	DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
+	return qman_fq_lookup_table[entry];
+}
+#endif
+
 static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
 {
 	/* Byteswap the FQD to HW format */
@@ -766,8 +825,13 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 				break;
 			case QM_MR_VERB_FQPN:
 				/* Parked */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+				fq = get_fq_table_entry(
+					be32_to_cpu(msg->fq.contextB));
+#else
 				fq = (void *)(uintptr_t)
 					be32_to_cpu(msg->fq.contextB);
+#endif
 				fq_state_change(p, fq, msg, verb);
 				if (fq->cb.fqs)
 					fq->cb.fqs(p, fq, &swapped_msg);
@@ -792,7 +856,11 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 			}
 		} else {
 			/* Its a software ERN */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(be32_to_cpu(msg->ern.tag));
+#else
 			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+#endif
 			fq->cb.ern(p, fq, &swapped_msg);
 		}
 		num++;
@@ -907,7 +975,11 @@ static inline unsigned int __poll_portal_fast(struct qman_portal *p,
 				clear_vdqcr(p, fq);
 		} else {
 			/* SDQCR: context_b points to the FQ */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(dq->contextB);
+#else
 			fq = (void *)(uintptr_t)dq->contextB;
+#endif
 			/* Now let the callback do its stuff */
 			res = fq->cb.dqrr(p, fq, dq);
 			/*
@@ -1119,7 +1191,12 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
 	fq->flags = flags;
 	fq->state = qman_fq_state_oos;
 	fq->cgr_groupid = 0;
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	if (unlikely(find_empty_fq_table_entry(&fq->key, fq))) {
+		pr_info("Find empty table entry failed\n");
+		return -ENOMEM;
+	}
+#endif
 	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
 		return 0;
 	/* Everything else is AS_IS support */
@@ -1193,7 +1270,9 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
 	case qman_fq_state_oos:
 		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
 			qman_release_fqid(fq->fqid);
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		clear_fq_table_entry(fq->key);
+#endif
 		return;
 	default:
 		break;
@@ -1258,7 +1337,11 @@ int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
 		dma_addr_t phys_fq;
 
 		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		mcc->initfq.fqd.context_b = fq->key;
+#else
 		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+#endif
 		/*
 		 *  and the physical address - NB, if the user wasn't trying to
 		 * set CONTEXTA, clear the stashing settings.
@@ -1419,7 +1502,11 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
 			msg.verb = QM_MR_VERB_FQRNI;
 			msg.fq.fqs = mcr->alterfq.fqs;
 			msg.fq.fqid = fq->fqid;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			msg.fq.contextB = fq->key;
+#else
 			msg.fq.contextB = (u32)(uintptr_t)fq;
+#endif
 			fq->cb.fqs(p, fq, &msg);
 		}
 	} else if (res == QM_MCR_RESULT_PENDING) {
@@ -1861,7 +1948,11 @@ static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
 					QM_EQCR_DCA_PARK : 0) |
 			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
 	eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	eq->tag = cpu_to_be32(fq->key);
+#else
 	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 	eq->fd = *fd;
 	cpu_to_hw_fd(&eq->fd);
 	return eq;
@@ -1907,7 +1998,11 @@ int qman_enqueue_multi(struct qman_fq *fq,
 	/* try to send as many frames as possible */
 	while (eqcr->available && frames_to_send--) {
 		eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		eq->tag = cpu_to_be32(fq->key);
+#else
 		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 		eq->fd.opaque_addr = fd->opaque_addr;
 		eq->fd.addr = cpu_to_be40(fd->addr);
 		eq->fd.status = cpu_to_be32(fd->status);
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 90fb130..7a68896 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -279,5 +279,10 @@ int qman_global_init(void)
 	else
 		qman_clk = be32_to_cpu(*clk);
 
-	return ret;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	ret = qman_setup_fq_lookup_table(CONFIG_FSL_QMAN_FQ_LOOKUP_MAX);
+	if (ret)
+		return ret;
+#endif
+	return 0;
 }
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index 4a11e40..4b6c13c 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -44,6 +44,10 @@
 #include "dpaa_sys.h"
 #include <fsl_qman.h>
 
+#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
+#error "_ARM64 requires _FSL_QMAN_FQ_LOOKUP"
+#endif
+
 /* Congestion Groups */
 /*
  * This wrapper represents a bit-array for the state of the 256 QMan congestion
@@ -197,6 +201,13 @@ void qm_set_liodns(struct qm_portal_config *pcfg);
 int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
 		       struct qm_mcr_cgrtestwrite *result);
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+/* If the fq object pointer is wider than the context_b field (e.g. on
+ * 64-bit systems), then a lookup table is required.
+ */
+int qman_setup_fq_lookup_table(size_t num_entries);
+#endif
+
 /*   QMan s/w corenet portal, low-level i/face	 */
 
 /*
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 85ae13b..eedfd7e 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -46,6 +46,15 @@ extern "C" {
 
 #include <dpaa_rbtree.h>
 
+/* FQ lookups (turn this on for 64bit user-space) */
+#if (__WORDSIZE == 64)
+#define CONFIG_FSL_QMAN_FQ_LOOKUP
+/* if FQ lookups are supported, this controls the number of initialised,
+ * s/w-consumed FQs that can be supported at any one time.
+ */
+#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
+#endif
+
 /* Last updated for v00.800 of the BG */
 
 /* Hardware constants */
@@ -1228,6 +1237,9 @@ struct qman_fq {
 	enum qman_fq_state state;
 	int cgr_groupid;
 	struct rb_node node;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	u32 key;
+#endif
 };
 
 /*
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 14/40] bus/dpaa: add BMan hardware interfaces
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (12 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 13/40] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
                       ` (26 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal
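
This patch adds the BMan portal-level hardware interfaces: the release
command ring (RCR), the management command (MC) interface, and the
bman_acquire()/bman_release() implementations built on them. The MC
interface is a start/commit/poll handshake, roughly (simplified from
bman_acquire() in the diff below):

    mcc = bm_mc_start(&p->p);
    mcc->acquire.bpid = pool->params.bpid;
    bm_mc_commit(&p->p, BM_MCC_VERB_CMD_ACQUIRE |
                 (num & BM_MCC_VERB_ACQUIRE_BUFCOUNT));
    while (!(mcr = bm_mc_result(&p->p)))
        cpu_relax();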

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman.c        | 394 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman.h        | 550 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_driver.c |  12 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |  16 +
 5 files changed, 973 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 2d626b2..6675e53 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -70,6 +70,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman.c \
 	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman.c b/drivers/bus/dpaa/base/qbman/bman.c
new file mode 100644
index 0000000..be2d970
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.c
@@ -0,0 +1,394 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "bman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define RCR_THRESH	2	/* reread h/w CI when running out of space */
+#define IRQNAME		"BMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "BMan portal %d" */
+
+struct bman_portal {
+	struct bm_portal p;
+	/* 2-element array. pools[0] is mask, pools[1] is snapshot. */
+	struct bman_depletion *pools;
+	int thresh_set;
+	unsigned long irq_sources;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct bm_portal_config *config;
+	char irqname[MAX_IRQNAME];
+};
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static RTE_DEFINE_PER_LCORE(struct bman_portal, bman_affine_portal);
+
+static inline struct bman_portal *get_affine_portal(void)
+{
+	return &RTE_PER_LCORE(bman_affine_portal);
+}
+
+/*
+ * This object type refers to a pool, it isn't *the* pool. There may be
+ * more than one such object per BMan buffer pool, eg. if different users of
+ * the pool are operating via different portals.
+ */
+struct bman_pool {
+	struct bman_pool_params params;
+	/* Used for hash-table admin when using depletion notifications. */
+	struct bman_portal *portal;
+	struct bman_pool *next;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	atomic_t in_use;
+#endif
+};
+
+static inline
+struct bman_portal *bman_create_portal(struct bman_portal *portal,
+				       const struct bm_portal_config *c)
+{
+	struct bm_portal *p;
+	const struct bman_depletion *pools = &c->mask;
+	int ret;
+	u8 bpid = 0;
+
+	p = &portal->p;
+	/*
+	 * Prep the low-level portal struct with the mapped addresses from the
+	 * config; everything that follows depends on it, and "config" is kept
+	 * mainly for reference.
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	if (bm_rcr_init(p, bm_rcr_pvb, bm_rcr_cce)) {
+		pr_err("Bman RCR initialisation failed\n");
+		return NULL;
+	}
+	if (bm_mc_init(p)) {
+		pr_err("Bman MC initialisation failed\n");
+		goto fail_mc;
+	}
+	portal->pools = kmalloc(2 * sizeof(*pools), GFP_KERNEL);
+	if (!portal->pools)
+		goto fail_pools;
+	portal->pools[0] = *pools;
+	bman_depletion_init(portal->pools + 1);
+	while (bpid < bman_pool_max) {
+		/*
+		 * Default to all BPIDs disabled, we enable as required at
+		 * run-time.
+		 */
+		bm_isr_bscn_mask(p, bpid, 0);
+		bpid++;
+	}
+	portal->slowpoll = 0;
+	/* Write-to-clear any stale interrupt status bits */
+	bm_isr_disable_write(p, 0xffffffff);
+	portal->irq_sources = 0;
+	bm_isr_enable_write(p, portal->irq_sources);
+	bm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, NULL, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need RCR to be empty before continuing */
+	ret = bm_rcr_get_fill(p);
+	if (ret) {
+		pr_err("Bman RCR unclean\n");
+		goto fail_rcr_empty;
+	}
+	/* Success */
+	portal->config = c;
+
+	bm_isr_disable_write(p, 0);
+	bm_isr_uninhibit(p);
+	return portal;
+fail_rcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->pools);
+fail_pools:
+	bm_mc_finish(p);
+fail_mc:
+	bm_rcr_finish(p);
+	return NULL;
+}
+
+struct bman_portal *
+bman_create_affine_portal(const struct bm_portal_config *c)
+{
+	struct bman_portal *portal = get_affine_portal();
+
+	/* This function is called from a context that is already affine to a
+	 * CPU; in other words, it is non-migratable to other CPUs.
+	 */
+	portal = bman_create_portal(portal, c);
+	if (portal) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		spin_unlock(&affine_mask_lock);
+	}
+	return portal;
+}
+
+static inline
+void bman_destroy_portal(struct bman_portal *bm)
+{
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bm_rcr_cce_update(&bm->p);
+	bm_rcr_cce_update(&bm->p);
+
+	free_irq(pcfg->irq, bm);
+
+	kfree(bm->pools);
+	bm_mc_finish(&bm->p);
+	bm_rcr_finish(&bm->p);
+	bm->config = NULL;
+}
+
+const struct
+bm_portal_config *bman_destroy_affine_portal(void)
+{
+	struct bman_portal *bm = get_affine_portal();
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bman_destroy_portal(bm);
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(pcfg->cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int
+bman_get_portal_index(void)
+{
+	struct bman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+static const u32 zero_thresholds[4] = {0, 0, 0, 0};
+
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params)
+{
+	struct bman_pool *pool = NULL;
+	u32 bpid;
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID) {
+		int ret = bman_alloc_bpid(&bpid);
+
+		if (ret)
+			return NULL;
+	} else {
+		if (params->bpid >= bman_pool_max)
+			return NULL;
+		bpid = params->bpid;
+	}
+	if (params->flags & BMAN_POOL_FLAG_THRESH) {
+		int ret = bm_pool_set(bpid, params->thresholds);
+
+		if (ret)
+			goto err;
+	}
+
+	pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		goto err;
+	pool->params = *params;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	atomic_set(&pool->in_use, 1);
+#endif
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		pool->params.bpid = bpid;
+
+	return pool;
+err:
+	if (params->flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(bpid, zero_thresholds);
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(bpid);
+	kfree(pool);
+
+	return NULL;
+}
+
+void bman_free_pool(struct bman_pool *pool)
+{
+	if (pool->params.flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(pool->params.bpid, zero_thresholds);
+	if (pool->params.flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(pool->params.bpid);
+	kfree(pool);
+}
+
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool)
+{
+	return &pool->params;
+}
+
+static void update_rcr_ci(struct bman_portal *p, int avail)
+{
+	if (avail)
+		bm_rcr_cce_prefetch(&p->p);
+	else
+		bm_rcr_cce_update(&p->p);
+}
+
+#define BMAN_BUF_MASK 0x0000fffffffffffful
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p;
+	struct bm_rcr_entry *r;
+	u32 i = num - 1;
+	u8 avail;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_NO_RELEASE)
+		return -EINVAL;
+#endif
+
+	p = get_affine_portal();
+	avail = bm_rcr_get_avail(&p->p);
+	if (avail < 2)
+		update_rcr_ci(p, avail);
+	r = bm_rcr_start(&p->p);
+	if (unlikely(!r))
+		return -EBUSY;
+
+	/*
+	 * We must compose the first entry ourselves (it carries the bpid in
+	 * its upper bits); copying it verbatim could trigger badness with the
+	 * valid-bit. The remaining entries can be copied as-is.
+	 */
+	r->bufs[0].opaque =
+		cpu_to_be64(((u64)pool->params.bpid << 48) |
+			    (bufs[0].opaque & BMAN_BUF_MASK));
+	if (i) {
+		for (i = 1; i < num; i++)
+			r->bufs[i].opaque =
+				cpu_to_be64(bufs[i].opaque & BMAN_BUF_MASK);
+	}
+
+	bm_rcr_pvb_commit(&p->p, BM_RCR_VERB_CMD_BPID_SINGLE |
+			  (num & BM_RCR_VERB_BUFCOUNT_MASK));
+
+	return 0;
+}
+
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_command *mcc;
+	struct bm_mc_result *mcr;
+	int ret, i;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_ONLY_RELEASE)
+		return -EINVAL;
+#endif
+
+	mcc = bm_mc_start(&p->p);
+	mcc->acquire.bpid = pool->params.bpid;
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_ACQUIRE |
+			(num & BM_MCC_VERB_ACQUIRE_BUFCOUNT));
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	ret = mcr->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT;
+	if (bufs) {
+		for (i = 0; i < num; i++)
+			bufs[i].opaque =
+				be64_to_cpu(mcr->acquire.bufs[i].opaque);
+	}
+	if (ret != num)
+		ret = -ENOMEM;
+	return ret;
+}
+
+int bman_query_pools(struct bm_pool_state *state)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_result *mcr;
+
+	bm_mc_start(&p->p);
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_QUERY);
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & BM_MCR_VERB_CMD_MASK) ==
+		    BM_MCR_VERB_CMD_QUERY);
+	*state = mcr->query;
+	state->as.state.state[0] = be32_to_cpu(state->as.state.state[0]);
+	state->as.state.state[1] = be32_to_cpu(state->as.state.state[1]);
+	state->ds.state.state[0] = be32_to_cpu(state->ds.state.state[0]);
+	state->ds.state.state[1] = be32_to_cpu(state->ds.state.state[1]);
+	return 0;
+}
+
+u32 bman_query_free_buffers(struct bman_pool *pool)
+{
+	return bm_pool_free_buffers(pool->params.bpid);
+}
+
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds)
+{
+	u32 bpid;
+
+	bpid = bman_get_params(pool)->bpid;
+
+	return bm_pool_set(bpid, thresholds);
+}
+
+int bman_shutdown_pool(u32 bpid)
+{
+	struct bman_portal *p = get_affine_portal();
+	return bm_shutdown_pool(&p->p, bpid);
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman.h b/drivers/bus/dpaa/base/qbman/bman.h
new file mode 100644
index 0000000..9c66797
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.h
@@ -0,0 +1,550 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_H
+#define __BMAN_H
+
+#include "bman_priv.h"
+
+/* Cache-inhibited register offsets */
+#define BM_REG_RCR_PI_CINH	0x3000
+#define BM_REG_RCR_CI_CINH	0x3100
+#define BM_REG_RCR_ITR		0x3200
+#define BM_REG_CFG		0x3300
+#define BM_REG_SCN(n)		(0x3400 + ((n) << 6))
+#define BM_REG_ISR		0x3e00
+#define BM_REG_IIR              0x3ec0
+
+/* Cache-enabled register offsets */
+#define BM_CL_CR		0x0000
+#define BM_CL_RR0		0x0100
+#define BM_CL_RR1		0x0140
+#define BM_CL_RCR		0x1000
+#define BM_CL_RCR_PI_CENA	0x3000
+#define BM_CL_RCR_CI_CENA	0x3100
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrade performance. Hence the use of the __raw_*() interfaces, which
+ * simply ensure that the compiler treats the portal registers as volatile (ie.
+ * non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __bm_in(bm, o)		be32_to_cpu(__raw_readl((bm)->ci + (o)))
+#define __bm_out(bm, o, val)    __raw_writel(cpu_to_be32(val), \
+					     (bm)->ci + (o))
+#define bm_in(reg)		__bm_in(&portal->addr, BM_REG_##reg)
+#define bm_out(reg, val)	__bm_out(&portal->addr, BM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __bm_cl_touch_ro(bm, o) dcbt_ro((bm)->ce + (o))
+#define __bm_cl_touch_rw(bm, o) dcbt_rw((bm)->ce + (o))
+#define __bm_cl_in(bm, o)	be32_to_cpu(__raw_readl((bm)->ce + (o)))
+#define __bm_cl_out(bm, o, val) \
+	do { \
+		u32 *__tmpclout = (bm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __bm_cl_invalidate(bm, o) dccivac((bm)->ce + (o))
+#define bm_cl_touch_ro(reg) __bm_cl_touch_ro(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_touch_rw(reg) __bm_cl_touch_rw(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_in(reg)	    __bm_cl_in(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_out(reg, val) __bm_cl_out(&portal->addr, BM_CL_##reg##_CENA, val)
+#define bm_cl_invalidate(reg)\
+	__bm_cl_invalidate(&portal->addr, BM_CL_##reg##_CENA)
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 bm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
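+
+/*
+ * Worked example (illustrative): for a ring of 8 entries with consumer
+ * index 6 and producer index 2, bm_cyc_diff(8, 6, 2) = 8 + 2 - 6 = 4
+ * entries in flight.
+ */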
+
+/* Portal modes.
+ *   Enum types:
+ *     pmode == production mode
+ *     cmode == consumption mode
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate:
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ */
+enum bm_rcr_pmode {		/* matches BCSP_CFG::RPM */
+	bm_rcr_pci = 0,		/* PI index, cache-inhibited */
+	bm_rcr_pce = 1,		/* PI index, cache-enabled */
+	bm_rcr_pvb = 2		/* valid-bit */
+};
+
+enum bm_rcr_cmode {		/* s/w-only */
+	bm_rcr_cci,		/* CI index, cache-inhibited */
+	bm_rcr_cce		/* CI index, cache-enabled */
+};
+
+/* --- Portal structures --- */
+
+#define BM_RCR_SIZE		8
+
+struct bm_rcr {
+	struct bm_rcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	u32 busy;
+	enum bm_rcr_pmode pmode;
+	enum bm_rcr_cmode cmode;
+#endif
+};
+
+struct bm_mc {
+	struct bm_mc_command *cr;
+	struct bm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum {
+		/* Can only be _mc_start()ed */
+		mc_idle,
+		/* Can only be _mc_commit()ed or _mc_abort()ed */
+		mc_user,
+		/* Can only be _mc_retry()ed */
+		mc_hw
+	} state;
+#endif
+};
+
+struct bm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct bm_portal {
+	struct bm_addr addr;
+	struct bm_rcr rcr;
+	struct bm_mc mc;
+	struct bm_portal_config config;
+} ____cacheline_aligned;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define RCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(BM_RCR_SIZE << 6)))
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 RCR_PTR2IDX(struct bm_rcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (BM_RCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void RCR_INC(struct bm_rcr *rcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates
+	 * fast code with essentially no branching overheads. We increment to
+	 * the next RCR pointer and handle overflow and 'vbit'.
+	 */
+	struct bm_rcr_entry *partial = rcr->cursor + 1;
+
+	rcr->cursor = RCR_CARRYCLEAR(partial);
+	if (partial != rcr->cursor)
+		rcr->vbit ^= BM_RCR_VERB_VBIT;
+}
+
+static inline int bm_rcr_init(struct bm_portal *portal, enum bm_rcr_pmode pmode,
+			      __maybe_unused enum bm_rcr_cmode cmode)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct bm_rcr *rcr = &portal->rcr;
+	u32 cfg;
+	u8 pi;
+
+	rcr->ring = portal->addr.ce + BM_CL_RCR;
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	rcr->cursor = rcr->ring + pi;
+	rcr->vbit = (bm_in(RCR_PI_CINH) & BM_RCR_SIZE) ?  BM_RCR_VERB_VBIT : 0;
+	rcr->available = BM_RCR_SIZE - 1
+		- bm_cyc_diff(BM_RCR_SIZE, rcr->ci, pi);
+	rcr->ithresh = bm_in(RCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+	rcr->pmode = pmode;
+	rcr->cmode = cmode;
+#endif
+	cfg = (bm_in(CFG) & 0xffffffe0) | (pmode & 0x3); /* BCSP_CFG::RPM */
+	bm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void bm_rcr_finish(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	u8 ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	DPAA_ASSERT(!rcr->busy);
+	if (pi != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("losing uncommitted RCR entries\n");
+	if (ci != rcr->ci)
+		pr_crit("missing existing RCR completions\n");
+	if (rcr->ci != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("RCR destroyed unquiesced\n");
+}
+
+static inline struct bm_rcr_entry *bm_rcr_start(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(!rcr->busy);
+	if (!rcr->available)
+		return NULL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 1;
+#endif
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline struct bm_rcr_entry *bm_rcr_pend_and_next(
+					struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode != bm_rcr_pvb);
+	if (rcr->available == 1)
+		return NULL;
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcr->cursor);
+	RCR_INC(rcr);
+	rcr->available--;
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_pci_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pci);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	hwsync();
+	bm_out(RCR_PI_CINH, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	bm_cl_invalidate(RCR_PI);
+	bm_cl_touch_rw(RCR_PI);
+}
+
+static inline void bm_rcr_pce_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	lwsync();
+	bm_cl_out(RCR_PI, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pvb_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	struct bm_rcr_entry *rcursor;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pvb);
+	lwsync();
+	rcursor = rcr->cursor;
+	rcursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcursor);
+	RCR_INC(rcr);
+	rcr->available--;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline u8 bm_rcr_cci_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cci);
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline void bm_rcr_cce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	bm_cl_touch_ro(RCR_CI);
+}
+
+static inline u8 bm_rcr_cce_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	rcr->ci = bm_cl_in(RCR_CI) & (BM_RCR_SIZE - 1);
+	bm_cl_invalidate(RCR_CI);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline u8 bm_rcr_get_ithresh(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->ithresh;
+}
+
+static inline void bm_rcr_set_ithresh(struct bm_portal *portal, u8 ithresh)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	rcr->ithresh = ithresh;
+	bm_out(RCR_ITR, ithresh);
+}
+
+static inline u8 bm_rcr_get_avail(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->available;
+}
+
+static inline u8 bm_rcr_get_fill(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return BM_RCR_SIZE - 1 - rcr->available;
+}
+
+/* --- Management command API --- */
+
+static inline int bm_mc_init(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + BM_CL_CR;
+	mc->rr = portal->addr.ce + BM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			BM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? BM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+	return 0;
+}
+
+static inline void bm_mc_finish(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (mc->state != mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct bm_mc_command *bm_mc_start(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void bm_mc_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_user);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+}
+
+static inline void bm_mc_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_hw;
+#endif
+}
+
+static inline struct bm_mc_result *bm_mc_result(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= BM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+	return rr;
+}
+
+#define SCN_REG(bpid) BM_REG_SCN((bpid) / 32)
+#define SCN_BIT(bpid) (0x80000000 >> (bpid & 31))
+static inline void bm_isr_bscn_mask(struct bm_portal *portal, u8 bpid,
+				    int enable)
+{
+	u32 val;
+
+	DPAA_ASSERT(bpid < bman_pool_max);
+	/* REG_SCN for bpid=0..31, REG_SCN+4 for bpid=32..63 */
+	val = __bm_in(&portal->addr, SCN_REG(bpid));
+	if (enable)
+		val |= SCN_BIT(bpid);
+	else
+		val &= ~SCN_BIT(bpid);
+	__bm_out(&portal->addr, SCN_REG(bpid), val);
+}
+
+static inline u32 __bm_isr_read(struct bm_portal *portal, enum bm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 6));
+#else
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __bm_isr_write(struct bm_portal *portal, enum bm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 6), val);
+#else
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 2), val);
+#endif
+}
+
+/* Buffer Pool Cleanup */
+static inline int bm_shutdown_pool(struct bm_portal *p, u32 bpid)
+{
+	struct bm_mc_command *bm_cmd;
+	struct bm_mc_result *bm_res;
+
+	int aq_count = 0;
+	bool stop = false;
+
+	while (!stop) {
+		/* Acquire buffers until empty */
+		bm_cmd = bm_mc_start(p);
+		bm_cmd->acquire.bpid = bpid;
+		bm_mc_commit(p, BM_MCC_VERB_CMD_ACQUIRE | 1);
+		while (!(bm_res = bm_mc_result(p)))
+			cpu_relax();
+		if (!(bm_res->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT)) {
+			/* Pool is empty */
+			stop = true;
+		} else
+			++aq_count;
+	}
+	return 0;
+}
+
+#endif /* __BMAN_H */
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
index fb3c50e..5c13a80 100644
--- a/drivers/bus/dpaa/base/qbman/bman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -65,6 +65,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct bman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -111,6 +112,14 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 	/* Use the IRQ FD as a unique IRQ number */
 	pcfg.irq = fd;
 
+	portal = bman_create_affine_portal(&pcfg);
+	if (!portal) {
+		pr_err("Bman portal initialisation failed (%d)",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	/* Set the IRQ number */
 	irq_map.type = dpaa_portal_bman;
 	irq_map.portal_cinh = map.addr.cinh;
@@ -120,10 +129,13 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 
 static int fsl_bman_portal_finish(void)
 {
+	__maybe_unused const struct bm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = bman_destroy_affine_portal();
+	DPAA_BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
index 690576a..35dba7f 100644
--- a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -41,6 +41,22 @@
 #include "dpaa_sys.h"
 #include <process.h>
 #include <fsl_qman.h>
+#include <fsl_bman.h>
+
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_bpid, result, count, align, partial);
+}
+
+void bman_release_bpid_range(u32 bpid, u32 count)
+{
+	process_release(dpaa_id_bpid, bpid, count);
+}
+
+int bman_reserve_bpid_range(u32 bpid, u32 count)
+{
+	return process_reserve(dpaa_id_bpid, bpid, count);
+}
 
 int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
 {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 15/40] bus/dpaa: add fman flow control threshold setting
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (13 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
                       ` (25 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman_hw.c | 28 ++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_fman.h  |  7 +++++++
 2 files changed, 35 insertions(+)

diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 77908ec..7618fc1 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -37,6 +37,7 @@
  */
 #include <fsl_fman.h>
 #include <fsl_fman_crc64.h>
+#include <fsl_bman.h>
 
 /* Instantiate the global variable that the inline CRC64 implementation (in
  * <fsl_fman.h>) depends on.
@@ -437,6 +438,33 @@ fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
 }
 
 int
+fman_if_get_fc_threshold(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	return in_be32(fmbm_mpd);
+}
+
+int
+fman_if_set_fc_threshold(struct fman_if *fm_if, u32 high_water,
+			 u32 low_water, u32 bpid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	out_be32(fmbm_mpd, FMAN_ENABLE_BPOOL_DEPLETION);
+	return bm_pool_set_hw_threshold(bpid, low_water, high_water);
+}
+
+int
 fman_if_get_fc_quanta(struct fman_if *fm_if)
 {
 	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 0aff22c..b94bc56 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -120,6 +120,13 @@ void fman_if_loopback_disable(struct fman_if *);
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
+/* Get Flow Control threshold parameters on specific interface */
+int fman_if_get_fc_threshold(struct fman_if *fm_if);
+
+/* Enable and Set Flow Control threshold parameters on specific interface */
+int fman_if_set_fc_threshold(struct fman_if *fm_if,
+			u32 high_water, u32 low_water, u32 bpid);
+
 /* Get Flow Control pause quanta on specific interface */
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (14 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
                       ` (24 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Now that QBMAN (QMAN, BMAN) and FMAN drivers are available, this patch
integrates the DPAA Bus driver for using the drivers for scanning
devices and calling the PMD registered probe callbacks.
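
For reference, below is a minimal sketch of the driver-side hookup
(illustrative only, not part of this patch; "my_probe"/"my_drv" are
placeholder names, and the fields follow the rte_dpaa_driver declaration
in rte_dpaa_bus.h):

	/* Register a PMD with the DPAA bus so that the scan/probe flow
	 * added here reaches its probe callback. All names below are
	 * placeholders.
	 */
	static int
	my_probe(struct rte_dpaa_driver *drv __rte_unused,
		 struct rte_dpaa_device *dev __rte_unused)
	{
		/* dev->name is "fm<N>-mac<M>", composed during bus scan */
		return 0;
	}

	static struct rte_dpaa_driver my_drv = {
		.drv_type = FSL_DPAA_ETH,
		.probe = my_probe,
	};

	static void __attribute__((constructor))
	my_drv_init(void)
	{
		rte_dpaa_driver_register(&my_drv);
	}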

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c               | 248 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  39 +++++
 drivers/bus/dpaa/rte_dpaa_bus.h           |   6 +
 3 files changed, 293 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index cc343b3..8017df3 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -63,9 +63,21 @@
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 int dpaa_logtype_bus;
 
 struct rte_dpaa_bus rte_dpaa_bus;
+struct netcfg_info *dpaa_netcfg;
+
+/* define a variable to hold the portal_key, once created.*/
+pthread_key_t dpaa_portal_key;
+
+RTE_DEFINE_PER_LCORE(bool, _dpaa_io);
 
 static inline void
 dpaa_add_to_device_list(struct rte_dpaa_device *dev)
@@ -79,11 +91,247 @@ dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
 	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
 }
 
+static void dpaa_clean_device_list(void);
+
+static int
+dpaa_create_device_list(void)
+{
+	int i;
+	int ret;
+	struct rte_dpaa_device *dev;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+
+	/* Creating Ethernet Devices */
+	for (i = 0; i < dpaa_netcfg->num_ethports; i++) {
+		dev = calloc(1, sizeof(struct rte_dpaa_device));
+		if (!dev) {
+			DPAA_BUS_LOG(ERR, "Failed to allocate ETH devices");
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+
+		cfg = &dpaa_netcfg->port_cfg[i];
+		fman_intf = cfg->fman_if;
+
+		/* Device identifiers */
+		dev->id.fman_id = fman_intf->fman_idx + 1;
+		dev->id.mac_id = fman_intf->mac_idx;
+		dev->device_type = FSL_DPAA_ETH;
+		dev->id.dev_id = i;
+
+		/* Create device name */
+		memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		sprintf(dev->name, "fm%d-mac%d", (fman_intf->fman_idx + 1),
+			fman_intf->mac_idx);
+		DPAA_BUS_LOG(DEBUG, "Device added: %s", dev->name);
+		dev->device.name = dev->name;
+
+		dpaa_add_to_device_list(dev);
+	}
+
+	rte_dpaa_bus.device_count = i;
+
+	return 0;
+
+cleanup:
+	dpaa_clean_device_list();
+	return ret;
+}
+
+static void
+dpaa_clean_device_list(void)
+{
+	struct rte_dpaa_device *dev = NULL;
+	struct rte_dpaa_device *tdev = NULL;
+
+	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+		free(dev);
+		dev = NULL;
+	}
+}
+
+/** XXX move this function into a separate file */
+static int
+_dpaa_portal_init(void *arg)
+{
+	cpu_set_t cpuset;
+	pthread_t id;
+	uint32_t cpu = rte_lcore_id();
+	int ret;
+	struct dpaa_portal *dpaa_io_portal;
+
+	BUS_INIT_FUNC_TRACE();
+
+	if ((uint64_t)arg == 1 || cpu == LCORE_ID_ANY)
+		cpu = rte_get_master_lcore();
+	/* the core id is not supported */
+	else if (cpu >= RTE_MAX_LCORE)
+		return -1;
+
+	/* Set CPU affinity for this thread */
+	CPU_ZERO(&cpuset);
+	CPU_SET(cpu, &cpuset);
+	id = pthread_self();
+	ret = pthread_setaffinity_np(id, sizeof(cpu_set_t), &cpuset);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "pthread_setaffinity_np failed on "
+			"core :%d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	/* Initialise bman thread portals */
+	ret = bman_thread_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "bman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "BMAN thread initialized");
+
+	/* Initialise qman thread portals */
+	ret = qman_thread_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "bman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		bman_thread_finish();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
+
+	dpaa_io_portal = rte_malloc(NULL, sizeof(struct dpaa_portal),
+				    RTE_CACHE_LINE_SIZE);
+	if (!dpaa_io_portal) {
+		DPAA_BUS_LOG(ERR, "Unable to allocate memory");
+		bman_thread_finish();
+		qman_thread_finish();
+		return -ENOMEM;
+	}
+
+	dpaa_io_portal->qman_idx = qman_get_portal_index();
+	dpaa_io_portal->bman_idx = bman_get_portal_index();
+	dpaa_io_portal->tid = syscall(SYS_gettid);
+
+	ret = pthread_setspecific(dpaa_portal_key, (void *)dpaa_io_portal);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "pthread_setspecific failed on "
+			    "core %d with ret: %d", cpu, ret);
+		dpaa_portal_finish(NULL);
+
+		return ret;
+	}
+
+	RTE_PER_LCORE(_dpaa_io) = true;
+
+	DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
+
+	return 0;
+}
+
+/*
+ * rte_dpaa_portal_init - Wrapper over _dpaa_portal_init with thread level check
+ * XXX Complete this
+ */
+int
+rte_dpaa_portal_init(void *arg)
+{
+	if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
+		return _dpaa_portal_init(arg);
+
+	return 0;
+}
+
+void
+dpaa_portal_finish(void *arg)
+{
+	struct dpaa_portal *dpaa_io_portal = (struct dpaa_portal *)arg;
+
+	if (!dpaa_io_portal) {
+		DPAA_BUS_LOG(DEBUG, "Portal already cleaned");
+		return;
+	}
+
+	bman_thread_finish();
+	qman_thread_finish();
+
+	pthread_setspecific(dpaa_portal_key, NULL);
+
+	rte_free(dpaa_io_portal);
+	dpaa_io_portal = NULL;
+
+	RTE_PER_LCORE(_dpaa_io) = false;
+}
+
+#define DPAA_DEV_PATH1 "/sys/devices/platform/soc/soc:fsl,dpaa"
+#define DPAA_DEV_PATH2 "/sys/devices/platform/fsl,dpaa"
+
 static int
 rte_dpaa_bus_scan(void)
 {
+	int ret;
+
 	BUS_INIT_FUNC_TRACE();
 
+	if ((access(DPAA_DEV_PATH1, F_OK) != 0) &&
+	    (access(DPAA_DEV_PATH2, F_OK) != 0)) {
+		RTE_LOG(DEBUG, EAL, "DPAA Bus not present. Skipping.\n");
+		return 0;
+	}
+
+	/* Load the device-tree driver */
+	ret = of_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "of_init failed with ret: %d", ret);
+		return -1;
+	}
+
+	/* Get the interface configurations from device-tree */
+	dpaa_netcfg = netcfg_acquire();
+	if (!dpaa_netcfg) {
+		DPAA_BUS_LOG(ERR, "netcfg_acquire failed");
+		return -EINVAL;
+	}
+
+	RTE_LOG(NOTICE, EAL, "DPAA Bus Detected\n");
+
+	if (!dpaa_netcfg->num_ethports) {
+		DPAA_BUS_LOG(INFO, "no network interfaces available");
+		/* This is not an error */
+		return 0;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "Bus: Address of netcfg=%p, Ethports=%d",
+		     dpaa_netcfg, dpaa_netcfg->num_ethports);
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dump_netcfg(dpaa_netcfg);
+#endif
+
+	DPAA_BUS_LOG(DEBUG, "Number of devices = %d\n",
+		     dpaa_netcfg->num_ethports);
+	ret = dpaa_create_device_list();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to create device list. (%d)", ret);
+		return ret;
+	}
+
+	/* Create the pthread key; its destructor is invoked whenever a
+	 * portal-affined thread exits.
+	 */
+	ret = pthread_key_create(&dpaa_portal_key, dpaa_portal_finish);
+	if (ret) {
+		DPAA_BUS_LOG(DEBUG, "Unable to create pthread key. (%d)", ret);
+		dpaa_clean_device_list();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "dpaa_portal_key=%u, ret=%d\n",
+		    (unsigned int)dpaa_portal_key, ret);
+
 	return 0;
 }
 
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index d97a009..263c08c 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,7 +1,46 @@
 DPDK_17.11 {
 	global:
 
+	bman_acquire;
+	bman_free_pool;
+	bman_get_params;
+	bman_new_pool;
+	bman_release;
+	dpaa_netcfg;
+	fman_ccsr_map_fd;
+	fman_dealloc_bufs_mask_hi;
+	fman_dealloc_bufs_mask_lo;
+	fman_if_disable_rx;
+	fman_if_discard_rx_errors;
+	fman_if_enable_rx;
+	fman_if_get_fc_quanta;
+	fman_if_get_fc_threshold;
+	fman_if_promiscuous_disable;
+	fman_if_promiscuous_enable;
+	fman_if_reset_mcast_filter_table;
+	fman_if_set_bp;
+	fman_if_set_fc_quanta;
+	fman_if_set_fc_threshold;
+	fman_if_set_fdoff;
+	fman_if_set_ic_params;
+	fman_if_set_maxfrm;
+	fman_if_set_mcast_filter_table;
+	fman_if_stats_get;
+	fman_if_stats_reset;
+	fm_mac_add_exact_match_mac_addr;
+	fm_mac_rem_exact_match_mac_addr;
+	netcfg_acquire;
+	netcfg_release;
+	qman_create_fq;
+	qman_dequeue;
+	qman_dqrr_consume;
+	qman_enqueue_multi;
+	qman_init_fq;
+	qman_reserve_fqid_range;
+	qman_set_vdq;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
+	rte_dpaa_mem_ptov;
+	rte_dpaa_portal_init;
 
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 8a1e192..ed6f77c 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -35,6 +35,12 @@
 #include <rte_bus.h>
 #include <rte_mempool.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 #define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
 
 #define DEV_TO_DPAA_DEVICE(ptr)	\
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 17/40] doc: add NXP DPAA PMD documentation
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (15 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
                       ` (23 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                       |   2 +
 doc/guides/nics/dpaa.rst          | 374 ++++++++++++++++++++++++++++++++++++++
 doc/guides/nics/features/dpaa.ini |   8 +
 doc/guides/nics/index.rst         |   1 +
 4 files changed, 385 insertions(+)
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index 6ee20ce..10646a4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -409,6 +409,8 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: doc/guides/nics/dpaa.rst
+F: doc/guides/nics/features/dpaa.ini
 
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
new file mode 100644
index 0000000..404efcb
--- /dev/null
+++ b/doc/guides/nics/dpaa.rst
@@ -0,0 +1,374 @@
+..  BSD LICENSE
+    Copyright 2017 NXP.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+DPAA Poll Mode Driver
+=====================
+
+The DPAA NIC PMD (**librte_pmd_dpaa**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP DPAA** SoC family.
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+NXP DPAA (Data Path Acceleration Architecture - Gen 1)
+------------------------------------------------------
+
+This section provides an overview of the NXP DPAA architecture
+and how it is integrated into the DPDK.
+
+Contents summary
+
+- DPAA overview
+- DPAA driver architecture overview
+
+.. _dpaa_overview:
+
+DPAA Overview
+~~~~~~~~~~~~~
+
+Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.
+
+The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
+components on specific QorIQ series multicore processors. This architecture
+provides the infrastructure to support simplified sharing of networking
+interfaces and accelerators by multiple CPU cores, and the accelerators
+themselves.
+
+DPAA includes:
+
+- Cores
+- Network and packet I/O
+- Hardware offload accelerators
+- Infrastructure required to facilitate flow of packets between the components above
+
+Infrastructure components are:
+
+- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
+  It allows CPUs and other accelerators connected to the SoC datapath to
+  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
+  data exchange among CPUs and datapath accelerators.
+- The Buffer Manager (BMan) is a hardware buffer pool management block that
+  allows software and accelerators on the datapath to acquire and release
+  buffers in order to build frames.
+
+Hardware accelerators are:
+
+- SEC - Cryptographic accelerator
+- PME - Pattern matching engine
+
+The Network and packet I/O component:
+
+- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
+  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
+  distribution and policing. Each frame can be parsed and classified, and the
+  results may be attached to the frame. This metadata can be used to select
+  the particular QMan queue to which the packet is forwarded.
+
+
+DPAA DPDK - Poll Mode Driver Overview
+-------------------------------------
+
+This section provides an overview of the drivers for DPAA:
+
+* Bus driver and associated "DPAA infrastructure" drivers
+* Functional object drivers (such as Ethernet).
+
+Brief description of each driver is provided in layout below as well as
+in the following sections.
+
+.. code-block:: console
+
+                                       +------------+
+                                       | DPDK DPAA  |
+                                       |    PMD     |
+                                       +-----+------+
+                                             |
+                                       +-----+------+       +---------------+
+                                       :  Ethernet  :.......| DPDK DPAA     |
+                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
+                   .                   +---+---+----+       |  (BMAN)       |
+                  .                        ^   |            +-----+---------+
+                 .                         |   |<enqueue,         .
+                .                          |   | dequeue>         .
+               .                           |   |                  .
+              .                        +---+---V----+             .
+             .      . . . . . . . . . .: Portal drv :             .
+            .      .                   :            :             .
+           .      .                    +-----+------+             .
+          .      .                     :   QMAN     :             .
+         .      .                      :  Driver    :             .
+    +----+------+-------+              +-----+------+             .
+    |   DPDK DPAA Bus   |                    |                    .
+    |   driver          |....................|.....................
+    |   /bus/dpaa       |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|========================
+                                            PHY
+    =========================================|========================
+
+In the above representation, solid lines represent components which interface
+with DPDK RTE Framework and dotted lines represent DPAA internal components.
+
+DPAA Bus driver
+~~~~~~~~~~~~~~~
+
+The DPAA bus driver is a ``rte_bus`` driver which scans the platform bus for DPAA devices.
+Key functions include:
+
+- Scanning and parsing the various objects and adding them to their respective
+  device lists.
+- Probing the available drivers against each scanned device.
+- Creating the necessary Ethernet device instances before passing control to
+  the PMD.
+
+DPAA NIC Driver (PMD)
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
+between the RTE framework and the DPAA internal components/drivers.
+
+- Once devices have been identified by DPAA Bus, each device is associated
+  with the PMD
+- PMD is responsible for implementing necessary glue layer between RTE APIs
+  and lower level QMan and FMan blocks.
+  The Ethernet driver is bound to a FMAN port and implements the interfaces
+  needed to connect the DPAA network interface to the network stack.
+  Each FMAN Port corresponds to a DPDK network interface.
+
+
+Features
+^^^^^^^^
+
+  Features of the DPAA PMD are:
+
+  - Multiple queues for TX and RX
+  - Receive Side Scaling (RSS)
+  - Packet type information
+  - Checksum offload
+  - Promiscuous mode
+
+DPAA Mempool Driver
+~~~~~~~~~~~~~~~~~~~
+
+DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
+Manager.
+
+- Using standard Mempools operations RTE API, the mempool driver interfaces
+  with RTE to service each mempool creation, deletion, buffer allocation and
+  deallocation requests.
+- Each FMAN instance has a BMan pool attached to it during initialization.
+  Each Tx frame can be automatically released by hardware, if allocated from
+  this pool.
+
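+For example, with ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` set to ``dpaa``
+(see below), a standard mbuf pool created by the application is transparently
+backed by a BMan pool. A minimal sketch (pool name and sizes are
+illustrative):
+
+.. code-block:: c
+
+   struct rte_mempool *mp;
+
+   /* 8191 mbufs of default size, with a per-lcore cache of 256 */
+   mp = rte_pktmbuf_pool_create("mbuf_pool", 8191, 256, 0,
+                                RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());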
+
+Supported DPAA SoCs
+-------------------
+
+- LS1043A/LS1023A
+- LS1046A/LS1026A
+
+Prerequisites
+-------------
+
+There are four main prerequisites for executing the DPAA PMD on a DPAA
+compatible board:
+
+1. **ARM 64 Tool Chain**
+
+   For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
+
+2. **Linux Kernel**
+
+   It can be obtained from `NXP's Github hosting <https://github.com/qoriq-open-source/linux>`_.
+
+3. **Root file system**
+
+   Any *aarch64* supporting filesystem can be used. For example,
+   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
+   from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+4. **FMC Tool**
+
+   Before any DPDK application can be executed, the Frame Manager Configuration
+   Tool (FMC) needs to be executed to set the configuration of the queues. This
+   includes the queue state, RSS and other policies.
+   This tool can be obtained from `NXP (Freescale) Public Git Repository <http://git.freescale.com/git/cgit.cgi/ppc/sdk/fmc.git>`_.
+   This tool needs configuration files which are available in the
+   :ref:`DPDK Extra Scripts <extra_scripts>`, described below.
+
+As an alternative method, DPAA PMD can also be executed using images provided
+as part of SDK from NXP. The SDK includes all the above prerequisites necessary
+to bring up a DPAA board.
+
+The following dependencies are not part of DPDK and must be installed
+separately:
+
+- **NXP Linux SDK**
+
+  The NXP Linux software development kit (SDK) includes support for the
+  family of QorIQ® ARM-architecture-based system-on-chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+
+.. _extra_scripts:
+
+- **DPDK Extra Scripts**
+
+  DPAA based resources can be configured easily with the help of ready scripts
+  as provided in the DPDK Extra repository.
+
+  `DPDK Extras Scripts <https://github.com/qoriq-open-source/dpdk-extras>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- Supported architectures: **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
+  to setup the basic DPDK environment.
+
+.. note::
+
+   Some parts of the dpaa bus code (the qbman and fman libraries) are
+   dual licensed (BSD & GPLv2).
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``n``)
+
+  Toggle compilation of the ``librte_bus_dpaa`` driver. By default it is
+  enabled only in the defconfig_arm64-dpaa-* configs.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``n``)
+
+  Toggle compilation of the ``librte_pmd_dpaa`` driver. By default it is
+  enabled only in the defconfig_arm64-dpaa-* configs.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT`` (default ``n``)
+
+  Toggle display of initialization related messages.
+
+- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (default ``dpaa``)
+
+  This is not a DPAA-specific configuration - it is a generic RTE config.
+  For optimal performance and hardware utilization, it is expected that the
+  DPAA Mempool driver is used for mempools. For that, this configuration
+  needs to be enabled.
+
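+For example, a DPAA-enabled build configuration (a sketch; these are the
+values set by the defconfig_arm64-dpaa-* targets) would contain:
+
+.. code-block:: console
+
+   CONFIG_RTE_LIBRTE_DPAA_BUS=y
+   CONFIG_RTE_LIBRTE_DPAA_PMD=y
+   CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
+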
+Environment Variables
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA drivers use the following environment variables to configure their
+state during application initialization:
+
+- ``DPAA_NUM_RX_QUEUES`` (default 1)
+
+  This defines the number of Rx queues configured per port for an
+  application. On Rx, the hardware distributes incoming packets across this
+  many queues. If the application is configured to use fewer queues than
+  configured here, packets may be lost because of the distribution.
+
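+  For example, to distribute Rx traffic across four queues per port (an
+  illustrative value; any count supported by the FMC configuration works):
+
+  .. code-block:: console
+
+     export DPAA_NUM_RX_QUEUES=4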
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+#. Running testpmd:
+
+   Follow instructions available in the document
+   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+   to run testpmd.
+
+   Example output:
+
+   .. code-block:: console
+
+      ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
+        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
+
+      .....
+      EAL: Registered [pci] bus.
+      EAL: Registered [dpaa] bus.
+      EAL: Detected 4 lcore(s)
+      .....
+      EAL: dpaa: Bus scan completed
+      .....
+      Configuring Port 0 (socket 0)
+      Port 0: 00:00:00:00:00:01
+      Configuring Port 1 (socket 0)
+      Port 1: 00:00:00:00:00:02
+      .....
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Port 1 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+Limitations
+-----------
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+DPAA drivers for DPDK can only work on NXP SoCs as listed in the
+``Supported DPAA SoCs``.
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA SoC family supports a maximum packet length of 10240 bytes (jumbo
+frames). This value is fixed and cannot be changed. So, even when the
+``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
+lower than 10240, frames up to 10240 bytes can still reach the host interface.
+
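+As an illustration, the following configuration (a sketch) does not prevent
+larger frames from arriving:
+
+.. code-block:: c
+
+   struct rte_eth_conf port_conf = { 0 };
+
+   /* Frames of up to 10240 bytes may still reach the host interface */
+   port_conf.rxmode.max_rx_pkt_len = ETHER_MAX_LEN;
+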
+Multiprocess Support
+~~~~~~~~~~~~~~~~~~~~
+
+The current version of the DPAA driver doesn't support multi-process
+applications where I/O is performed using secondary processes. This feature
+will be implemented in subsequent versions.
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
new file mode 100644
index 0000000..9e8befc
--- /dev/null
+++ b/doc/guides/nics/features/dpaa.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'dpaa' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+ARMv8                = Y
+Usage doc            = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 36f4f3f..4115141 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -43,6 +43,7 @@ Network Interface Controller Drivers
     bnx2x
     bnxt
     cxgbe
+    dpaa
     dpaa2
     e1000em
     ena
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 18/40] bus/dpaa: add DPAA mempool logging macros
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (16 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 19/40] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
                       ` (22 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c      |  5 +++++
 drivers/bus/dpaa/rte_dpaa_logs.h | 28 ++++++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 8017df3..dc2b3ad 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -70,6 +70,7 @@
 #include <netcfg.h>
 
 int dpaa_logtype_bus;
+int dpaa_logtype_mempool;
 
 struct rte_dpaa_bus rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
@@ -452,4 +453,8 @@ dpaa_init_log(void)
 	dpaa_logtype_bus = rte_log_register("bus.dpaa");
 	if (dpaa_logtype_bus >= 0)
 		rte_log_set_level(dpaa_logtype_bus, RTE_LOG_NOTICE);
+
+	dpaa_logtype_mempool = rte_log_register("mempool.dpaa");
+	if (dpaa_logtype_mempool >= 0)
+		rte_log_set_level(dpaa_logtype_mempool, RTE_LOG_NOTICE);
 }
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index 3ca3f9b..253962f 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -36,6 +36,7 @@
 #include <rte_log.h>
 
 extern int dpaa_logtype_bus;
+extern int dpaa_logtype_mempool;
 
 #define DPAA_BUS_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -63,4 +64,31 @@ extern int dpaa_logtype_bus;
 #define DPAA_BUS_WARN(fmt, args...) \
 	DPAA_BUS_LOG(WARNING, fmt, ## args)
 
+/* Mempool related logs */
+
+#define DPAA_MEMPOOL_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_mempool, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define MEMPOOL_INIT_FUNC_TRACE() DPAA_MEMPOOL_LOG(DEBUG, " >>")
+
+/* DEBUG is conditional to compiled configuration */
+#ifdef RTE_LIBRTE_DPAA_MEMPOOL_DEBUG
+#define DPAA_MEMPOOL_DEBUG(fmt, args...) \
+	DPAA_MEMPOOL_LOG(DEBUG, fmt, ## args)
+
+#else /* RTE_LIBRTE_DPAA_MEMPOOL_DEBUG */
+#define DPAA_MEMPOOL_DEBUG(fmt, args...) do { } while (0)
+#endif /* RTE_LIBRTE_DPAA_MEMPOOL_DEBUG */
+
+/* WARNING, ERR and INFO are unconditional */
+#define DPAA_MEMPOOL_ERR(fmt, args...) \
+	DPAA_MEMPOOL_LOG(ERR, fmt, ## args)
+
+#define DPAA_MEMPOOL_INFO(fmt, args...) \
+	DPAA_MEMPOOL_LOG(INFO, fmt, ## args)
+
+#define DPAA_MEMPOOL_WARN(fmt, args...) \
+	DPAA_MEMPOOL_LOG(WARNING, fmt, ## args)
+
 #endif /* _DPAA_LOGS_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 19/40] mempool/dpaa: add support for NXP DPAA Mempool
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (17 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 20/40] drivers: enable compilation of DPAA Mempool driver Shreyansh Jain
                       ` (21 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This Mempool driver works with DPAA BMan hardware block. This block
manages data buffers in memory, and provides efficient interface with
other hardware and software components for buffer requests.

This patch adds support for BMan. Compilation would be enabled in
subsequent patches.
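
For reference, a minimal usage sketch from the application side
(illustrative only; the pool name and sizes are placeholder values):

	#include <rte_mempool.h>
	#include <rte_lcore.h>

	/* Create an empty mempool and back it with the DPAA (BMan) ops,
	 * then populate it. Ops must be set before population.
	 */
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("pkt_pool", 4096, 2048, 0, 0,
				      rte_socket_id(), 0);
	if (mp != NULL) {
		rte_mempool_set_ops_byname(mp, "dpaa", NULL);
		rte_mempool_populate_default(mp);
	}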

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/mempool/dpaa/Makefile                     |  64 +++++
 drivers/mempool/dpaa/dpaa_mempool.c               | 276 ++++++++++++++++++++++
 drivers/mempool/dpaa/dpaa_mempool.h               |  77 ++++++
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |   6 +
 4 files changed, 423 insertions(+)
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map

diff --git a/drivers/mempool/dpaa/Makefile b/drivers/mempool/dpaa/Makefile
new file mode 100644
index 0000000..4b3be6b
--- /dev/null
+++ b/drivers/mempool/dpaa/Makefile
@@ -0,0 +1,64 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
+CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
+
+# versioning export map
+EXPORT_MAP := rte_mempool_dpaa_version.map
+
+# Library version
+LIBABIVER := 1
+
+# all sources are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa_mempool.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
new file mode 100644
index 0000000..33276a4
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -0,0 +1,276 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <dpaa_mempool.h>
+
+struct dpaa_bp_info rte_dpaa_bpid_info[DPAA_MAX_BPOOLS];
+
+static int
+dpaa_mbuf_create_pool(struct rte_mempool *mp)
+{
+	struct bman_pool *bp;
+	struct bm_buffer bufs[8];
+	struct dpaa_bp_info *bp_info;
+	uint8_t bpid;
+	int num_bufs = 0, ret = 0;
+	struct bman_pool_params params = {
+		.flags = BMAN_POOL_FLAG_DYNAMIC_BPID
+	};
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	bp = bman_new_pool(&params);
+	if (!bp) {
+		DPAA_MEMPOOL_ERR("bman_new_pool() failed");
+		return -ENODEV;
+	}
+	bpid = bman_get_params(bp)->bpid;
+
+	/* Drain the pool of anything already in it. */
+	do {
+		/* Acquire is all-or-nothing, so we drain in 8s,
+		 * then in 1s for the remainder.
+		 */
+		if (ret != 1)
+			ret = bman_acquire(bp, bufs, 8, 0);
+		if (ret < 8)
+			ret = bman_acquire(bp, bufs, 1, 0);
+		if (ret > 0)
+			num_bufs += ret;
+	} while (ret > 0);
+	if (num_bufs)
+		DPAA_MEMPOOL_WARN("drained %u bufs from BPID %d",
+				  num_bufs, bpid);
+
+	rte_dpaa_bpid_info[bpid].mp = mp;
+	rte_dpaa_bpid_info[bpid].bpid = bpid;
+	rte_dpaa_bpid_info[bpid].size = mp->elt_size;
+	rte_dpaa_bpid_info[bpid].bp = bp;
+	rte_dpaa_bpid_info[bpid].meta_data_size =
+		sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp);
+	rte_dpaa_bpid_info[bpid].dpaa_ops_index = mp->ops_index;
+
+	bp_info = rte_malloc(NULL,
+			     sizeof(struct dpaa_bp_info),
+			     RTE_CACHE_LINE_SIZE);
+	if (!bp_info) {
+		DPAA_MEMPOOL_ERR("Unable to allocate pool info");
+		bman_free_pool(bp);
+		return -ENOMEM;
+	}
+	rte_memcpy(bp_info, (void *)&rte_dpaa_bpid_info[bpid],
+		   sizeof(struct dpaa_bp_info));
+	mp->pool_data = (void *)bp_info;
+
+	DPAA_MEMPOOL_INFO("BMAN pool created for bpid =%d", bpid);
+	return 0;
+}
+
+static void
+dpaa_mbuf_free_pool(struct rte_mempool *mp)
+{
+	struct dpaa_bp_info *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	bman_free_pool(bp_info->bp);
+	DPAA_MEMPOOL_INFO("BMAN pool freed for bpid =%d", bp_info->bpid);
+	rte_free(mp->pool_data);
+	mp->pool_data = NULL;
+}
+
+static void
+dpaa_buf_free(struct dpaa_bp_info *bp_info, uint64_t addr)
+{
+	struct bm_buffer buf;
+	int ret;
+
+	DPAA_MEMPOOL_DEBUG("Free 0x%lx to bpid: %d", addr, bp_info->bpid);
+
+	bm_buffer_set64(&buf, addr);
+retry:
+	ret = bman_release(bp_info->bp, &buf, 1, 0);
+	if (ret) {
+		DPAA_MEMPOOL_DEBUG("BMAN busy. Retrying...");
+		cpu_spin(CPU_SPIN_BACKOFF_CYCLES);
+		goto retry;
+	}
+}
+
+static int
+dpaa_mbuf_free_bulk(struct rte_mempool *pool,
+		    void *const *obj_table,
+		    unsigned int n)
+{
+	struct dpaa_bp_info *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+	int ret;
+	unsigned int i = 0;
+
+	DPAA_MEMPOOL_DEBUG(" Request to free %d buffers in bpid = %d",
+			   n, bp_info->bpid);
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
+				 ret);
+		return 0;
+	}
+
+	while (i < n) {
+		dpaa_buf_free(bp_info,
+			      (uint64_t)rte_mempool_virt2phy(pool,
+			      obj_table[i]) + bp_info->meta_data_size);
+		i = i + 1;
+	}
+
+	DPAA_MEMPOOL_DEBUG(" freed %d buffers in bpid =%d", n, bp_info->bpid);
+
+	return 0;
+}
+
+static int
+dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
+		     void **obj_table,
+		     unsigned int count)
+{
+	struct rte_mbuf **m = (struct rte_mbuf **)obj_table;
+	struct bm_buffer bufs[DPAA_MBUF_MAX_ACQ_REL];
+	struct dpaa_bp_info *bp_info;
+	void *bufaddr;
+	int i, ret;
+	unsigned int n = 0;
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+
+	DPAA_MEMPOOL_DEBUG(" Request to alloc %d buffers in bpid = %d",
+			   count, bp_info->bpid);
+
+	if (unlikely(count >= (RTE_MEMPOOL_CACHE_MAX_SIZE * 2))) {
+		DPAA_MEMPOOL_ERR("Unable to allocate requested (%u) buffers",
+				 count);
+		return -1;
+	}
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
+				 ret);
+		return 0;
+	}
+
+	while (n < count) {
+		/* Acquire is all-or-nothing, so we acquire in chunks of
+		 * DPAA_MBUF_MAX_ACQ_REL (8), then the remainder.
+		 */
+		if ((count - n) > DPAA_MBUF_MAX_ACQ_REL) {
+			ret = bman_acquire(bp_info->bp, bufs,
+					   DPAA_MBUF_MAX_ACQ_REL, 0);
+		} else {
+			ret = bman_acquire(bp_info->bp, bufs, count - n, 0);
+		}
+		/* If fewer buffers than requested are available in the
+		 * pool, bman_acquire() fails (returns <= 0).
+		 */
+		if (ret <= 0) {
+			DPAA_MEMPOOL_DEBUG("Buffer acquire failed with"
+					   " err code: %d", ret);
+			/* The API expects the exact number of requested
+			 * buffers. Release all buffers allocated so far.
+			 */
+			dpaa_mbuf_free_bulk(pool, obj_table, n);
+			return -ENOBUFS;
+		}
+		/* assigning mbuf from the acquired objects */
+		for (i = 0; (i < ret) && bufs[i].addr; i++) {
+			/* TODO-errata - observed that bufs may be null,
+			 * i.e. the first buffer is valid while the remaining
+			 * 6 buffers may be null.
+			 */
+			bufaddr = (void *)rte_dpaa_mem_ptov(bufs[i].addr);
+			m[n] = (struct rte_mbuf *)((char *)bufaddr
+						- bp_info->meta_data_size);
+			DPAA_MEMPOOL_DEBUG("Acquired %p address %p from BMAN",
+					   (void *)bufaddr, (void *)m[n]);
+			n++;
+		}
+	}
+
+	DPAA_MEMPOOL_DEBUG(" allocated %d buffers from bpid =%d",
+			   n, bp_info->bpid);
+	return 0;
+}
+
+static unsigned int
+dpaa_mbuf_get_count(const struct rte_mempool *mp)
+{
+	struct dpaa_bp_info *bp_info;
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	if (!mp || !mp->pool_data) {
+		DPAA_MEMPOOL_ERR("Invalid mempool provided\n");
+		return 0;
+	}
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	return bman_query_free_buffers(bp_info->bp);
+}
+
+struct rte_mempool_ops dpaa_mpool_ops = {
+	.name = "dpaa",
+	.alloc = dpaa_mbuf_create_pool,
+	.free = dpaa_mbuf_free_pool,
+	.enqueue = dpaa_mbuf_free_bulk,
+	.dequeue = dpaa_mbuf_alloc_bulk,
+	.get_count = dpaa_mbuf_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
diff --git a/drivers/mempool/dpaa/dpaa_mempool.h b/drivers/mempool/dpaa/dpaa_mempool.h
new file mode 100644
index 0000000..de33c0c
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.h
@@ -0,0 +1,77 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_MEMPOOL_H__
+#define __DPAA_MEMPOOL_H__
+
+/* System headers */
+#include <stdio.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <unistd.h>
+
+#include <rte_mempool.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <fsl_usd.h>
+#include <fsl_bman.h>
+
+#define CPU_SPIN_BACKOFF_CYCLES               512
+
+/* total number of bpools on SoC */
+#define DPAA_MAX_BPOOLS	256
+
+/* Maximum release/acquire from BMAN */
+#define DPAA_MBUF_MAX_ACQ_REL  8
+
+struct dpaa_bp_info {
+	struct rte_mempool *mp;
+	struct bman_pool *bp;
+	uint32_t bpid;
+	uint32_t size;
+	uint32_t meta_data_size;
+	int32_t dpaa_ops_index;
+};
+
+#define DPAA_MEMPOOL_TO_POOL_INFO(__mp) \
+	((struct dpaa_bp_info *)(__mp)->pool_data)
+
+#define DPAA_MEMPOOL_TO_BPID(__mp) \
+	(((struct dpaa_bp_info *)(__mp)->pool_data)->bpid)
+
+extern struct dpaa_bp_info rte_dpaa_bpid_info[DPAA_MAX_BPOOLS];
+
+#define DPAA_BPID_TO_POOL_INFO(__bpid) (&rte_dpaa_bpid_info[__bpid])
+
+#endif
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
new file mode 100644
index 0000000..93ea216
--- /dev/null
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -0,0 +1,6 @@
+DPDK_17.11 {
+	global:
+
+	rte_dpaa_bpid_info;
+
+	local: *;
+};
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 20/40] drivers: enable compilation of DPAA Mempool driver
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (18 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 19/40] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-09-21 21:55       ` Thomas Monjalon
  2017-08-23 14:11     ` [PATCH v3 21/40] maintainers: claim ownership " Shreyansh Jain
                       ` (20 subsequent siblings)
  40 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch adds the configuration necessary for compiling the DPAA
Mempool driver into the DPAA specific config file.
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=dpaa is also set so that
applications use the DPAA mempool by default.
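
As an illustration (not part of this patch; the pool name and sizes
below are arbitrary), applications need no code change to pick this
up: a regular pktmbuf pool is transparently backed by BMan once the
default ops string is "dpaa":

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    create_default_pool(void)
    {
            /* With CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa", this
             * pool is created on the "dpaa" ops, i.e. backed by a
             * hardware BMan buffer pool.
             */
            return rte_pktmbuf_pool_create("pkt_pool", 8192, 256, 0,
                                           RTE_MBUF_DEFAULT_BUF_SIZE,
                                           rte_socket_id());
    }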

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       | 1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 5 +++++
 drivers/mempool/Makefile                 | 2 ++
 3 files changed, 8 insertions(+)

diff --git a/config/common_base b/config/common_base
index 2bb2269..e4a9d6d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -305,6 +305,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 110042c..d91249f 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -43,3 +43,8 @@ CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
+
+# NXP DPAA Mempool
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index efd55f2..bfc5f00 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -32,6 +32,8 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mempool librte_ring
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 21/40] maintainers: claim ownership of DPAA Mempool driver
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (19 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 20/40] drivers: enable compilation of DPAA Mempool driver Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-09-21 21:56       ` Thomas Monjalon
  2017-08-23 14:11     ` [PATCH v3 22/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
                       ` (19 subsequent siblings)
  40 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 10646a4..74b7aba 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -409,6 +409,7 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: drivers/mempool/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 22/40] bus/dpaa: add DPAA PMD logging macros
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (20 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 21/40] maintainers: claim ownership " Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 23/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
                       ` (18 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c      |  5 +++++
 drivers/bus/dpaa/rte_dpaa_logs.h | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index dc2b3ad..7ae5bfa 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -71,6 +71,7 @@
 
 int dpaa_logtype_bus;
 int dpaa_logtype_mempool;
+int dpaa_logtype_pmd;
 
 struct rte_dpaa_bus rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
@@ -457,4 +458,8 @@ dpaa_init_log(void)
 	dpaa_logtype_mempool = rte_log_register("mempool.dpaa");
 	if (dpaa_logtype_mempool >= 0)
 		rte_log_set_level(dpaa_logtype_mempool, RTE_LOG_NOTICE);
+
+	dpaa_logtype_pmd = rte_log_register("pmd.dpaa");
+	if (dpaa_logtype_pmd >= 0)
+		rte_log_set_level(dpaa_logtype_pmd, RTE_LOG_NOTICE);
 }
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index 253962f..8442e0e 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -37,6 +37,7 @@
 
 extern int dpaa_logtype_bus;
 extern int dpaa_logtype_mempool;
+extern int dpaa_logtype_pmd;
 
 #define DPAA_BUS_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -91,4 +92,39 @@ extern int dpaa_logtype_mempool;
 #define DPAA_MEMPOOL_WARN(fmt, args...) \
 	DPAA_MEMPOOL_LOG(WARNING, fmt, ## args)
 
+/* PMD related logs */
+
+#define DPAA_PMD_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_pmd, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() DPAA_PMD_LOG(DEBUG, " >>")
+
+/* DEBUG is conditional on the compile-time configuration */
+#ifdef RTE_LIBRTE_DPAA_PMD_DEBUG
+#define DPAA_PMD_DEBUG(fmt, args...) \
+	DPAA_PMD_LOG(DEBUG, fmt, ## args)
+
+#else /* RTE_LIBRTE_DPAA_PMD_DEBUG */
+#define DPAA_PMD_DEBUG(fmt, args...) do { } while (0)
+#endif /* RTE_LIBRTE_DPAA_PMD_DEBUG */
+
+/* WARNING, ERR and INFO are unconditional */
+#define DPAA_PMD_ERR(fmt, args...) \
+	DPAA_PMD_LOG(ERR, fmt, ## args)
+
+#define DPAA_PMD_INFO(fmt, args...) \
+	DPAA_PMD_LOG(INFO, fmt, ## args)
+
+#define DPAA_PMD_WARN(fmt, args...) \
+	DPAA_PMD_LOG(WARNING, fmt, ## args)
+
+/* Datapath (DP) logs, compiled out when their level is below the
+ * compile-time log level threshold
+ */
+#define DPAA_RX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+#define DPAA_TX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+#define DPAA_DP_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+
 #endif /* _DPAA_LOGS_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 23/40] net/dpaa: add NXP DPAA PMD driver skeleton
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (21 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 22/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 24/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
                       ` (17 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

A skeleton PMD which is probed after the bus device scan. At this
stage it does not yet identify the underlying device.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                               |   1 +
 drivers/net/dpaa/Makefile                 |  63 ++++++++
 drivers/net/dpaa/dpaa_ethdev.c            | 256 ++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h            | 130 +++++++++++++++
 drivers/net/dpaa/rte_pmd_dpaa_version.map |   4 +
 5 files changed, 454 insertions(+)
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 74b7aba..48afbfc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -410,6 +410,7 @@ M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
 F: drivers/mempool/dpaa/
+F: drivers/net/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
new file mode 100644
index 0000000..7ecd5be
--- /dev/null
+++ b/drivers/net/dpaa/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_SDK_DPAA=$(RTE_SDK)/drivers/net/dpaa
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+CFLAGS += -I$(RTE_SDK_DPAA)/
+CFLAGS += -I$(RTE_SDK_DPAA)/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
+
+EXPORT_MAP := rte_pmd_dpaa_version.map
+
+LIBABIVER := 1
+
+# Interfaces with DPDK
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
new file mode 100644
index 0000000..4543dfc
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -0,0 +1,256 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <dpaa_ethdev.h>
+
+/* Keep track of whether QMAN and BMAN have been globally initialized */
+static int is_global_init;
+
+static int
+dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	/* Change tx callback to the real one */
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = NULL;
+}
+
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static struct eth_dev_ops dpaa_devops = {
+	.dev_configure		  = dpaa_eth_dev_configure,
+	.dev_start		  = dpaa_eth_dev_start,
+	.dev_stop		  = dpaa_eth_dev_stop,
+	.dev_close		  = dpaa_eth_dev_close,
+};
+
+/* Initialise a network interface */
+static int
+dpaa_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int dev_id;
+	struct rte_dpaa_device *dpaa_device;
+	struct dpaa_if *dpaa_intf;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
+	dev_id = dpaa_device->id.dev_id;
+	dpaa_intf = eth_dev->data->dev_private;
+
+	dpaa_intf->name = dpaa_device->name;
+
+	dpaa_intf->ifid = dev_id;
+
+	eth_dev->dev_ops = &dpaa_devops;
+
+	return 0;
+}
+
+static int
+dpaa_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (!dpaa_intf) {
+		DPAA_PMD_WARN("Already closed or not started");
+		return -1;
+	}
+
+	dpaa_eth_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static int
+rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv,
+	       struct rte_dpaa_device *dpaa_dev)
+{
+	int diag;
+	int ret;
+	struct rte_eth_dev *eth_dev;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* In case of secondary process, the device is already configured
+	 * and no further action is required, except portal initialization
+	 * and verifying secondary attachment to port name.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		eth_dev = rte_eth_dev_attach_secondary(dpaa_dev->name);
+		if (!eth_dev)
+			return -ENOMEM;
+		return 0;
+	}
+
+	if (!is_global_init) {
+		/* One time load of Qman/Bman drivers */
+		ret = qman_global_init();
+		if (ret) {
+			DPAA_PMD_ERR("QMAN initialization failed: %d",
+				     ret);
+			return ret;
+		}
+		ret = bman_global_init();
+		if (ret) {
+			DPAA_PMD_ERR("BMAN initialization failed: %d",
+				     ret);
+			return ret;
+		}
+
+		is_global_init = 1;
+	}
+
+	ret = rte_dpaa_portal_init((void *)1);
+	if (ret) {
+		DPAA_PMD_ERR("Unable to initialize portal");
+		return ret;
+	}
+
+	eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
+	if (eth_dev == NULL)
+		return -ENOMEM;
+
+	eth_dev->data->dev_private = rte_zmalloc(
+					"ethdev private structure",
+					sizeof(struct dpaa_if),
+					RTE_CACHE_LINE_SIZE);
+	if (!eth_dev->data->dev_private) {
+		DPAA_PMD_ERR("Cannot allocate memzone for port data");
+		rte_eth_dev_release_port(eth_dev);
+		return -ENOMEM;
+	}
+
+	eth_dev->device = &dpaa_dev->device;
+	eth_dev->device->driver = &dpaa_drv->driver;
+	dpaa_dev->eth_dev = eth_dev;
+
+	/* Invoke PMD device initialization function */
+	diag = dpaa_dev_init(eth_dev);
+	if (diag == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+	return diag;
+}
+
+static int
+rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
+{
+	struct rte_eth_dev *eth_dev;
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev = dpaa_dev->eth_dev;
+	dpaa_dev_uninit(eth_dev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+
+	return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_pmd = {
+	.drv_type = FSL_DPAA_ETH,
+	.probe = rte_dpaa_probe,
+	.remove = rte_dpaa_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
new file mode 100644
index 0000000..c3eb804
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -0,0 +1,130 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2014-2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_ETHDEV_H__
+#define __DPAA_ETHDEV_H__
+
+/* System headers */
+#include <stdbool.h>
+#include <rte_ethdev.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
+#define DPAA_MBUF_HW_ANNOTATION		64
+#define DPAA_FD_PTA_SIZE		64
+
+#if (DPAA_MBUF_HW_ANNOTATION + DPAA_FD_PTA_SIZE) > RTE_PKTMBUF_HEADROOM
+#error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM"
+#endif
+
+/* we will re-use the HEADROOM for annotation in RX */
+#define DPAA_HW_BUF_RESERVE	0
+#define DPAA_PACKET_LAYOUT_ALIGN	64
+
+/* Alignment to use for cpu-local structs to avoid coherency problems. */
+#define MAX_CACHELINE			64
+
+#define DPAA_MIN_RX_BUF_SIZE 512
+#define DPAA_MAX_RX_PKT_LEN  10240
+
+/* RX queue tail drop threshold
+ * currently considering 32 KB packets.
+ */
+#define CONG_THRESHOLD_RX_Q  (32 * 1024)
+
+/* max MAC filters for memac (8), including the primary MAC address */
+#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
+
+/* Maximum number of slots available in the TX ring */
+#define MAX_TX_RING_SLOTS	8
+
+/* PCD frame queues */
+#define DPAA_PCD_FQID_START		0x400
+#define DPAA_PCD_FQID_MULTIPLIER	0x100
+#define DPAA_DEFAULT_NUM_PCD_QUEUES	1
+
+#define DPAA_IF_TX_PRIORITY		3
+#define DPAA_IF_RX_PRIORITY		4
+#define DPAA_IF_DEBUG_PRIORITY		7
+
+#define DPAA_IF_RX_ANNOTATION_STASH	1
+#define DPAA_IF_RX_DATA_STASH		1
+#define DPAA_IF_RX_CONTEXT_STASH		0
+
+/* Each "debug" FQ is represented by one of these */
+#define DPAA_DEBUG_FQ_RX_ERROR   0
+#define DPAA_DEBUG_FQ_TX_ERROR   1
+
+#define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
+		PKT_TX_IP_CKSUM |                \
+		PKT_TX_TCP_CKSUM |               \
+		PKT_TX_UDP_CKSUM)
+
+/* DPAA Frame descriptor macros */
+
+#define DPAA_FD_CMD_FCO			0x80000000
+/**< Frame queue Context Override */
+#define DPAA_FD_CMD_RPD			0x40000000
+/**< Read Prepended Data */
+#define DPAA_FD_CMD_UPD			0x20000000
+/**< Update Prepended Data */
+#define DPAA_FD_CMD_DTC			0x10000000
+/**< Do IP/TCP/UDP Checksum */
+#define DPAA_FD_CMD_DCL4C		0x10000000
+/**< Didn't calculate L4 Checksum */
+#define DPAA_FD_CMD_CFQ			0x00ffffff
+/**< Confirmation Frame Queue */
+
+/* Configuration variables exported from DPAA bus */
+extern struct netcfg_info *dpaa_netcfg;
+
+/* Each network interface is represented by one of these */
+struct dpaa_if {
+	int valid;
+	char *name;
+	const struct fm_eth_port_cfg *cfg;
+	struct qman_fq *rx_queues;
+	struct qman_fq *tx_queues;
+	struct qman_fq debug_queues[2];
+	uint16_t nb_rx_queues;
+	uint16_t nb_tx_queues;
+	uint32_t ifid;
+	struct fman_if *fif;
+	struct dpaa_bp_info *bp_info;
+	struct rte_eth_fc_conf *fc_conf;
+};
+
+#endif
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
new file mode 100644
index 0000000..a70bd19
--- /dev/null
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -0,0 +1,4 @@
+DPDK_17.11 {
+
+	local: *;
+};
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 24/40] config: enable NXP DPAA PMD compilation
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (22 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 23/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-09-21 22:03       ` Thomas Monjalon
  2017-08-23 14:11     ` [PATCH v3 25/40] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
                       ` (16 subsequent siblings)
  40 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       |  1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 12 ++++++++++++
 drivers/net/Makefile                     |  2 ++
 mk/rte.app.mk                            |  5 +++++
 4 files changed, 20 insertions(+)

diff --git a/config/common_base b/config/common_base
index e4a9d6d..a780284 100644
--- a/config/common_base
+++ b/config/common_base
@@ -306,6 +306,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
+CONFIG_RTE_LIBRTE_DPAA_PMD=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index d91249f..a349cec 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -38,6 +38,14 @@ CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
 CONFIG_RTE_LIBRTE_VHOST_NUMA=n
 CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
 
+#
+# Compile Environment Abstraction Layer
+#
+CONFIG_RTE_MAX_LCORE=4
+CONFIG_RTE_MAX_NUMA_NODES=1
+CONFIG_RTE_CACHE_LINE_SIZE=64
+CONFIG_RTE_PKTMBUF_HEADROOM=128
+
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
@@ -48,3 +56,7 @@ CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL_DEBUG=n
 CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
+
+# Compile NXP DPAA PMD
+CONFIG_RTE_LIBRTE_DPAA_PMD=y
+CONFIG_RTE_LIBRTE_DPAA_PMD_DEBUG=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index d33c959..2bd42f8 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -51,6 +51,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DEPDIRS-bonding = $(core-libs) librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
 DEPDIRS-cxgbe = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += e1000
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index c25fdd9..9c5a171 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -116,6 +116,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD)      += -lrte_pmd_cxgbe
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_pmd_dpaa
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_pmd_dpaa2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_E1000_PMD)      += -lrte_pmd_e1000
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ENA_PMD)        += -lrte_pmd_ena
@@ -182,6 +183,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_bus_fslmc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_mempool_dpaa2
 endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+endif
+
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
 
 _LDLIBS-y += --no-whole-archive
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v3 25/40] net/dpaa: add support for Tx and Rx queue setup
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (23 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 24/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-08-23 14:11     ` [PATCH v3 26/40] net/dpaa: add support for MTU update Shreyansh Jain
                       ` (15 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/Makefile      |   4 +
 drivers/net/dpaa/dpaa_ethdev.c | 290 +++++++++++++++++++++++++++++++-
 drivers/net/dpaa/dpaa_rxtx.c   | 370 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h   |  61 +++++++
 mk/rte.app.mk                  |   1 +
 5 files changed, 723 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h

diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index 7ecd5be..9b8debc 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -43,11 +43,13 @@ else
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 endif
+CFLAGS +=-Wno-pointer-arith
 
 CFLAGS += -I$(RTE_SDK_DPAA)/
 CFLAGS += -I$(RTE_SDK_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
 
@@ -57,7 +59,9 @@ LIBABIVER := 1
 
 # Interfaces with DPDK
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
 
 LDLIBS += -lrte_bus_dpaa
+LDLIBS += -lrte_mempool_dpaa
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4543dfc..ab19b2e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -62,8 +62,15 @@
 
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
+#include <dpaa_mempool.h>
 
 #include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <fsl_fman.h>
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -78,20 +85,104 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
 	PMD_INIT_FUNC_TRACE();
 
 	/* Change tx callback to the real one */
-	dev->tx_pkt_burst = NULL;
+	dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	fman_if_enable_rx(dpaa_intf->fif);
 
 	return 0;
 }
 
 static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = NULL;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_disable_rx(dpaa_intf->fif);
+	dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
 }
 
-static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+}
+
+static
+int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+			    unsigned int socket_id __rte_unused,
+			    const struct rte_eth_rxconf *rx_conf __rte_unused,
+			    struct rte_mempool *mp)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	DPAA_PMD_INFO("Rx queue setup for queue index: %d", queue_idx);
+
+	if (!dpaa_intf->bp_info || dpaa_intf->bp_info->mp != mp) {
+		struct fman_if_ic_params icp;
+		uint32_t fd_offset;
+		uint32_t bp_size;
+
+		if (!mp->pool_data) {
+			DPAA_PMD_ERR("Not an offloaded buffer pool!");
+			return -1;
+		}
+		dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+		memset(&icp, 0, sizeof(icp));
+		/* set the internal context (IC) parameters to their defaults */
+		icp.iciof = DEFAULT_ICIOF;
+		icp.iceof = DEFAULT_RX_ICEOF;
+		icp.icsz = DEFAULT_ICSZ;
+		fman_if_set_ic_params(dpaa_intf->fif, &icp);
+
+		fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
+		fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+
+		/* Buffer pool size should be equal to the dataroom size */
+		bp_size = rte_pktmbuf_data_room_size(mp);
+		fman_if_set_bp(dpaa_intf->fif, mp->size,
+			       dpaa_intf->bp_info->bpid, bp_size);
+		dpaa_intf->valid = 1;
+		DPAA_PMD_INFO("if =%s - fd_offset = %d offset = %d",
+			    dpaa_intf->name, fd_offset,
+			fman_if_get_fdoff(dpaa_intf->fif));
+	}
+	dev->data->rx_queues[queue_idx] = &dpaa_intf->rx_queues[queue_idx];
+
+	return 0;
+}
+
+static
+void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static
+int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+		unsigned int socket_id __rte_unused,
+		const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	DPAA_PMD_INFO("Tx queue setup for queue index: %d", queue_idx);
+	dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+	return 0;
+}
+
+static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 }
@@ -101,15 +192,102 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+
+	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
+	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
+	.rx_queue_release	  = dpaa_eth_rx_queue_release,
+	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 };
 
+/* Initialise an Rx FQ */
+static int dpaa_rx_queue_init(struct qman_fq *fq,
+			      uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		DPAA_PMD_ERR("reserve rx fqid %d failed with ret: %d",
+			     fqid, ret);
+		return -EINVAL;
+	}
+
+	DPAA_PMD_DEBUG("creating rx fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create rx fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTA;
+
+	opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
+			   QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_a.stashing.exclusive = 0;
+	opts.fqd.context_a.stashing.annotation_cl = DPAA_IF_RX_ANNOTATION_STASH;
+	opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+	opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+
+	/*Enable tail drop */
+	opts.we_mask = opts.we_mask | QM_INITFQ_WE_TDTHRESH;
+	opts.fqd.fq_ctrl = opts.fqd.fq_ctrl | QM_FQCTRL_TDE;
+	qm_fqd_taildrop_set(&opts.fqd.td, CONG_THRESHOLD_RX_Q, 1);
+
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init rx fqid %d failed with ret: %d", fqid, ret);
+	return ret;
+}
+
+/* Initialise a Tx FQ */
+static int dpaa_tx_queue_init(struct qman_fq *fq,
+			      struct fman_if *fman_intf)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
+			     QMAN_FQ_FLAG_TO_DCPORTAL, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create tx fq failed with ret: %d", ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTB | QM_INITFQ_WE_CONTEXTA;
+	opts.fqd.dest.channel = fman_intf->tx_channel_id;
+	opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_b = 0;
+	/* no tx-confirmation */
+	opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+	opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+	DPAA_PMD_DEBUG("init tx fq %p, fqid %d", fq, fq->fqid);
+	ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init tx fqid %d failed %d", fq->fqid, ret);
+	return ret;
+}
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
 {
+	int num_cores, num_rx_fqs, fqid;
+	int loop, ret = 0;
 	int dev_id;
 	struct rte_dpaa_device *dpaa_device;
 	struct dpaa_if *dpaa_intf;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+	struct fman_if_bpool *bp, *tmp_bp;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -120,12 +298,104 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
 	dev_id = dpaa_device->id.dev_id;
 	dpaa_intf = eth_dev->data->dev_private;
+	cfg = &dpaa_netcfg->port_cfg[dev_id];
+	fman_intf = cfg->fman_if;
 
 	dpaa_intf->name = dpaa_device->name;
 
+	/* save fman_if & cfg in the interface structure */
+	dpaa_intf->fif = fman_intf;
 	dpaa_intf->ifid = dev_id;
+	dpaa_intf->cfg = cfg;
+
+	/* Initialize Rx FQ's */
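+	/* The number of Rx queues can be overridden at run time through the
+	 * DPAA_NUM_RX_QUEUES environment variable (e.g. DPAA_NUM_RX_QUEUES=4).
+	 */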
+	if (getenv("DPAA_NUM_RX_QUEUES"))
+		num_rx_fqs = atoi(getenv("DPAA_NUM_RX_QUEUES"));
+	else
+		num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
 
+	/* Each device cannot have more than DPAA_PCD_FQID_MULTIPLIER RX
+	 * queues.
+	 */
+	if (num_rx_fqs <= 0 || num_rx_fqs > DPAA_PCD_FQID_MULTIPLIER) {
+		DPAA_PMD_ERR("Invalid number of RX queues\n");
+		return -EINVAL;
+	}
+
+	dpaa_intf->rx_queues = rte_zmalloc(NULL,
+		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+	if (!dpaa_intf->rx_queues)
+		return -ENOMEM;
+	for (loop = 0; loop < num_rx_fqs; loop++) {
+		fqid = DPAA_PCD_FQID_START + dpaa_intf->ifid *
+			DPAA_PCD_FQID_MULTIPLIER + loop;
+		ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop], fqid);
+		if (ret)
+			return ret;
+		dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_rx_queues = num_rx_fqs;
+
+	/* Initialise Tx FQs. Have as many Tx FQ's as number of cores */
+	num_cores = rte_lcore_count();
+	dpaa_intf->tx_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+		num_cores, MAX_CACHELINE);
+	if (!dpaa_intf->tx_queues)
+		return -ENOMEM;
+
+	for (loop = 0; loop < num_cores; loop++) {
+		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
+					 fman_intf);
+		if (ret)
+			return ret;
+		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_tx_queues = num_cores;
+
+	DPAA_PMD_DEBUG("All frame queues created");
+
+	/* reset bpool list, initialize bpool dynamically */
+	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+
+	/* Populate ethdev structure */
 	eth_dev->dev_ops = &dpaa_devops;
+	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+	eth_dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
+		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
+						"store MAC addresses",
+				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
+		return -ENOMEM;
+	}
+
+	/* copy the primary mac address */
+	memcpy(eth_dev->data->mac_addrs[0].addr_bytes,
+		fman_intf->mac_addr.addr_bytes,
+		ETHER_ADDR_LEN);
+
+	RTE_LOG(INFO, PMD, "net: dpaa: %s: %02x:%02x:%02x:%02x:%02x:%02x\n",
+		dpaa_device->name,
+		fman_intf->mac_addr.addr_bytes[0],
+		fman_intf->mac_addr.addr_bytes[1],
+		fman_intf->mac_addr.addr_bytes[2],
+		fman_intf->mac_addr.addr_bytes[3],
+		fman_intf->mac_addr.addr_bytes[4],
+		fman_intf->mac_addr.addr_bytes[5]);
+
+	/* Disable RX mode */
+	fman_if_discard_rx_errors(fman_intf);
+	fman_if_disable_rx(fman_intf);
+	/* Disable promiscuous mode */
+	fman_if_promiscuous_disable(fman_intf);
+	/* Disable multicast */
+	fman_if_reset_mcast_filter_table(fman_intf);
+	/* Reset interface statistics */
+	fman_if_stats_reset(fman_intf);
 
 	return 0;
 }
@@ -147,6 +417,20 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
 
 	dpaa_eth_dev_close(dev);
 
+	/* release configuration memory */
+	if (dpaa_intf->fc_conf)
+		rte_free(dpaa_intf->fc_conf);
+
+	rte_free(dpaa_intf->rx_queues);
+	dpaa_intf->rx_queues = NULL;
+
+	rte_free(dpaa_intf->tx_queues);
+	dpaa_intf->tx_queues = NULL;
+
+	/* free memory for storing MAC addresses */
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
 	dev->dev_ops = NULL;
 	dev->rx_pkt_burst = NULL;
 	dev->tx_pkt_burst = NULL;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
new file mode 100644
index 0000000..80adf9c
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -0,0 +1,370 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <limits.h>
+#include <sched.h>
+#include <pthread.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+
+#include "dpaa_ethdev.h"
+#include "dpaa_rxtx.h"
+#include <rte_dpaa_bus.h>
+#include <dpaa_mempool.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
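+/* Build a contiguous-format frame descriptor (FD) from an mbuf: format,
+ * data offset, length, buffer physical address and the bpid to which
+ * BMAN should release the buffer.
+ */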
+#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
+	do { \
+		(_fd)->cmd = 0; \
+		(_fd)->opaque_addr = 0; \
+		(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
+		(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
+		(_fd)->opaque |= (_mbuf)->pkt_len; \
+		(_fd)->addr = (_mbuf)->buf_physaddr; \
+		(_fd)->bpid = _bpid; \
+	} while (0)
+
+static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
+							uint32_t ifid)
+{
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *mbuf;
+	void *ptr;
+	uint16_t offset =
+		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
+	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
+
+	DPAA_RX_LOG(DEBUG, " FD--->MBUF");
+
+	/* Ignoring case when format != qm_fd_contig */
+	ptr = rte_dpaa_mem_ptov(fd->addr);
+	/* Ignoring the case when ptr would be NULL; that is only possible
+	 * in case of a corrupted packet.
+	 */
+
+	mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+	/* Prefetch the Parse results and packet data to L1 */
+	rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
+	rte_prefetch0((void *)((uint8_t *)ptr + offset));
+
+	mbuf->data_off = offset;
+	mbuf->data_len = length;
+	mbuf->pkt_len = length;
+
+	mbuf->port = ifid;
+	mbuf->nb_segs = 1;
+	mbuf->ol_flags = 0;
+	mbuf->next = NULL;
+	rte_mbuf_refcnt_set(mbuf, 1);
+
+	return mbuf;
+}
+
+uint16_t dpaa_eth_queue_rx(void *q,
+			   struct rte_mbuf **bufs,
+			   uint16_t nb_bufs)
+{
+	struct qman_fq *fq = q;
+	struct qm_dqrr_entry *dq;
+	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_PMD_ERR("Failure in affining portal");
+		return 0;
+	}
+
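+	/* Issue a volatile dequeue command (VDQCR) for at most
+	 * DPAA_MAX_DEQUEUE_NUM_FRAMES frames, then drain the portal's
+	 * DQRR ring below until the command completes.
+	 */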
+	ret = qman_set_vdq(fq, (nb_bufs > DPAA_MAX_DEQUEUE_NUM_FRAMES) ?
+				DPAA_MAX_DEQUEUE_NUM_FRAMES : nb_bufs);
+	if (ret)
+		return 0;
+
+	do {
+		dq = qman_dequeue(fq);
+		if (!dq)
+			continue;
+		bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
+		qman_dqrr_consume(fq, dq);
+	} while (fq->flags & QMAN_FQ_STATE_VDQCR);
+
+	return num_rx;
+}
+
+static void *dpaa_get_pktbuf(struct dpaa_bp_info *bp_info)
+{
+	int ret;
+	uint64_t buf = 0;
+	struct bm_buffer bufs;
+
+	ret = bman_acquire(bp_info->bp, &bufs, 1, 0);
+	if (ret <= 0) {
+		DPAA_PMD_WARN("Failed to allocate buffers %d", ret);
+		return (void *)buf;
+	}
+
+	DPAA_RX_LOG(DEBUG, "got buffer 0x%lx from pool %d",
+		    (uint64_t)bufs.addr, bufs.bpid);
+
+	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
+
+	return (void *)buf;
+}
+
+static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
+					     struct dpaa_if *dpaa_intf)
+{
+	struct rte_mbuf *dpaa_mbuf;
+
+	/* allocate pktbuffer on bpid for dpaa port */
+	dpaa_mbuf = dpaa_get_pktbuf(dpaa_intf->bp_info);
+	if (!dpaa_mbuf)
+		return NULL;
+
+	memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + mbuf->data_off, (void *)
+		((uint8_t *)(mbuf->buf_addr) + mbuf->data_off), mbuf->pkt_len);
+
+	/* Copy only the required fields */
+	dpaa_mbuf->data_off = mbuf->data_off;
+	dpaa_mbuf->pkt_len = mbuf->pkt_len;
+	dpaa_mbuf->ol_flags = mbuf->ol_flags;
+	dpaa_mbuf->packet_type = mbuf->packet_type;
+	dpaa_mbuf->tx_offload = mbuf->tx_offload;
+	rte_pktmbuf_free(mbuf);
+	return dpaa_mbuf;
+}
+
+/* Handle mbufs which are not segmented (non SG) */
+static inline void
+tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
+			    struct dpaa_bp_info *bp_info,
+			    struct qm_fd *fd_arr)
+{
+	struct rte_mbuf *mi = NULL;
+
+	if (RTE_MBUF_DIRECT(mbuf)) {
+		if (rte_mbuf_refcnt_read(mbuf) > 1) {
+			/* In case of direct mbuf and mbuf being cloned,
+			 * BMAN should _not_ release buffer.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, 0xff);
+			/* Buffer should be released by EAL */
+			rte_mbuf_refcnt_update(mbuf, -1);
+		} else {
+			/* In case of direct mbuf and no cloning, mbuf can be
+			 * released by BMAN.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+		}
+	} else {
+		/* This is data-containing core mbuf: 'mi' */
+		mi = rte_mbuf_from_indirect(mbuf);
+		if (rte_mbuf_refcnt_read(mi) > 1) {
+			/* In case of indirect mbuf, and mbuf being cloned,
+			 * BMAN should _not_ release it and let EAL release
+			 * it through pktmbuf_free below.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, 0xff);
+		} else {
+			/* In case of indirect mbuf, and no cloning, core mbuf
+			 * should be released by BMAN.
+			 * Increase refcnt of core mbuf so that when
+			 * pktmbuf_free is called and mbuf is released, EAL
+			 * doesn't try to release core mbuf which would have
+			 * been released by BMAN.
+			 */
+			rte_mbuf_refcnt_update(mi, 1);
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+		}
+		rte_pktmbuf_free(mbuf);
+	}
+}
+
+/* Handle all mbufs on dpaa BMAN managed pool */
+static inline uint16_t
+tx_on_dpaa_pool(struct rte_mbuf *mbuf,
+		struct dpaa_bp_info *bp_info,
+		struct qm_fd *fd_arr)
+{
+	DPAA_TX_LOG(DEBUG, "BMAN offloaded buffer, mbuf: %p", mbuf);
+
+	if (mbuf->nb_segs == 1) {
+		/* Case for non-segmented buffers */
+		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr);
+	} else {
+		DPAA_PMD_DEBUG("Number of Segments not supported");
+		return 1;
+	}
+
+	return 0;
+}
+
+/* Handle all mbufs on an external pool (non-dpaa2) */
+static inline uint16_t
+tx_on_external_pool(struct qman_fq *txq, struct rte_mbuf *mbuf,
+		    struct qm_fd *fd_arr)
+{
+	struct dpaa_if *dpaa_intf = txq->dpaa_intf;
+	struct rte_mbuf *dmable_mbuf;
+
+	DPAA_TX_LOG(DEBUG, "Non-BMAN offloaded buffer."
+		    "Allocating an offloaded buffer");
+	dmable_mbuf = dpaa_get_dmable_mbuf(mbuf, dpaa_intf);
+	if (!dmable_mbuf) {
+		DPAA_TX_LOG(DEBUG, "no dpaa buffers.");
+		return 1;
+	}
+
+	DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, dpaa_intf->bp_info->bpid);
+
+	return 0;
+}
+
+uint16_t
+dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	struct rte_mbuf *mbuf, *mi = NULL;
+	struct rte_mempool *mp;
+	struct dpaa_bp_info *bp_info;
+	struct qm_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send, loop, i = 0;
+	uint16_t state;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_PMD_ERR("Failure in affining portal");
+		return 0;
+	}
+
+	DPAA_TX_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+
+	while (nb_bufs) {
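+		/* Enqueue in bursts of at most MAX_TX_RING_SLOTS (8) frames;
+		 * (nb_bufs >> 3) is non-zero while 8 or more frames remain.
+		 */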
+		frames_to_send = (nb_bufs >> 3) ? MAX_TX_RING_SLOTS : nb_bufs;
+		for (loop = 0; loop < frames_to_send; loop++, i++) {
+			mbuf = bufs[i];
+			if (RTE_MBUF_DIRECT(mbuf)) {
+				mp = mbuf->pool;
+			} else {
+				mi = rte_mbuf_from_indirect(mbuf);
+				mp = mi->pool;
+			}
+
+			bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+			if (likely(mp->ops_index == bp_info->dpaa_ops_index)) {
+				state = tx_on_dpaa_pool(mbuf, bp_info,
+							&fd_arr[loop]);
+				if (unlikely(state)) {
+					/* Set frames_to_send & nb_bufs so
+					 * that only the frames prepared so
+					 * far are transmitted.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			} else {
+				state = tx_on_external_pool(q, mbuf,
+							    &fd_arr[loop]);
+				if (unlikely(state)) {
+					/* Set frames_to_send & nb_bufs so
+					 * that only the frames prepared so
+					 * far are transmitted.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			}
+		}
+
+send_pkts:
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qman_enqueue_multi(q, &fd_arr[loop],
+					frames_to_send - loop);
+		}
+		nb_bufs -= frames_to_send;
+	}
+
+	DPAA_TX_LOG(DEBUG, "Transmitted %d buffers on queue: %p", i, q);
+
+	return i;
+}
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+		uint16_t nb_bufs __rte_unused)
+{
+	DPAA_TX_LOG(DEBUG, "Drop all packets");
+
+	/* Drop all incoming packets. There is no need to free packets here:
+	 * the rte_eth framework frees them through the tx_buffer callback
+	 * when this function returns a count less than nb_bufs.
+	 */
+	return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
new file mode 100644
index 0000000..45bfae8
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -0,0 +1,61 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPDK_RXTX_H__
+#define __DPDK_RXTX_H__
+
+/* internal offset from where IC is copied to the packet buffer */
+#define DEFAULT_ICIOF          32
+/* IC transfer size */
+#define DEFAULT_ICSZ	48
+
+/* IC offsets from buffer header address */
+#define DEFAULT_RX_ICEOF	16
+
+#define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
+	/**< Maximum number of frames to be dequeued in a single rx call */
+/* FD structure masks and offset */
+#define DPAA_FD_FORMAT_MASK 0xE0000000
+#define DPAA_FD_OFFSET_MASK 0x1FF00000
+#define DPAA_FD_LENGTH_MASK 0xFFFFF
+#define DPAA_FD_FORMAT_SHIFT 29
+#define DPAA_FD_OFFSET_SHIFT 20
+
+uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+			      uint16_t nb_bufs __rte_unused);
+#endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9c5a171..7440848 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -185,6 +185,7 @@ endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_mempool_dpaa
 endif
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
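
For reference, the FD masks added in dpaa_rxtx.h above decode a frame
descriptor whose format, offset and length fields are packed into one
32-bit opaque word. A minimal, self-contained sketch of that decode
(the macros are copied from the header; the opaque value is a made-up
example, not real hardware output):

#include <stdint.h>
#include <stdio.h>

#define DPAA_FD_FORMAT_MASK 0xE0000000
#define DPAA_FD_OFFSET_MASK 0x1FF00000
#define DPAA_FD_LENGTH_MASK 0xFFFFF
#define DPAA_FD_FORMAT_SHIFT 29
#define DPAA_FD_OFFSET_SHIFT 20

int main(void)
{
	uint32_t opaque = 0x08000600;	/* hypothetical FD word */

	/* Same extraction the Rx path performs on fd->opaque */
	uint32_t format = (opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
	uint32_t offset = (opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
	uint32_t length = opaque & DPAA_FD_LENGTH_MASK;

	printf("format=%u offset=%u length=%u\n", format, offset, length);
	return 0;
}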

* [PATCH v3 26/40] net/dpaa: add support for MTU update
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (24 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 25/40] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
@ 2017-08-23 14:11     ` Shreyansh Jain
  2017-09-21 22:07       ` Thomas Monjalon
  2017-08-23 14:12     ` [PATCH v3 27/40] net/dpaa: add support for jumbo frames Shreyansh Jain
                       ` (14 subsequent siblings)
  40 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:11 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 9e8befc..59ef23d 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,5 +4,6 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index ab19b2e..ad3eaac 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -76,6 +76,26 @@
 static int is_global_init;
 
 static int
+dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (mtu < ETHER_MIN_MTU)
+		return -EINVAL;
+	if (mtu > ETHER_MAX_LEN)
+		return -1;
+
+	dev->data->dev_conf.rxmode.jumbo_frame = 0;
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
+
+	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
+
+	return 0;
+}
+
+static int
 dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
@@ -197,6 +217,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+	.mtu_set		  = dpaa_mtu_set,
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
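
For context, the new mtu_set op above is reached through the standard
ethdev API. A minimal sketch, assuming DPDK 17.x-era signatures and a
port that has already been initialized; port 0 is hypothetical:

#include <stdio.h>
#include <rte_ethdev.h>

static int set_port_mtu(uint16_t mtu)
{
	int ret = rte_eth_dev_set_mtu(0, mtu);	/* lands in dpaa_mtu_set() */

	if (ret < 0)
		printf("MTU update failed: %d\n", ret);
	return ret;
}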

* [PATCH v3 27/40] net/dpaa: add support for jumbo frames
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (25 preceding siblings ...)
  2017-08-23 14:11     ` [PATCH v3 26/40] net/dpaa: add support for MTU update Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 28/40] net/dpaa: add support for link status update Shreyansh Jain
                       ` (13 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 13 +++++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 59ef23d..e62812c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index ad3eaac..d0bab36 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -85,9 +85,10 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	if (mtu < ETHER_MIN_MTU)
 		return -EINVAL;
 	if (mtu > ETHER_MAX_LEN)
-		return -1;
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
 
-	dev->data->dev_conf.rxmode.jumbo_frame = 0;
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
 
 	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
@@ -100,6 +101,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 
+	if (dev->data->dev_conf.rxmode.jumbo_frame == 1) {
+		if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
+		    DPAA_MAX_RX_PKT_LEN)
+			return dpaa_mtu_set(dev,
+				dev->data->dev_conf.rxmode.max_rx_pkt_len);
+		else
+			return -1;
+	}
 	return 0;
 }
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
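
For context, an application opts into this path by filling the
17.x-era rxmode fields before configuring the port;
dpaa_eth_dev_configure() then pushes max_rx_pkt_len into
dpaa_mtu_set() as long as it does not exceed DPAA_MAX_RX_PKT_LEN. A
minimal sketch (port 0, the single queue pair and the 9000-byte length
are all hypothetical):

#include <rte_ethdev.h>

static struct rte_eth_conf jumbo_conf = {
	.rxmode = {
		.jumbo_frame = 1,	/* request jumbo handling */
		.max_rx_pkt_len = 9000,	/* must be <= DPAA_MAX_RX_PKT_LEN */
	},
};

static int configure_jumbo_port(void)
{
	return rte_eth_dev_configure(0, 1, 1, &jumbo_conf);
}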

* [PATCH v3 28/40] net/dpaa: add support for link status update
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (26 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 27/40] net/dpaa: add support for jumbo frames Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 29/40] net/dpaa: add support for device info and speed capability Shreyansh Jain
                       ` (12 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 42 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index e62812c..132f94b 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index d0bab36..75fded2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -142,6 +142,28 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int dpaa_eth_link_update(struct rte_eth_dev *dev,
+				int wait_to_complete __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_link *link = &dev->data->dev_link;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dpaa_intf->fif->mac_type == fman_mac_1g)
+		link->link_speed = 1000;
+	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
+		link->link_speed = 10000;
+	else
+		DPAA_PMD_ERR("invalid link_speed: %s, %d",
+			     dpaa_intf->name, dpaa_intf->fif->mac_type);
+
+	link->link_status = dpaa_intf->valid;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = ETH_LINK_AUTONEG;
+	return 0;
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -216,6 +238,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 	PMD_INIT_FUNC_TRACE();
 }
 
+static int dpaa_link_down(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+	return 0;
+}
+
+static int dpaa_link_up(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_start(dev);
+	return 0;
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -226,7 +264,11 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+
+	.link_update		  = dpaa_eth_link_update,
 	.mtu_set		  = dpaa_mtu_set,
+	.dev_set_link_down	  = dpaa_link_down,
+	.dev_set_link_up	  = dpaa_link_up,
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
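
For context, the link_update op above is polled through the ethdev
link API; since the driver ignores wait_to_complete, the nowait
variant behaves identically. A minimal sketch (port 0 hypothetical;
17.x-era API where this call returns void):

#include <stdio.h>
#include <rte_ethdev.h>

static void show_link(void)
{
	struct rte_eth_link link;

	rte_eth_link_get_nowait(0, &link);
	printf("link %s, %u Mbps, %s duplex\n",
	       link.link_status ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full" : "half");
}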

* [PATCH v3 29/40] net/dpaa: add support for device info and speed capability
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (27 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 28/40] net/dpaa: add support for link status update Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 30/40] net/dpaa: add support for promiscuous toggle Shreyansh Jain
                       ` (11 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 132f94b..19beada 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 75fded2..9751145 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -142,6 +142,25 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
+			      struct rte_eth_dev_info *dev_info)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->max_rx_queues = dpaa_intf->nb_rx_queues;
+	dev_info->max_tx_queues = dpaa_intf->nb_tx_queues;
+	dev_info->min_rx_bufsize = DPAA_MIN_RX_BUF_SIZE;
+	dev_info->max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
+	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
+	dev_info->max_hash_mac_addrs = 0;
+	dev_info->max_vfs = 0;
+	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
+				ETH_LINK_SPEED_10G);
+}
+
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 				int wait_to_complete __rte_unused)
 {
@@ -259,6 +278,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+	.dev_infos_get		  = dpaa_eth_dev_info,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
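
For context, the limits reported by the new dev_infos_get op can be
read back as below. A minimal sketch (port 0 hypothetical;
rte_eth_dev_info_get() returned void in this era):

#include <stdio.h>
#include <rte_ethdev.h>

static void show_dev_info(void)
{
	struct rte_eth_dev_info info;

	rte_eth_dev_info_get(0, &info);
	printf("rxq=%u txq=%u max_pktlen=%u speed_capa=0x%x\n",
	       info.max_rx_queues, info.max_tx_queues,
	       info.max_rx_pktlen, info.speed_capa);
}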

* [PATCH v3 30/40] net/dpaa: add support for promiscuous toggle
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (28 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 29/40] net/dpaa: add support for device info and speed capability Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 31/40] net/dpaa: add support for multicast toggle Shreyansh Jain
                       ` (10 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 19beada..b2dfd81 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -8,5 +8,6 @@ Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
+Promiscuous mode     = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 9751145..803b9df 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -183,6 +183,25 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+
+static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_enable(dpaa_intf->fif);
+}
+
+static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_disable(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -286,6 +305,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
+	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
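
For context, these ops map the generic promiscuous toggle straight
onto fman_if_promiscuous_enable()/_disable(). A minimal sketch (port 0
hypothetical; the ethdev calls returned void in this era):

#include <rte_ethdev.h>

static void toggle_promisc(int on)
{
	if (on)
		rte_eth_promiscuous_enable(0);
	else
		rte_eth_promiscuous_disable(0);
}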

* [PATCH v3 31/40] net/dpaa: add support for multicast toggle
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (29 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 30/40] net/dpaa: add support for promiscuous toggle Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 32/40] net/dpaa: add support for MAC address update Shreyansh Jain
                       ` (9 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index b2dfd81..f21a85f 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -9,5 +9,6 @@ Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
+Allmulticast mode    = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 803b9df..982e762 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -202,6 +202,24 @@ static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
 	fman_if_promiscuous_disable(dpaa_intf->fif);
 }
 
+static void dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_set_mcast_filter_table(dpaa_intf->fif);
+}
+
+static void dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_reset_mcast_filter_table(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -307,6 +325,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.link_update		  = dpaa_eth_link_update,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
+	.allmulticast_enable	  = dpaa_eth_multicast_enable,
+	.allmulticast_disable	  = dpaa_eth_multicast_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
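
For context, the allmulticast toggle is wired to the FMan multicast
filter table set/reset helpers above. A minimal sketch (port 0
hypothetical):

#include <rte_ethdev.h>

static void toggle_allmulti(int on)
{
	if (on)
		rte_eth_allmulticast_enable(0);
	else
		rte_eth_allmulticast_disable(0);
}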

* [PATCH v3 32/40] net/dpaa: add support for MAC address update
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (30 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 31/40] net/dpaa: add support for multicast toggle Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 33/40] net/dpaa: add support for basic stats Shreyansh Jain
                       ` (8 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 55 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index f21a85f..cdf5e46 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -10,5 +10,6 @@ Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
+Unicast MAC filter   = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 982e762..d7b0e16 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -310,6 +310,57 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *addr,
+			     uint32_t index,
+			     __rte_unused uint32_t pool)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fm_mac_add_exact_match_mac_addr(dpaa_intf->fif,
+					      addr->addr_bytes, index);
+
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Adding the MAC ADDR failed:"
+			" err = %d", ret);
+	return ret;
+}
+
+static void
+dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
+			  uint32_t index)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fm_mac_rem_exact_match_mac_addr(dpaa_intf->fif, index);
+
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Removing the MAC ADDR failed:"
+			" err = %d", ret);
+}
+
+static void
+dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
+		       struct ether_addr *addr)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fm_mac_add_exact_match_mac_addr(dpaa_intf->fif,
+					      addr->addr_bytes, 0);
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Setting the MAC ADDR failed %d", ret);
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -330,6 +381,10 @@ static struct eth_dev_ops dpaa_devops = {
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
+	.mac_addr_add		  = dpaa_dev_add_mac_addr,
+	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
+	.mac_addr_set		  = dpaa_dev_set_mac_addr,
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
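
For context, the three MAC ops are driven by the ethdev address-table
API: the ethdev layer picks a free index and hands it down as 'index',
which the driver forwards to the FMan exact-match filter. A minimal
sketch (port 0 and the locally administered address are hypothetical;
17.x-era struct ether_addr):

#include <rte_ethdev.h>
#include <rte_ether.h>

static void program_macs(void)
{
	struct ether_addr addr = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
	};

	/* Add to a free filter slot; the pool argument is unused here */
	rte_eth_dev_mac_addr_add(0, &addr, 0);

	/* Replace the default address (exact-match slot 0) */
	rte_eth_dev_default_mac_addr_set(0, &addr);
}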

* [PATCH v3 33/40] net/dpaa: add support for basic stats
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (31 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 32/40] net/dpaa: add support for MAC address update Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 34/40] net/dpaa: add support for flow control Shreyansh Jain
                       ` (7 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index cdf5e46..c09efd8 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,5 +11,6 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index d7b0e16..062f23e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -183,6 +183,24 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void dpaa_eth_stats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_stats *stats)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_get(dpaa_intf->fif, stats);
+}
+
+static void dpaa_eth_stats_reset(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_reset(dpaa_intf->fif);
+}
 
 static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
@@ -374,6 +392,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.stats_get		  = dpaa_eth_stats_get,
+	.stats_reset		  = dpaa_eth_stats_reset,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.allmulticast_enable	  = dpaa_eth_multicast_enable,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
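
For context, the new stats ops surface the FMan interface counters
through the usual ethdev calls. A minimal sketch (port 0 hypothetical;
17.x-era API where stats_reset returns void):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void dump_and_clear_stats(void)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(0, &stats) == 0)
		printf("ipackets=%" PRIu64 " opackets=%" PRIu64 "\n",
		       stats.ipackets, stats.opackets);
	rte_eth_stats_reset(0);
}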

* [PATCH v3 34/40] net/dpaa: add support for flow control
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (32 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 33/40] net/dpaa: add support for basic stats Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 35/40] net/dpaa: add support for hashed RSS Shreyansh Jain
                       ` (6 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 112 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index c09efd8..1ba6b11 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Flow control         = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 062f23e..fddd9ec 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -329,6 +329,85 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 }
 
 static int
+dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			DPAA_PMD_ERR("unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	net_fc = dpaa_intf->fc_conf;
+
+	if (fc_conf->high_water < fc_conf->low_water) {
+		DPAA_PMD_ERR("Incorrect Flow Control Configuration");
+		return -EINVAL;
+	}
+
+	if (fc_conf->mode == RTE_FC_NONE) {
+		return 0;
+	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_FC_FULL) {
+		fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water,
+					 fc_conf->low_water,
+				dpaa_intf->bp_info->bpid);
+		if (fc_conf->pause_time)
+			fman_if_set_fc_quanta(dpaa_intf->fif,
+					      fc_conf->pause_time);
+	}
+
+	/* Save the information in dpaa device */
+	net_fc->pause_time = fc_conf->pause_time;
+	net_fc->high_water = fc_conf->high_water;
+	net_fc->low_water = fc_conf->low_water;
+	net_fc->send_xon = fc_conf->send_xon;
+	net_fc->mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
+	net_fc->mode = fc_conf->mode;
+	net_fc->autoneg = fc_conf->autoneg;
+
+	return 0;
+}
+
+static int
+dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc = dpaa_intf->fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (net_fc) {
+		fc_conf->pause_time = net_fc->pause_time;
+		fc_conf->high_water = net_fc->high_water;
+		fc_conf->low_water = net_fc->low_water;
+		fc_conf->send_xon = net_fc->send_xon;
+		fc_conf->mac_ctrl_frame_fwd = net_fc->mac_ctrl_frame_fwd;
+		fc_conf->mode = net_fc->mode;
+		fc_conf->autoneg = net_fc->autoneg;
+		return 0;
+	}
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
+static int
 dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *addr,
 			     uint32_t index,
@@ -391,6 +470,9 @@ static struct eth_dev_ops dpaa_devops = {
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
+	.flow_ctrl_get		  = dpaa_flow_ctrl_get,
+	.flow_ctrl_set		  = dpaa_flow_ctrl_set,
+
 	.link_update		  = dpaa_eth_link_update,
 	.stats_get		  = dpaa_eth_stats_get,
 	.stats_reset		  = dpaa_eth_stats_reset,
@@ -407,6 +489,33 @@ static struct eth_dev_ops dpaa_devops = {
 
 };
 
+static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
+{
+	struct rte_eth_fc_conf *fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			DPAA_PMD_ERR("unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	fc_conf = dpaa_intf->fc_conf;
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
 /* Initialise an Rx FQ */
 static int dpaa_rx_queue_init(struct qman_fq *fq,
 			      uint32_t fqid)
@@ -560,6 +669,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 
 	DPAA_PMD_DEBUG("All frame queues created");
 
+	/* Get the initial configuration for flow control */
+	dpaa_fc_set_default(dpaa_intf);
+
 	/* reset bpool list, initialize bpool dynamically */
 	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
 		list_del(&bp->node);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
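
For context, the set path above rejects high_water < low_water and
only programs the FMan thresholds for the TX_PAUSE/FULL modes. A
minimal sketch of a caller (port 0 and all threshold values are
hypothetical):

#include <rte_ethdev.h>

static int enable_tx_pause(void)
{
	struct rte_eth_fc_conf fc = {
		.mode = RTE_FC_TX_PAUSE,
		.high_water = 1024,	/* must be >= low_water */
		.low_water = 512,
		.pause_time = 0x680,	/* pause quanta */
	};

	/* Forwards to fman_if_set_fc_threshold()/_quanta() */
	return rte_eth_dev_flow_ctrl_set(0, &fc);
}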

* [PATCH v3 35/40] net/dpaa: add support for hashed RSS
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (33 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 34/40] net/dpaa: add support for flow control Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 36/40] net/dpaa: add support for packet type parsing Shreyansh Jain
                       ` (5 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c |  1 +
 drivers/net/dpaa/dpaa_ethdev.h | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index fddd9ec..55adc04 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -157,6 +157,7 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
 				ETH_LINK_SPEED_10G);
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index c3eb804..225898e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -88,6 +88,16 @@
 #define DPAA_DEBUG_FQ_RX_ERROR   0
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
+#define DPAA_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP)
+
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
 		PKT_TX_TCP_CKSUM |               \
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
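
For context, an application requests a hash profile within
DPAA_RSS_OFFLOAD_ALL at configure time. A minimal sketch (port 0, the
queue counts and the chosen hash fields are hypothetical; 17.x-era
rte_eth_conf layout):

#include <rte_ethdev.h>

static struct rte_eth_conf rss_port_conf = {
	.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
	.rx_adv_conf = {
		.rss_conf = {
			.rss_hf = ETH_RSS_NONFRAG_IPV4_TCP |
				  ETH_RSS_NONFRAG_IPV4_UDP,
		},
	},
};

static int configure_rss_port(void)
{
	return rte_eth_dev_configure(0, 4, 1, &rss_port_conf);
}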

* [PATCH v3 36/40] net/dpaa: add support for packet type parsing
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (34 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 35/40] net/dpaa: add support for hashed RSS Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 37/40] net/dpaa: add support for checksum offload Shreyansh Jain
                       ` (4 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Add support for parsing the packet type and L2/L3 checksum offload
capability information.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  27 +++++
 drivers/net/dpaa/dpaa_rxtx.c      | 116 +++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 206 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 351 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 1ba6b11..2ef1b56 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,7 +11,9 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+RSS hash             = Y
 Flow control         = Y
+Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 55adc04..bcb69ad 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -112,6 +112,28 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	return 0;
 }
 
+static const uint32_t *
+dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		/* TODO: add more types */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_SCTP
+	};
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->rx_pkt_burst == dpaa_eth_queue_rx)
+		return ptypes;
+	return NULL;
+}
+
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -160,6 +182,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
 				ETH_LINK_SPEED_10G);
+	dev_info->rx_offload_capa =
+		(DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM   |
+		DEV_RX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
@@ -465,6 +491,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
 	.dev_infos_get		  = dpaa_eth_dev_info,
+	.dev_supported_ptypes_get = dpaa_supported_ptypes_get,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 80adf9c..90be40d 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,121 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
+				     uint64_t prs __rte_unused)
+{
+	DPAA_RX_LOG(DEBUG, "Slow parsing");
+	/* TBD: to be implemented */
+}
+
+static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
+					uint64_t fd_virt_addr)
+{
+	struct annotations_t *annot = GET_ANNOTATIONS(fd_virt_addr);
+	uint64_t prs = *((uint64_t *)(&annot->parse)) & DPAA_PARSE_MASK;
+
+	DPAA_RX_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
+
+	switch (prs) {
+	case DPAA_PKT_TYPE_NONE:
+		m->packet_type = 0;
+		break;
+	case DPAA_PKT_TYPE_ETHER:
+		m->packet_type = RTE_PTYPE_L2_ETHER;
+		break;
+	case DPAA_PKT_TYPE_IPV4:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4;
+		break;
+	case DPAA_PKT_TYPE_IPV6:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6;
+		break;
+	case DPAA_PKT_TYPE_IPV4_FRAG:
+	case DPAA_PKT_TYPE_IPV4_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV6_FRAG:
+	case DPAA_PKT_TYPE_IPV6_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV4_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP;
+		break;
+	/* More switch cases can be added */
+	default:
+		dpaa_slow_parsing(m, prs);
+	}
+
+	m->tx_offload = annot->parse.ip_off[0];
+	m->tx_offload |= (annot->parse.l4_off - annot->parse.ip_off[0])
+					<< DPAA_PKT_L3_LEN_SHIFT;
+
+	/* Set the hash values */
+	m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
+	m->ol_flags = PKT_RX_RSS_HASH;
+	/* All packets with Bad checksum are dropped by interface (and
+	 * corresponding notification issued to RX error queues).
+	 */
+	m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	/* Check if Vlan is present */
+	if (prs & DPAA_PARSE_VLAN_MASK)
+		m->ol_flags |= PKT_RX_VLAN_PKT;
+	/* Packet received without stripping the vlan */
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -117,6 +232,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 	mbuf->ol_flags = 0;
 	mbuf->next = NULL;
 	rte_mbuf_refcnt_set(mbuf, 1);
+	dpaa_eth_packet_info(mbuf, (uint64_t)mbuf->buf_addr);
 
 	return mbuf;
 }
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 45bfae8..68d2c41 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -44,6 +44,7 @@
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/**< Maximum number of frames to be dequeued in a single Rx call */
+
 /* FD structure masks and offset */
 #define DPAA_FD_FORMAT_MASK 0xE0000000
 #define DPAA_FD_OFFSET_MASK 0x1FF00000
@@ -51,6 +52,211 @@
 #define DPAA_FD_FORMAT_SHIFT 29
 #define DPAA_FD_OFFSET_SHIFT 20
 
+/* Parsing mask (Little Endian) - 0x00E044ED00800000
+ *	Classification Plan ID 0x00
+ *	L4R 0xE0 -
+ *		0x20 - TCP
+ *		0x40 - UDP
+ *		0x80 - SCTP
+ *	L3R 0xEDC4 (in Big Endian) -
+ *		0x8000 - IPv4
+ *		0x4000 - IPv6
+ *		0x8140 - IPv4 Ext + Frag
+ *		0x8040 - IPv4 Frag
+ *		0x8100 - IPv4 Ext
+ *		0x4140 - IPv6 Ext + Frag
+ *		0x4040 - IPv6 Frag
+ *		0x4100 - IPv6 Ext
+ *	L2R 0x8000 (in Big Endian) -
+ *		0x8000 - Ethernet type
+ *	ShimR & Logical Port ID 0x0000
+ */
+#define DPAA_PARSE_MASK			0x00E044ED00800000
+#define DPAA_PARSE_VLAN_MASK		0x0000000000700000
+
+/* Parsed values (Little Endian) */
+#define DPAA_PKT_TYPE_NONE		0x0000000000000000
+#define DPAA_PKT_TYPE_ETHER		0x0000000000800000
+#define DPAA_PKT_TYPE_IPV4 \
+			(0x0000008000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV6 \
+			(0x0000004000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_GRE \
+			(0x0000002000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV4_FRAG	\
+			(0x0000400000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_FRAG	\
+			(0x0000400000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_EXT \
+			(0x0000000100000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_EXT \
+			(0x0000000100000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_SCTP	\
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_SCTP	\
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_FRAG_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_SCTP \
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_SCTP \
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_EXT_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_IPV4_EXT_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_TUNNEL_4_4 \
+			(0x0000000800000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6 \
+			(0x0000000400000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6 \
+			(0x0000000400000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_4 \
+			(0x0000000800000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_L3_LEN_SHIFT	7
+
+/**
+ * FMan parse result array
+ */
+struct dpaa_eth_parse_results_t {
+	 uint8_t     lpid;		 /**< Logical port id */
+	 uint8_t     shimr;		 /**< Shim header result  */
+	 union {
+		uint16_t              l2r;	/**< Layer 2 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      ethernet:1;
+			uint16_t      vlan:1;
+			uint16_t      llc_snap:1;
+			uint16_t      mpls:1;
+			uint16_t      ppoe_ppp:1;
+			uint16_t      unused_1:3;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      eth_frame_type:2;
+			uint16_t      l2r_err:5;
+			/*00-unicast, 01-multicast, 11-broadcast*/
+#else
+			uint16_t      l2r_err:5;
+			uint16_t      eth_frame_type:2;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      unused_1:3;
+			uint16_t      ppoe_ppp:1;
+			uint16_t      mpls:1;
+			uint16_t      llc_snap:1;
+			uint16_t      vlan:1;
+			uint16_t      ethernet:1;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint16_t              l3r;	/**< Layer 3 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      first_ipv4:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      gre:1;
+			uint16_t      min_enc:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      last_ipv6:1;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      last_ip_err_code:3;
+#else
+			uint16_t      last_ip_err_code:3;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      last_ipv6:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      min_enc:1;
+			uint16_t      gre:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      first_ipv4:1;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint8_t               l4r;	/**< Layer 4 result */
+		struct{
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint8_t	       l4_type:3;
+			uint8_t	       l4_info_err:1;
+			uint8_t	       l4_result:4;
+					/* if type IPSec: 1 ESP, 2 AH */
+#else
+			uint8_t        l4_result:4;
+					/* if type IPSec: 1 ESP, 2 AH */
+			uint8_t        l4_info_err:1;
+			uint8_t        l4_type:3;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 uint8_t     cplan;		 /**< Classification plan id */
+	 uint16_t    nxthdr;		 /**< Next Header  */
+	 uint16_t    cksum;		 /**< Checksum */
+	 uint32_t    lcv;		 /**< LCV */
+	 uint8_t     shim_off[3];	 /**< Shim offset */
+	 uint8_t     eth_off;		 /**< ETH offset */
+	 uint8_t     llc_snap_off;	 /**< LLC_SNAP offset */
+	 uint8_t     vlan_off[2];	 /**< VLAN offset */
+	 uint8_t     etype_off;		 /**< ETYPE offset */
+	 uint8_t     pppoe_off;		 /**< PPP offset */
+	 uint8_t     mpls_off[2];	 /**< MPLS offset */
+	 uint8_t     ip_off[2];		 /**< IP offset */
+	 uint8_t     gre_off;		 /**< GRE offset */
+	 uint8_t     l4_off;		 /**< Layer 4 offset */
+	 uint8_t     nxthdr_off;	 /**< Parser end point */
+} __attribute__ ((__packed__));
+
+/* This structure holds the data prepended to the frame, used by FMan */
+struct annotations_t {
+	uint8_t reserved[DEFAULT_RX_ICEOF];
+	struct dpaa_eth_parse_results_t parse;	/**< Parse results */
+	uint64_t reserved1;
+	uint64_t hash;			/**< Hash Result */
+};
+
+#define GET_ANNOTATIONS(_buf) \
+	(struct annotations_t *)(_buf)
+
+#define GET_RX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
+
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
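
For context, the list returned by dpaa_supported_ptypes_get() is
queried as below. A minimal sketch (port 0 and the array size are
hypothetical):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void show_ptypes(void)
{
	uint32_t ptypes[16];
	int i, n;

	n = rte_eth_dev_get_supported_ptypes(0, RTE_PTYPE_ALL_MASK,
					     ptypes, 16);
	for (i = 0; i < n && i < 16; i++)
		printf("supported ptype 0x%08x\n", ptypes[i]);
}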

* [PATCH v3 37/40] net/dpaa: add support for checksum offload
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (35 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 36/40] net/dpaa: add support for packet type parsing Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 38/40] net/dpaa: add support for Scattered Rx Shreyansh Jain
                       ` (3 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  4 ++
 drivers/net/dpaa/dpaa_rxtx.c      | 89 +++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 19 +++++++++
 4 files changed, 114 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 2ef1b56..23626c0 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -13,6 +13,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 RSS hash             = Y
 Flow control         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bcb69ad..96924b6 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -186,6 +186,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 		(DEV_RX_OFFLOAD_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_UDP_CKSUM   |
 		DEV_RX_OFFLOAD_TCP_CKSUM);
+	dev_info->tx_offload_capa =
+		(DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 90be40d..0f43bb4 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -200,6 +200,82 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
 	/* Packet received without stripping the vlan */
 }
 
+static inline void dpaa_checksum(struct rte_mbuf *mbuf)
+{
+	struct ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf, struct ether_hdr *);
+	char *l3_hdr = (char *)eth_hdr + mbuf->l2_len;
+	struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+	struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	DPAA_TX_LOG(DEBUG, "Calculating checksum for mbuf: %p", mbuf);
+
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	    ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	    RTE_PTYPE_L3_IPV4_EXT)) {
+		ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+		ipv4_hdr->hdr_checksum = 0;
+		ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
+	} else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6_EXT))
+		ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP) {
+		struct tcp_hdr *tcp_hdr = (struct tcp_hdr *)(l3_hdr +
+					  mbuf->l3_len);
+		tcp_hdr->cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			tcp_hdr->cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+							       tcp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			tcp_hdr->cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+							       tcp_hdr);
+	} else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) ==
+		   RTE_PTYPE_L4_UDP) {
+		struct udp_hdr *udp_hdr = (struct udp_hdr *)(l3_hdr +
+							     mbuf->l3_len);
+		udp_hdr->dgram_cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			udp_hdr->dgram_cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+								     udp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			udp_hdr->dgram_cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+								     udp_hdr);
+	}
+}
+
+static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
+					 struct qm_fd *fd, char *prs_buf)
+{
+	struct dpaa_eth_parse_results_t *prs;
+
+	DPAA_TX_LOG(DEBUG, " Offloading checksum for mbuf: %p", mbuf);
+
+	prs = GET_TX_PRS(prs_buf);
+	prs->l3r = 0;
+	prs->l4r = 0;
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	   RTE_PTYPE_L3_IPV4_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV4;
+	else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		 ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		RTE_PTYPE_L3_IPV6_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV6;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_TCP;
+	else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_UDP;
+
+	prs->ip_off[0] = mbuf->l2_len;
+	prs->l4_off = mbuf->l3_len + mbuf->l2_len;
+	/* Enable L3 (and L4, if TCP or UDP) HW checksum*/
+	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -358,6 +434,19 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 		}
 		rte_pktmbuf_free(mbuf);
 	}
+
+	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+		if (mbuf->data_off < (DEFAULT_TX_ICEOF +
+		    sizeof(struct dpaa_eth_parse_results_t))) {
+			DPAA_TX_LOG(DEBUG, "Checksum offload err: "
+				"not enough headroom for checksum "
+				"offload; calculating checksum "
+				"in software.");
+			dpaa_checksum(mbuf);
+		} else {
+			dpaa_checksum_offload(mbuf, fd_arr, mbuf->buf_addr);
+		}
+	}
 }
 
 /* Handle all mbufs on dpaa BMAN managed pool */
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 68d2c41..624ddda 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -41,6 +41,22 @@
 
 /* IC offsets from buffer header address */
 #define DEFAULT_RX_ICEOF	16
+#define DEFAULT_TX_ICEOF	16
+
+/*
+ * Values for the L3R field of the FM Parse Results
+ */
+/* L3 Type field: First IP Present IPv4 */
+#define DPAA_L3_PARSE_RESULT_IPV4 0x80
+/* L3 Type field: First IP Present IPv6 */
+#define DPAA_L3_PARSE_RESULT_IPV6	0x40
+/* Values for the L4R field of the FM Parse Results
+ * See $8.8.4.7.20 - L4 HXS - L4 Results from DPAA-Rev2 Reference Manual.
+ */
+/* L4 Type field: UDP */
+#define DPAA_L4_PARSE_RESULT_UDP	0x40
+/* L4 Type field: TCP */
+#define DPAA_L4_PARSE_RESULT_TCP	0x20
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/** <Maximum number of frames to be dequeued in a single rx call*/
@@ -257,6 +273,9 @@ struct annotations_t {
 #define GET_RX_PRS(_buf) \
 	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
 
+#define GET_TX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_TX_ICEOF)
+
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
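
For context, the transmit path only enters the offload branch when the
application marks the mbuf with flags in DPAA_TX_CKSUM_OFFLOAD_MASK
and fills the header lengths; if the headroom cannot hold the parse
results, the driver falls back to dpaa_checksum() in software. A
minimal sender-side sketch (17.x-era flag names; the offsets assume an
untagged IPv4/TCP frame):

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

static void request_tx_cksum(struct rte_mbuf *m)
{
	m->l2_len = sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv4_hdr);
	m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
}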

* [PATCH v3 38/40] net/dpaa: add support for Scattered Rx
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (36 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 37/40] net/dpaa: add support for checksum offload Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 39/40] net/dpaa: add packet dump for debugging Shreyansh Jain
                       ` (2 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_rxtx.c      | 158 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      |   9 +++
 3 files changed, 168 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 23626c0..0e7956c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -8,6 +8,7 @@ Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
+Scattered Rx         = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 0f43bb4..064f0da 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -276,18 +276,82 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
 	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
 }
 
+struct rte_mbuf *
+dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
+{
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+	struct qm_sg_entry *sgt, *sg_temp;
+	void *vaddr, *sg_vaddr;
+	int i = 0;
+	uint8_t fd_offset = fd->offset;
+
+	DPAA_RX_LOG(DEBUG, "Received an SG frame");
+
+	vaddr = rte_dpaa_mem_ptov(qm_fd_addr(fd));
+	if (!vaddr) {
+		DPAA_PMD_ERR("unable to convert physical address");
+		return NULL;
+	}
+	sgt = vaddr + fd_offset;
+	sg_temp = &sgt[i++];
+	hw_sg_to_cpu(sg_temp);
+	temp = (struct rte_mbuf *)((char *)vaddr - bp_info->meta_data_size);
+	sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+
+	first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						bp_info->meta_data_size);
+	first_seg->data_off = sg_temp->offset;
+	first_seg->data_len = sg_temp->length;
+	first_seg->pkt_len = sg_temp->length;
+	rte_mbuf_refcnt_set(first_seg, 1);
+
+	first_seg->port = ifid;
+	first_seg->nb_segs = 1;
+	first_seg->ol_flags = 0;
+	prev_seg = first_seg;
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		hw_sg_to_cpu(sg_temp);
+		sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+		cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						      bp_info->meta_data_size);
+		cur_seg->data_off = sg_temp->offset;
+		cur_seg->data_len = sg_temp->length;
+		first_seg->pkt_len += sg_temp->length;
+		first_seg->nb_segs += 1;
+		rte_mbuf_refcnt_set(cur_seg, 1);
+		prev_seg->next = cur_seg;
+		if (sg_temp->final) {
+			cur_seg->next = NULL;
+			break;
+		}
+		prev_seg = cur_seg;
+	}
+
+	dpaa_eth_packet_info(first_seg, (uint64_t)vaddr);
+	rte_pktmbuf_free_seg(temp);
+
+	return first_seg;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
 	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
 	struct rte_mbuf *mbuf;
 	void *ptr;
+	uint8_t format =
+		(fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
 	uint16_t offset =
 		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
 	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
 
 	DPAA_RX_LOG(DEBUG, " FD--->MBUF");
 
+	if (unlikely(format == qm_fd_sg))
+		return dpaa_eth_sg_to_mbuf(fd, ifid);
+
 	/* Ignoring case when format != qm_fd_contig */
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible incase
@@ -390,6 +454,94 @@ static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
 	return dpaa_mbuf;
 }
 
+int
+dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+		struct qm_fd *fd,
+		uint32_t bpid)
+{
+	struct rte_mbuf *cur_seg = mbuf, *prev_seg = NULL;
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(bpid);
+	struct rte_mbuf *temp, *mi;
+	struct qm_sg_entry *sg_temp, *sgt;
+	int i = 0;
+
+	DPAA_TX_LOG(DEBUG, "Creating SG FD to transmit");
+
+	temp = rte_pktmbuf_alloc(bp_info->mp);
+	if (!temp) {
+		DPAA_PMD_ERR("Failure in allocation of mbuf");
+		return -1;
+	}
+	if (temp->buf_len < ((mbuf->nb_segs * sizeof(struct qm_sg_entry))
+				+ temp->data_off)) {
+		DPAA_PMD_ERR("Insufficient space in mbuf for SG entries");
+		rte_pktmbuf_free(temp);
+		return -1;
+	}
+
+	fd->cmd = 0;
+	fd->opaque_addr = 0;
+
+	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+		if (temp->data_off < DEFAULT_TX_ICEOF
+			+ sizeof(struct dpaa_eth_parse_results_t))
+			temp->data_off = DEFAULT_TX_ICEOF
+				+ sizeof(struct dpaa_eth_parse_results_t);
+		dcbz_64(temp->buf_addr);
+		dpaa_checksum_offload(mbuf, fd, temp->buf_addr);
+	}
+
+	sgt = temp->buf_addr + temp->data_off;
+	fd->format = QM_FD_SG;
+	fd->addr = temp->buf_physaddr;
+	fd->offset = temp->data_off;
+	fd->bpid = bpid;
+	fd->length20 = mbuf->pkt_len;
+
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		sg_temp->opaque = 0;
+		sg_temp->val = 0;
+		sg_temp->addr = cur_seg->buf_physaddr;
+		sg_temp->offset = cur_seg->data_off;
+		sg_temp->length = cur_seg->data_len;
+		if (RTE_MBUF_DIRECT(cur_seg)) {
+			if (rte_mbuf_refcnt_read(cur_seg) > 1) {
+				/* If refcnt > 1, an invalid bpid is set to
+				 * ensure the buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+				rte_mbuf_refcnt_update(cur_seg, -1);
+			} else
+				sg_temp->bpid =
+					DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
+			cur_seg = cur_seg->next;
+		} else {
+			/* Get owner MBUF from indirect buffer */
+			mi = rte_mbuf_from_indirect(cur_seg);
+			if (rte_mbuf_refcnt_read(mi) > 1) {
+				/* If refcnt > 1, an invalid bpid is set to
+				 * ensure the owner buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+			} else {
+				sg_temp->bpid = DPAA_MEMPOOL_TO_BPID(mi->pool);
+				rte_mbuf_refcnt_update(mi, 1);
+			}
+			prev_seg = cur_seg;
+			cur_seg = cur_seg->next;
+			prev_seg->next = NULL;
+			rte_pktmbuf_free(prev_seg);
+		}
+		if (cur_seg == NULL) {
+			sg_temp->final = 1;
+			cpu_to_hw_sg(sg_temp);
+			break;
+		}
+		cpu_to_hw_sg(sg_temp);
+	}
+	return 0;
+}
+
 /* Handle mbufs which are not segmented (non SG) */
 static inline void
 tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
@@ -460,6 +612,12 @@ tx_on_dpaa_pool(struct rte_mbuf *mbuf,
 	if (mbuf->nb_segs == 1) {
 		/* Case for non-segmented buffers */
 		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr);
+	} else if (mbuf->nb_segs > 1 &&
+		   mbuf->nb_segs <= DPAA_SGT_MAX_ENTRIES) {
+		if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr, bp_info->bpid)) {
+			DPAA_PMD_DEBUG("Unable to create Scatter Gather FD");
+			return 1;
+		}
 	} else {
 		DPAA_PMD_DEBUG("Number of Segments not supported");
 		return 1;
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 624ddda..351fc00 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -58,6 +58,8 @@
 /* L4 Type field: TCP */
 #define DPAA_L4_PARSE_RESULT_TCP	0x20
 
+#define DPAA_SGT_MAX_ENTRIES 16 /* maximum number of entries in SG Table */
+
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/** <Maximum number of frames to be dequeued in a single rx call*/
 
@@ -283,4 +285,11 @@ uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
 			      struct rte_mbuf **bufs __rte_unused,
 			      uint16_t nb_bufs __rte_unused);
+
+struct rte_mbuf *dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid);
+
+int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+			   struct qm_fd *fd,
+			   uint32_t bpid);
+
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
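
The receive path above rebuilds a multi-segment packet by walking the hardware
scatter-gather table and linking one mbuf per entry, accumulating pkt_len and
nb_segs on the head segment. The following self-contained sketch shows the
same chaining pattern outside DPDK; the struct and function names here are
illustrative only, not part of the series:

    #include <stdint.h>
    #include <stdio.h>

    struct sg_entry {
    	uint16_t length;
    	int final;	/* last entry of the table */
    };

    struct seg {
    	uint16_t data_len;
    	uint32_t pkt_len;
    	uint16_t nb_segs;
    	struct seg *next;
    };

    static struct seg *
    chain_from_sg(const struct sg_entry *sgt, struct seg *pool, int max)
    {
    	struct seg *first = &pool[0], *prev = first;
    	int i;

    	/* Head segment carries the totals, as in dpaa_eth_sg_to_mbuf() */
    	first->data_len = sgt[0].length;
    	first->pkt_len = sgt[0].length;
    	first->nb_segs = 1;
    	first->next = NULL;

    	for (i = 1; i < max && !sgt[i - 1].final; i++) {
    		struct seg *cur = &pool[i];

    		cur->data_len = sgt[i].length;
    		cur->next = NULL;
    		first->pkt_len += sgt[i].length;
    		first->nb_segs++;
    		prev->next = cur;
    		prev = cur;
    	}
    	return first;
    }

    int main(void)
    {
    	struct sg_entry sgt[] = { {64, 0}, {128, 0}, {32, 1} };
    	struct seg pool[3];
    	struct seg *pkt = chain_from_sg(sgt, pool, 3);

    	/* prints: pkt_len=224 nb_segs=3 */
    	printf("pkt_len=%u nb_segs=%u\n", pkt->pkt_len, pkt->nb_segs);
    	return 0;
    }

The transmit direction in dpaa_eth_mbuf_to_sg_fd() is the inverse walk: one
SG entry is filled per mbuf segment, with the "final" bit set on the last one.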

* [PATCH v3 39/40] net/dpaa: add packet dump for debugging
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (37 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 38/40] net/dpaa: add support for Scattered Rx Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-08-23 14:12     ` [PATCH v3 40/40] net/dpaa: support for firmware version get API Shreyansh Jain
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc |  2 ++
 drivers/net/dpaa/dpaa_ethdev.c           | 42 ++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.c             | 26 ++++++++++++++++++++
 3 files changed, 70 insertions(+)

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index a349cec..c0f5e4a 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -51,6 +51,8 @@ CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY=n
+CONFIG_RTE_LIBRTE_DPAA_CHECKING=n
 
 # NXP DPAA Mempool
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 96924b6..4c543ee 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -625,6 +625,39 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
 	return ret;
 }
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
+static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		DPAA_PMD_LOG(ERR, "reserve debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return -EINVAL;
+	}
+	/* "map" this Rx FQ to one of the interfaces Tx FQID */
+	DPAA_PMD_LOG(DEBUG, "creating debug fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		DPAA_PMD_LOG(ERR, "create debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+	memset(&opts, 0, sizeof(opts));
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+	opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		DPAA_PMD_LOG(ERR, "init debug fqid %d failed with ret: %d",
+			    fqid, ret);
+	return ret;
+}
+#endif
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -699,6 +732,15 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	}
 	dpaa_intf->nb_tx_queues = num_cores;
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+#endif
+
 	DPAA_PMD_DEBUG("All frame queues created");
 
 	/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 064f0da..8e106c0 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,31 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+#if (defined RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY)
+void dpaa_display_frame(const struct qm_fd *fd)
+{
+	int ii;
+	char *ptr;
+
+	printf("%s::bpid %x addr %08x%08x, format %d off %d, len %d stat %x\n",
+	       __func__, fd->bpid, fd->addr_hi, fd->addr_lo, fd->format,
+		fd->offset, fd->length20, fd->status);
+
+	ptr = (char *)rte_dpaa_mem_ptov(fd->addr);
+	ptr += fd->offset;
+	printf("%02x ", *ptr);
+	for (ii = 1; ii < fd->length20; ii++) {
+		printf("%02x ", *ptr);
+		if ((ii % 16) == 0)
+			printf("\n");
+		ptr++;
+	}
+	printf("\n");
+}
+#else
+#define dpaa_display_frame(a)
+#endif
+
 static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
 				     uint64_t prs __rte_unused)
 {
@@ -353,6 +378,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 		return dpaa_eth_sg_to_mbuf(fd, ifid);
 
 	/* Ignoring case when format != qm_fd_contig */
+	dpaa_display_frame(fd);
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible in case
 	 * of a corrupted packet
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
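
dpaa_display_frame() prints the frame descriptor fields and then hex-dumps
the payload, sixteen bytes per line. A minimal standalone dump with the same
intended layout, each byte printed exactly once, looks like this; it assumes
nothing beyond the C standard library:

    #include <stdint.h>
    #include <stdio.h>

    static void
    hexdump(const uint8_t *p, uint32_t len)
    {
    	uint32_t i;

    	for (i = 0; i < len; i++) {
    		printf("%02x ", p[i]);
    		if (((i + 1) % 16) == 0)	/* break after every 16 bytes */
    			printf("\n");
    	}
    	if (len % 16)	/* terminate a partial final line */
    		printf("\n");
    }

    int main(void)
    {
    	uint8_t frame[20];
    	uint32_t i;

    	for (i = 0; i < sizeof(frame); i++)
    		frame[i] = (uint8_t)i;
    	hexdump(frame, sizeof(frame));
    	return 0;
    }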

* [PATCH v3 40/40] net/dpaa: support for firmware version get API
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (38 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 39/40] net/dpaa: add packet dump for debugging Shreyansh Jain
@ 2017-08-23 14:12     ` Shreyansh Jain
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-08-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 0e7956c..09b9bd9 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -18,5 +18,6 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+FW version           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4c543ee..8028651 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -164,6 +164,41 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int
+dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
+		     char *fw_version,
+		     size_t fw_size)
+{
+	int ret;
+	FILE *svr_file = NULL;
+	unsigned int svr_ver = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	svr_file = fopen("/sys/devices/soc0/soc_id", "r");
+	if (!svr_file) {
+		DPAA_PMD_ERR("Unable to open SoC device");
+		return -ENOTSUP; /* Not supported on this infra */
+	}
+
+	ret = fscanf(svr_file, "svr:%x", &svr_ver);
+	fclose(svr_file);
+	if (ret <= 0) {
+		DPAA_PMD_ERR("Unable to read SoC device");
+		return -ENOTSUP; /* Not supported on this infra */
+	}
+
+	ret = snprintf(fw_version, fw_size,
+		       "svr:%x-fman-v%x",
+		       svr_ver,
+		       fman_ip_rev);
+
+	ret += 1; /* add the size of '\0' */
+	if (fw_size < (uint32_t)ret)
+		return ret;
+	else
+		return 0;
+}
+
 static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 			      struct rte_eth_dev_info *dev_info)
 {
@@ -519,6 +554,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
 	.mac_addr_set		  = dpaa_dev_set_mac_addr,
 
+	.fw_version_get		  = dpaa_fw_version_get,
 };
 
 static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
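
The dev_op above plugs into the generic ethdev query. An application reads
the string through rte_eth_dev_fw_version_get(), which by ethdev convention
returns 0 on success or, when the caller's buffer is too small, the number of
bytes required. A hedged usage sketch (the buffer size is arbitrary, and the
port_id type follows the ethdev API of this era):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_fw_version(uint16_t port_id)
    {
    	char fw[64]; /* arbitrary size for illustration */
    	int ret = rte_eth_dev_fw_version_get(port_id, fw, sizeof(fw));

    	if (ret == 0)
    		printf("port %u firmware: %s\n", port_id, fw);
    	else if (ret > 0)
    		printf("port %u: buffer too small, need %d bytes\n",
    		       port_id, ret);
    	else
    		printf("port %u: query failed (%d)\n", port_id, ret);
    }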

* [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
  2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
                       ` (39 preceding siblings ...)
  2017-08-23 14:12     ` [PATCH v3 40/40] net/dpaa: support for firmware version get API Shreyansh Jain
@ 2017-09-09 11:20     ` Shreyansh Jain
  2017-09-09 11:20       ` [PATCH v4 01/41] config: add NXP DPAA SoC build configuration Shreyansh Jain
                         ` (43 more replies)
  40 siblings, 44 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:20 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Change Log
==========

v4:
 - Checkpatch fixes reported by checkpatch@dpdk
 - Added support for extended stats (patch 41)

v3:
 - Rebased over 17.11-rc0 (85238f50)
 - Checkpatch fixes
   (There are still 2 errors which I believe are false positives)
 - Implemented the rte_bus.find_device() interface
 - Various other minor updates/cleanups

v2:
 - Fixed various review comments from Ferruh; broadly:
  -) Logging has been changed to use rte_log_register
  -) Logs across Bus, Mempool and PMD updated
  -) Fixed an incorrect feature claim in dpaa.ini
 - Removed the 24/40/48 bit swapping macros from EAL;
   these are now defined in dpaa/bus (compat.h)
 - Added missing memory cleanup operations
 - Updated documentation with some missing information

Introduction
============

RFC was posted here -> [R3]
V2 was posted here -> [R5]
V3 was posted here -> [R6]

This patch series adds NXP's QorIQ-Layerscape DPAA Architecture based
bus driver, mempool driver and PMD. This version of driver supports NXP
LS1043A/LS1023A, LS1046A/LS1026A family of network SoCs. [R1]

DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
components designed for high-speed network packet processing. This
architecture provides the infrastructure to support simplified sharing of
networking interfaces and accelerators by multiple CPU cores, and the
accelerators themselves.

This patchset introduces the following:
1. DPAA Bus (drivers/bus/dpaa)
 The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
 or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
 The patches introduce necessary layers to expose the DPAA hardware
 blocks for interfacing with RTE framework.

2. DPAA Mempool (drivers/mempool/dpaa)
 BMan, or Buffer Manager, block of DPAA features a hardware offloaded
 mempool. These patches add support for a driver to manage the BMan
 block. This driver allows for mempool creation, deletion, buffer
 acquire and release, as per the RTE APIs.

3. DPAA PMD (drivers/net/dpaa)
 The Poll Mode Driver for DPAA NIC Interfaces.

Patch Layout
============

01: Add DPAA SoC build configuration
02~16: Add DPAA Bus support and features, incrementally
17: Add Documentation
18~21: Add DPAA Mempool support
22~41: Add PMD and its various features, incrementally

References
==========

[R1] http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-layerscape-arm-processors:QORIQ-ARM
[R2] http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf
[R3] RFC: http://dpdk.org/ml/archives/dev/2017-May/066675.html
[R4] v1: http://dpdk.org/ml/archives/dev/2017-June/068020.html
[R5] v2: http://dpdk.org/ml/archives/dev/2017-July/070113.html
[R6] v3: http://dpdk.org/ml/archives/dev/2017-August/073269.html

Hemant Agrawal (3):
  bus/dpaa: add compatibility and helper macros
  net/dpaa: support for firmware version get API
  net/dpaa: support for extended statistics

Shreyansh Jain (38):
  config: add NXP DPAA SoC build configuration
  bus/dpaa: introduce NXP DPAA Bus driver skeleton
  bus/dpaa: add OF parser for device scanning
  bus/dpaa: introducing FMan configurations
  bus/dpaa: add FMan hardware operations
  bus/dpaa: enable DPAA IOCTL portal driver
  bus/dpaa: add layer for interrupt emulation using pthread
  bus/dpaa: add routines for managing a RB tree
  bus/dpaa: add QMAN interface driver
  bus/dpaa: add QMan driver core routines
  bus/dpaa: add BMAN driver core
  bus/dpaa: add support for FMAN frame queue lookup
  bus/dpaa: add BMan hardware interfaces
  bus/dpaa: add fman flow control threshold setting
  bus/dpaa: integrate DPAA Bus with hardware blocks
  doc: add NXP DPAA PMD documentation
  bus/dpaa: add DPAA mempool logging macros
  mempool/dpaa: add support for NXP DPAA Mempool
  drivers: enable compilation of DPAA Mempool driver
  maintainers: claim ownership of DPAA Mempool driver
  bus/dpaa: add DPAA PMD logging macros
  net/dpaa: add NXP DPAA PMD driver skeleton
  config: enable NXP DPAA PMD compilation
  net/dpaa: add support for Tx and Rx queue setup
  net/dpaa: add support for MTU update
  net/dpaa: add support for jumbo frames
  net/dpaa: add support for link status update
  net/dpaa: add support for device info and speed capability
  net/dpaa: add support for promiscuous toggle
  net/dpaa: add support for multicast toggle
  net/dpaa: add support for MAC address update
  net/dpaa: add support for basic stats
  net/dpaa: add support for flow control
  net/dpaa: add support for hashed RSS
  net/dpaa: add support for packet type parsing
  net/dpaa: add support for checksum offload
  net/dpaa: add support for Scattered Rx
  net/dpaa: add packet dump for debugging

 MAINTAINERS                                       |    9 +
 config/common_base                                |    5 +
 config/defconfig_arm64-dpaa-linuxapp-gcc          |   64 +
 doc/guides/nics/dpaa.rst                          |  374 +++
 doc/guides/nics/features/dpaa.ini                 |   24 +
 doc/guides/nics/index.rst                         |    1 +
 drivers/bus/Makefile                              |    3 +
 drivers/bus/dpaa/Makefile                         |   83 +
 drivers/bus/dpaa/base/fman/fman.c                 |  611 +++++
 drivers/bus/dpaa/base/fman/fman_hw.c              |  590 +++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c         |  214 ++
 drivers/bus/dpaa/base/fman/of.c                   |  576 +++++
 drivers/bus/dpaa/base/qbman/bman.c                |  394 ++++
 drivers/bus/dpaa/base/qbman/bman.h                |  550 +++++
 drivers/bus/dpaa/base/qbman/bman_driver.c         |  323 +++
 drivers/bus/dpaa/base/qbman/bman_priv.h           |  125 ++
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c          |  104 +
 drivers/bus/dpaa/base/qbman/dpaa_sys.c            |  136 ++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h            |   65 +
 drivers/bus/dpaa/base/qbman/process.c             |  331 +++
 drivers/bus/dpaa/base/qbman/qman.c                | 2497 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h                |  888 ++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c         |  288 +++
 drivers/bus/dpaa/base/qbman/qman_priv.h           |  314 +++
 drivers/bus/dpaa/dpaa_bus.c                       |  465 ++++
 drivers/bus/dpaa/include/compat.h                 |  389 ++++
 drivers/bus/dpaa/include/dpaa_bits.h              |   65 +
 drivers/bus/dpaa/include/dpaa_list.h              |  101 +
 drivers/bus/dpaa/include/dpaa_rbtree.h            |  143 ++
 drivers/bus/dpaa/include/fman.h                   |  458 ++++
 drivers/bus/dpaa/include/fsl_bman.h               |  375 ++++
 drivers/bus/dpaa/include/fsl_fman.h               |  181 ++
 drivers/bus/dpaa/include/fsl_fman_crc64.h         |  263 +++
 drivers/bus/dpaa/include/fsl_qman.h               | 2021 +++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h                |  107 +
 drivers/bus/dpaa/include/netcfg.h                 |   96 +
 drivers/bus/dpaa/include/of.h                     |  190 ++
 drivers/bus/dpaa/include/process.h                |  107 +
 drivers/bus/dpaa/rte_bus_dpaa_version.map         |   48 +
 drivers/bus/dpaa/rte_dpaa_bus.h                   |  173 ++
 drivers/bus/dpaa/rte_dpaa_logs.h                  |  130 ++
 drivers/mempool/Makefile                          |    2 +
 drivers/mempool/dpaa/Makefile                     |   64 +
 drivers/mempool/dpaa/dpaa_mempool.c               |  285 +++
 drivers/mempool/dpaa/dpaa_mempool.h               |   77 +
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |    6 +
 drivers/net/Makefile                              |    2 +
 drivers/net/dpaa/Makefile                         |   67 +
 drivers/net/dpaa/dpaa_ethdev.c                    | 1106 +++++++++
 drivers/net/dpaa/dpaa_ethdev.h                    |  177 ++
 drivers/net/dpaa/dpaa_rxtx.c                      |  760 +++++++
 drivers/net/dpaa/dpaa_rxtx.h                      |  297 +++
 drivers/net/dpaa/rte_pmd_dpaa_version.map         |    4 +
 mk/machine/dpaa/rte.vars.mk                       |   61 +
 mk/rte.app.mk                                     |    6 +
 55 files changed, 16795 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h
 create mode 100644 drivers/bus/dpaa/include/of.h
 create mode 100644 drivers/bus/dpaa/include/process.h
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map
 create mode 100644 mk/machine/dpaa/rte.vars.mk

-- 
2.9.3

^ permalink raw reply	[flat|nested] 367+ messages in thread
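
As the cover letter notes, the BMan-backed mempool plugs into the standard
RTE mempool APIs. A sketch of creating a pool served by external ops follows;
the ops name string "dpaa" is an assumption here, taken from the driver
directory name, and the pool dimensions are arbitrary:

    #include <rte_lcore.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    make_hw_backed_pool(void)
    {
    	struct rte_mempool *mp;

    	/* 8192 objects of 2048 bytes, 256-deep per-lcore cache */
    	mp = rte_mempool_create_empty("pkt_pool", 8192, 2048, 256, 0,
    				      rte_socket_id(), 0);
    	if (mp == NULL)
    		return NULL;

    	/* "dpaa" is an assumed ops name; the actual string is whatever
    	 * the mempool driver registers.
    	 */
    	if (rte_mempool_set_ops_byname(mp, "dpaa", NULL) != 0 ||
    	    rte_mempool_populate_default(mp) < 0) {
    		rte_mempool_free(mp);
    		return NULL;
    	}
    	return mp;
    }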

* [PATCH v4 01/41] config: add NXP DPAA SoC build configuration
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
@ 2017-09-09 11:20       ` Shreyansh Jain
  2017-09-09 11:20       ` [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
                         ` (42 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:20 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch adds the skeleton build configuration for the DPAA platform.

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc | 39 ++++++++++++++++++++
 mk/machine/dpaa/rte.vars.mk              | 61 ++++++++++++++++++++++++++++++++
 2 files changed, 100 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 mk/machine/dpaa/rte.vars.mk

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
new file mode 100644
index 0000000..0815026
--- /dev/null
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -0,0 +1,39 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 Freescale Semiconductor, Inc.
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+#include "defconfig_arm64-armv8a-linuxapp-gcc"
+
+# NXP (Freescale) - SoC architecture with FMAN, QMAN & BMAN support
+CONFIG_RTE_MACHINE="dpaa"
+CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
+CONFIG_RTE_LIBRTE_VHOST_NUMA=n
+CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
diff --git a/mk/machine/dpaa/rte.vars.mk b/mk/machine/dpaa/rte.vars.mk
new file mode 100644
index 0000000..356a6af
--- /dev/null
+++ b/mk/machine/dpaa/rte.vars.mk
@@ -0,0 +1,61 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+#   - can define ARCH variable (overridden by cmdline value)
+#   - can define CROSS variable (overridden by cmdline value)
+#   - define MACHINE_CFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+#   - can define CPU_CFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+MACHINE_CFLAGS += -march=armv8-a+crc
+
+ifdef CONFIG_RTE_ARCH_ARM_TUNE
+MACHINE_CFLAGS += -mtune=$(CONFIG_RTE_ARCH_ARM_TUNE:"%"=%)
+endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
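
With this defconfig in place, the usual make-based cross build of this era
applies; the aarch64 toolchain prefix below is an assumption and depends on
the host environment:

    make config T=arm64-dpaa-linuxapp-gcc
    make CROSS=aarch64-linux-gnu-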

* [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  2017-09-09 11:20       ` [PATCH v4 01/41] config: add NXP DPAA SoC build configuration Shreyansh Jain
@ 2017-09-09 11:20       ` Shreyansh Jain
  2017-09-18 14:47         ` Ferruh Yigit
  2017-09-09 11:20       ` [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros Shreyansh Jain
                         ` (41 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:20 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 MAINTAINERS                               |   5 +
 config/common_base                        |   3 +
 config/defconfig_arm64-dpaa-linuxapp-gcc  |   6 +
 drivers/bus/Makefile                      |   3 +
 drivers/bus/dpaa/Makefile                 |  62 +++++++++
 drivers/bus/dpaa/dpaa_bus.c               | 207 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |   7 +
 drivers/bus/dpaa/rte_dpaa_bus.h           | 164 +++++++++++++++++++++++
 drivers/bus/dpaa/rte_dpaa_logs.h          |  66 ++++++++++
 9 files changed, 523 insertions(+)
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h

diff --git a/MAINTAINERS b/MAINTAINERS
index a0cd75e..6ee20ce 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -405,6 +405,11 @@ F: drivers/net/nfp/
 F: doc/guides/nics/nfp.rst
 F: doc/guides/nics/features/nfp.ini
 
+NXP dpaa
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+M: Shreyansh Jain <shreyansh.jain@nxp.com>
+F: drivers/bus/dpaa/
+
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/config/common_base b/config/common_base
index 5e97a08..2bb2269 100644
--- a/config/common_base
+++ b/config/common_base
@@ -303,6 +303,9 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_MBOX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=n
+
 #
 # Compile NXP DPAA2 FSL-MC Bus
 #
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 0815026..110042c 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -37,3 +37,9 @@ CONFIG_RTE_MACHINE="dpaa"
 CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
 CONFIG_RTE_LIBRTE_VHOST_NUMA=n
 CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
+
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=y
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 0224214..6cb6466 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -32,6 +32,9 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mbuf librte_mempool librte_ring librte_ether
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += dpaa
+DEPDIRS-dpaa = $(core-libs)
+
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 DEPDIRS-fslmc = $(core-libs)
 
diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
new file mode 100644
index 0000000..ef508d3
--- /dev/null
+++ b/drivers/bus/dpaa/Makefile
@@ -0,0 +1,62 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_BUS_DPAA=$(RTE_SDK)/drivers/bus/dpaa
+
+#
+# library name
+#
+LIB = librte_bus_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+
+# versioning export map
+EXPORT_MAP := rte_bus_dpaa_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	dpaa_bus.c
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
new file mode 100644
index 0000000..cc343b3
--- /dev/null
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -0,0 +1,207 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_bus.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+int dpaa_logtype_bus;
+
+struct rte_dpaa_bus rte_dpaa_bus;
+
+static inline void
+dpaa_add_to_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static inline void
+dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static int
+rte_dpaa_bus_scan(void)
+{
+	BUS_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/* register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
+{
+	RTE_VERIFY(driver);
+
+	BUS_INIT_FUNC_TRACE();
+
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = &rte_dpaa_bus;
+}
+
+/* un-register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
+{
+	struct rte_dpaa_bus *dpaa_bus;
+
+	BUS_INIT_FUNC_TRACE();
+
+	dpaa_bus = driver->dpaa_bus;
+
+	TAILQ_REMOVE(&dpaa_bus->driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = NULL;
+}
+
+static int
+rte_dpaa_device_match(struct rte_dpaa_driver *drv,
+		      struct rte_dpaa_device *dev)
+{
+	int ret = -1;
+
+	BUS_INIT_FUNC_TRACE();
+
+	if (!drv || !dev) {
+		DPAA_BUS_DEBUG("Invalid drv or dev received.");
+		return ret;
+	}
+
+	if (drv->drv_type == dev->device_type) {
+		DPAA_BUS_INFO("Device: %s matches for driver: %s",
+			      dev->name, drv->driver.name);
+		ret = 0; /* Found a match */
+	}
+
+	return ret;
+}
+
+static int
+rte_dpaa_bus_probe(void)
+{
+	int ret = -1;
+	struct rte_dpaa_device *dev;
+	struct rte_dpaa_driver *drv;
+
+	BUS_INIT_FUNC_TRACE();
+
+	/* For each registered driver, and device, call the driver->probe */
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		TAILQ_FOREACH(drv, &rte_dpaa_bus.driver_list, next) {
+			ret = rte_dpaa_device_match(drv, dev);
+			if (ret)
+				continue;
+
+			if (!drv->probe)
+				continue;
+
+			ret = drv->probe(drv, dev);
+			if (ret)
+				DPAA_BUS_ERR("Unable to probe.\n");
+			break;
+		}
+	}
+	return 0;
+}
+
+static struct rte_device *
+rte_dpaa_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
+		     const void *data)
+{
+	struct rte_dpaa_device *dev;
+
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		if (start && &dev->device == start) {
+			start = NULL;  /* starting point found */
+			continue;
+		}
+
+		if (cmp(&dev->device, data) == 0)
+			return &dev->device;
+	}
+
+	return NULL;
+}
+
+struct rte_dpaa_bus rte_dpaa_bus = {
+	.bus = {
+		.scan = rte_dpaa_bus_scan,
+		.probe = rte_dpaa_bus_probe,
+		.find_device = rte_dpaa_find_device,
+	},
+	.device_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.device_list),
+	.driver_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.driver_list),
+	.device_count = 0,
+};
+
+RTE_REGISTER_BUS(FSL_DPAA_BUS_NAME, rte_dpaa_bus.bus);
+
+RTE_INIT(dpaa_init_log);
+static void
+dpaa_init_log(void)
+{
+	dpaa_logtype_bus = rte_log_register("bus.dpaa");
+	if (dpaa_logtype_bus >= 0)
+		rte_log_set_level(dpaa_logtype_bus, RTE_LOG_NOTICE);
+}
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
new file mode 100644
index 0000000..d97a009
--- /dev/null
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -0,0 +1,7 @@
+DPDK_17.11 {
+	global:
+
+	rte_dpaa_driver_register;
+	rte_dpaa_driver_unregister;
+
+};
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
new file mode 100644
index 0000000..8a1e192
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -0,0 +1,164 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __RTE_DPAA_BUS_H__
+#define __RTE_DPAA_BUS_H__
+
+#include <rte_bus.h>
+#include <rte_mempool.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
+
+#define DEV_TO_DPAA_DEVICE(ptr)	\
+		container_of(ptr, struct rte_dpaa_device, device)
+
+struct rte_dpaa_device;
+struct rte_dpaa_driver;
+
+/* DPAA Device and Driver lists for DPAA bus */
+TAILQ_HEAD(rte_dpaa_device_list, rte_dpaa_device);
+TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
+
+enum rte_dpaa_type {
+	FSL_DPAA_ETH = 1,
+	FSL_DPAA_CRYPTO,
+};
+
+struct rte_dpaa_bus {
+	struct rte_bus bus;
+	struct rte_dpaa_device_list device_list;
+	struct rte_dpaa_driver_list driver_list;
+	int device_count;
+};
+
+struct dpaa_device_id {
+	uint8_t fman_id; /**< Fman interface ID, for ETH type device */
+	uint8_t mac_id; /**< Fman MAC interface ID, for ETH type device */
+	uint16_t dev_id; /**< Device Identifier from DPDK */
+};
+
+struct rte_dpaa_device {
+	TAILQ_ENTRY(rte_dpaa_device) next;
+	struct rte_device device;
+	union {
+		struct rte_eth_dev *eth_dev;
+		struct rte_cryptodev *crypto_dev;
+	};
+	struct rte_dpaa_driver *driver;
+	struct dpaa_device_id id;
+	enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
+	char name[RTE_ETH_NAME_MAX_LEN];
+};
+
+typedef int (*rte_dpaa_probe_t)(struct rte_dpaa_driver *dpaa_drv,
+				struct rte_dpaa_device *dpaa_dev);
+typedef int (*rte_dpaa_remove_t)(struct rte_dpaa_device *dpaa_dev);
+
+struct rte_dpaa_driver {
+	TAILQ_ENTRY(rte_dpaa_driver) next;
+	struct rte_driver driver;
+	struct rte_dpaa_bus *dpaa_bus;
+	enum rte_dpaa_type drv_type;
+	rte_dpaa_probe_t probe;
+	rte_dpaa_remove_t remove;
+};
+
+struct dpaa_portal {
+	uint32_t bman_idx; /**< BMAN Portal ID*/
+	uint32_t qman_idx; /**< QMAN Portal ID*/
+	uint64_t tid;/**< Parent Thread id for this portal */
+};
+
+/* TODO - this is costly; need to write a fast conversion routine */
+static inline void *rte_dpaa_mem_ptov(phys_addr_t paddr)
+{
+	const struct rte_memseg *memseg = rte_eal_get_physmem_layout();
+	int i;
+
+	for (i = 0; i < RTE_MAX_MEMSEG && memseg[i].addr != NULL; i++) {
+		if (paddr >= memseg[i].phys_addr && paddr <
+			memseg[i].phys_addr + memseg[i].len)
+			return (uint8_t *)(memseg[i].addr) +
+			       (paddr - memseg[i].phys_addr);
+	}
+
+	return NULL;
+}
+
+/**
+ * Register a DPAA driver.
+ *
+ * @param driver
+ *   A pointer to a rte_dpaa_driver structure describing the driver
+ *   to be registered.
+ */
+void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
+
+/**
+ * Unregister a DPAA driver.
+ *
+ * @param driver
+ *	A pointer to a rte_dpaa_driver structure describing the driver
+ *	to be unregistered.
+ */
+void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
+
+/**
+ * Initialize a DPAA portal
+ *
+ * @param arg
+ *	Per thread ID
+ *
+ * @return
+ *	0 in case of success, error otherwise
+ */
+int rte_dpaa_portal_init(void *arg);
+
+/**
+ * Cleanup a DPAA Portal
+ */
+void dpaa_portal_finish(void *arg);
+
+/** Helper for DPAA device registration from driver (eth, crypto) instance */
+#define RTE_PMD_REGISTER_DPAA(nm, dpaa_drv) \
+RTE_INIT(dpaainitfn_ ##nm); \
+static void dpaainitfn_ ##nm(void) \
+{\
+	(dpaa_drv).driver.name = RTE_STR(nm);\
+	rte_dpaa_driver_register(&dpaa_drv); \
+} \
+RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_DPAA_BUS_H__ */
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
new file mode 100644
index 0000000..3ca3f9b
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -0,0 +1,66 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA_LOGS_H_
+#define _DPAA_LOGS_H_
+
+#include <rte_log.h>
+
+extern int dpaa_logtype_bus;
+
+#define DPAA_BUS_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define BUS_INIT_FUNC_TRACE() DPAA_BUS_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_BUS
+#define DPAA_BUS_HWWARN(cond, fmt, args...) \
+	do {\
+		if (cond) \
+			DPAA_BUS_LOG(DEBUG, "WARN: " fmt, ##args); \
+	} while (0)
+#define DPAA_BUS_DEBUG(fmt, args...) \
+	DPAA_BUS_LOG(DEBUG, fmt, ## args)
+#else
+#define DPAA_BUS_HWWARN(cond, fmt, args...) do { } while (0)
+#define DPAA_BUS_DEBUG(fmt, args...) do { } while (0)
+#endif
+
+#define DPAA_BUS_INFO(fmt, args...) \
+	DPAA_BUS_LOG(INFO, fmt, ## args)
+#define DPAA_BUS_ERR(fmt, args...) \
+	DPAA_BUS_LOG(ERR, fmt, ## args)
+#define DPAA_BUS_WARN(fmt, args...) \
+	DPAA_BUS_LOG(WARNING, fmt, ## args)
+
+#endif /* _DPAA_LOGS_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
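
For context, the rte_dpaa_bus.h interface above is consumed by PMDs roughly
as follows. This is an illustrative sketch rather than code from the series;
the driver name and the probe/remove bodies are placeholders:

    #include <rte_common.h>
    #include <rte_dpaa_bus.h>

    static int
    my_dpaa_probe(struct rte_dpaa_driver *drv __rte_unused,
    	      struct rte_dpaa_device *dev __rte_unused)
    {
    	/* A real PMD would map portals and allocate an eth_dev here */
    	return 0;
    }

    static int
    my_dpaa_remove(struct rte_dpaa_device *dev __rte_unused)
    {
    	return 0;
    }

    static struct rte_dpaa_driver my_dpaa_pmd = {
    	.drv_type = FSL_DPAA_ETH,
    	.probe = my_dpaa_probe,
    	.remove = my_dpaa_remove,
    };

    /* Expands to a constructor that sets .driver.name = "net_mydpaa"
     * and calls rte_dpaa_driver_register()
     */
    RTE_PMD_REGISTER_DPAA(net_mydpaa, my_dpaa_pmd);

At probe time, rte_dpaa_bus_probe() matches each scanned device against the
registered drivers by drv_type and invokes the driver's probe callback.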

* [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  2017-09-09 11:20       ` [PATCH v4 01/41] config: add NXP DPAA SoC build configuration Shreyansh Jain
  2017-09-09 11:20       ` [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
@ 2017-09-09 11:20       ` Shreyansh Jain
  2017-09-18 14:49         ` Ferruh Yigit
  2017-09-09 11:20       ` [PATCH v4 04/41] bus/dpaa: add OF parser for device scanning Shreyansh Jain
                         ` (40 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:20 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Linked list, bit operations and compatibility macros.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 v3:
 - Removed checkpatch warning and duplicate PER_CPU macro
---
 drivers/bus/dpaa/include/compat.h    | 389 +++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/dpaa_bits.h |  65 ++++++
 drivers/bus/dpaa/include/dpaa_list.h | 101 +++++++++
 3 files changed, 555 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h

diff --git a/drivers/bus/dpaa/include/compat.h b/drivers/bus/dpaa/include/compat.h
new file mode 100644
index 0000000..a1fd53e
--- /dev/null
+++ b/drivers/bus/dpaa/include/compat.h
@@ -0,0 +1,389 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __COMPAT_H
+#define __COMPAT_H
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+
+#include <sched.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <pthread.h>
+#include <linux/types.h>
+#include <stdbool.h>
+#include <ctype.h>
+#include <malloc.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <limits.h>
+#include <assert.h>
+#include <dirent.h>
+#include <inttypes.h>
+#include <error.h>
+#include <rte_byteorder.h>
+#include <rte_atomic.h>
+#include <rte_spinlock.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+
+/* The following definitions are primarily to allow the single-source driver
+ * interfaces to be included by arbitrary program code. Ie. for interfaces that
+ * are also available in kernel-space, these definitions provide compatibility
+ * with certain attributes and types used in those interfaces.
+ */
+
+/* Required compiler attributes */
+#define __maybe_unused	__rte_unused
+#define __always_unused	__rte_unused
+#define __packed	__rte_packed
+#define noinline	__attribute__((noinline))
+
+#define L1_CACHE_BYTES 64
+#define ____cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
+#define __stringify_1(x) #x
+#define __stringify(x)	__stringify_1(x)
+
+#ifdef ARRAY_SIZE
+#undef ARRAY_SIZE
+#endif
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
+
+/* Debugging */
+#define prflush(fmt, args...) \
+	do { \
+		printf(fmt, ##args); \
+		fflush(stdout); \
+	} while (0)
+
+#define pr_crit(fmt, args...)	 prflush("CRIT:" fmt, ##args)
+#define pr_err(fmt, args...)	 prflush("ERR:" fmt, ##args)
+#define pr_warn(fmt, args...)	 prflush("WARN:" fmt, ##args)
+#define pr_info(fmt, args...)	 prflush(fmt, ##args)
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_BUS
+#ifdef pr_debug
+#undef pr_debug
+#endif
+#define pr_debug(fmt, args...)	printf(fmt, ##args)
+#else
+#define pr_debug(fmt, args...) {}
+#endif
+
+#define ASSERT(x) do {\
+	if (!(x)) \
+		rte_panic("DPAA: assertion failed: %s\n", __stringify(x)); \
+} while (0)
+#define DPAA_BUG_ON(x) ASSERT(!(x))
+
+/* Required types */
+typedef uint8_t		u8;
+typedef uint16_t	u16;
+typedef uint32_t	u32;
+typedef uint64_t	u64;
+typedef uint64_t	dma_addr_t;
+typedef cpu_set_t	cpumask_t;
+typedef uint32_t	phandle;
+typedef uint32_t	gfp_t;
+typedef uint32_t	irqreturn_t;
+
+#define IRQ_HANDLED	0
+#define request_irq	qbman_request_irq
+#define free_irq	qbman_free_irq
+
+#define __iomem
+#define GFP_KERNEL	0
+#define __raw_readb(p)	(*(const volatile unsigned char *)(p))
+#define __raw_readl(p)	(*(const volatile unsigned int *)(p))
+#define __raw_writel(v, p) {*(volatile unsigned int *)(p) = (v); }
+
+/* to be used as an upper-limit only */
+#define NR_CPUS			64
+
+/* Waitqueue stuff */
+typedef struct { }		wait_queue_head_t;
+#define DECLARE_WAIT_QUEUE_HEAD(x) int dummy_##x __always_unused
+#define wake_up(x)		do { } while (0)
+
+/* I/O operations */
+static inline u32 in_be32(volatile void *__p)
+{
+	volatile u32 *p = __p;
+	return rte_be_to_cpu_32(*p);
+}
+
+static inline void out_be32(volatile void *__p, u32 val)
+{
+	volatile u32 *p = __p;
+	*p = rte_cpu_to_be_32(val);
+}
+
+#define dcbt_ro(p) __builtin_prefetch(p, 0)
+#define dcbt_rw(p) __builtin_prefetch(p, 1)
+
+#define dcbz(p) { asm volatile("dc zva, %0" : : "r" (p) : "memory"); }
+#define dcbz_64(p) dcbz(p)
+#define hwsync() rte_rmb()
+#define lwsync() rte_wmb()
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+#define dccivac(p) { asm volatile("dc civac, %0" : : "r"(p) : "memory"); }
+
+#define dcbit_ro(p) \
+	do { \
+		dccivac(p);						\
+		asm volatile("prfm pldl1keep, [%0, #64]" : : "r" (p));	\
+	} while (0)
+
+#define barrier() { asm volatile ("" : : : "memory"); }
+#define cpu_relax barrier
+
+static inline uint64_t mfatb(void)
+{
+	uint64_t ret, ret_new, timeout = 200;
+
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret));
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	while (ret != ret_new && timeout--) {
+		ret = ret_new;
+		asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	}
+	DPAA_BUG_ON(!timeout && (ret != ret_new));
+	return ret * 64;
+}
+
+/* Spin for a few cycles without bothering the bus */
+static inline void cpu_spin(int cycles)
+{
+	uint64_t now = mfatb();
+
+	while (mfatb() < (now + cycles))
+		;
+}
+
+/* Qman/Bman API inlines and macros; */
+#ifdef lower_32_bits
+#undef lower_32_bits
+#endif
+#define lower_32_bits(x) ((u32)(x))
+
+#ifdef upper_32_bits
+#undef upper_32_bits
+#endif
+#define upper_32_bits(x) ((u32)(((x) >> 16) >> 16))
+
+/*
+ * Swap bytes of a 48-bit value.
+ */
+static inline uint64_t
+__bswap_48(uint64_t x)
+{
+	return  ((x & 0x0000000000ffULL) << 40) |
+		((x & 0x00000000ff00ULL) << 24) |
+		((x & 0x000000ff0000ULL) <<  8) |
+		((x & 0x0000ff000000ULL) >>  8) |
+		((x & 0x00ff00000000ULL) >> 24) |
+		((x & 0xff0000000000ULL) >> 40);
+}
+
+/*
+ * Swap bytes of a 40-bit value.
+ */
+static inline uint64_t
+__bswap_40(uint64_t x)
+{
+	return  ((x & 0x00000000ffULL) << 32) |
+		((x & 0x000000ff00ULL) << 16) |
+		((x & 0x0000ff0000ULL)) |
+		((x & 0x00ff000000ULL) >> 16) |
+		((x & 0xff00000000ULL) >> 32);
+}
+
+/*
+ * Swap bytes of a 24-bit value.
+ */
+static inline uint32_t
+__bswap_24(uint32_t x)
+{
+	return  ((x & 0x0000ffULL) << 16) |
+		((x & 0x00ff00ULL)) |
+		((x & 0xff0000ULL) >> 16);
+}
+
+#define be64_to_cpu(x) rte_be_to_cpu_64(x)
+#define be32_to_cpu(x) rte_be_to_cpu_32(x)
+#define be16_to_cpu(x) rte_be_to_cpu_16(x)
+
+#define cpu_to_be64(x) rte_cpu_to_be_64(x)
+#define cpu_to_be32(x) rte_cpu_to_be_32(x)
+#define cpu_to_be16(x) rte_cpu_to_be_16(x)
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+
+#define cpu_to_be48(x) __bswap_48(x)
+#define be48_to_cpu(x) __bswap_48(x)
+
+#define cpu_to_be40(x) __bswap_40(x)
+#define be40_to_cpu(x) __bswap_40(x)
+
+#define cpu_to_be24(x) __bswap_24(x)
+#define be24_to_cpu(x) __bswap_24(x)
+
+#else /* RTE_BIG_ENDIAN */
+
+#define cpu_to_be48(x) (x)
+#define be48_to_cpu(x) (x)
+
+#define cpu_to_be40(x) (x)
+#define be40_to_cpu(x) (x)
+
+#define cpu_to_be24(x) (x)
+#define be24_to_cpu(x) (x)
+
+#endif /* RTE_BIG_ENDIAN */
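+
+/*
+ * Illustrative usage only (not part of the API): a 48-bit big-endian
+ * field read from hardware can be converted to host order with, e.g.:
+ *
+ *	u64 host_val = be48_to_cpu(hw_field);
+ *
+ * On big-endian hosts these macros compile away to the identity.
+ */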
+
+/* memcpy() helpers - when alignments are known in advance, copying
+ * aligned words or shorts directly avoids a call to memcpy().
+ */
+#define CONFIG_TRY_BETTER_MEMCPY
+
+#ifdef CONFIG_TRY_BETTER_MEMCPY
+static inline void copy_words(void *dest, const void *src, size_t sz)
+{
+	u32 *__dest = dest;
+	const u32 *__src = src;
+	size_t __sz = sz >> 2;
+
+	DPAA_BUG_ON((unsigned long)dest & 0x3);
+	DPAA_BUG_ON((unsigned long)src & 0x3);
+	DPAA_BUG_ON(sz & 0x3);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_shorts(void *dest, const void *src, size_t sz)
+{
+	u16 *__dest = dest;
+	const u16 *__src = src;
+	size_t __sz = sz >> 1;
+
+	DPAA_BUG_ON((unsigned long)dest & 0x1);
+	DPAA_BUG_ON((unsigned long)src & 0x1);
+	DPAA_BUG_ON(sz & 0x1);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_bytes(void *dest, const void *src, size_t sz)
+{
+	u8 *__dest = dest;
+	const u8 *__src = src;
+
+	while (sz--)
+		*(__dest++) = *(__src++);
+}
+#else
+#define copy_words memcpy
+#define copy_shorts memcpy
+#define copy_bytes memcpy
+#endif
+
+/* Allocator stuff */
+#define kmalloc(sz, t)	malloc(sz)
+#define vmalloc(sz)	malloc(sz)
+#define kfree(p)	{ if (p) free(p); }
+static inline void *kzalloc(size_t sz, gfp_t __foo __rte_unused)
+{
+	void *ptr = malloc(sz);
+
+	if (ptr)
+		memset(ptr, 0, sz);
+	return ptr;
+}
+
+static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
+{
+	void *p;
+
+	if (posix_memalign(&p, 4096, 4096))
+		return 0;
+	memset(p, 0, 4096);
+	return (unsigned long)p;
+}
+
+/* Spinlock stuff */
+#define spinlock_t		rte_spinlock_t
+#define __SPIN_LOCK_UNLOCKED(x)	RTE_SPINLOCK_INITIALIZER
+#define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+#define spin_lock_init(x)	rte_spinlock_init(x)
+#define spin_lock_destroy(x)
+#define spin_lock(x)		rte_spinlock_lock(x)
+#define spin_unlock(x)		rte_spinlock_unlock(x)
+#define spin_lock_irq(x)	spin_lock(x)
+#define spin_unlock_irq(x)	spin_unlock(x)
+#define spin_lock_irqsave(x, f) spin_lock_irq(x)
+#define spin_unlock_irqrestore(x, f) spin_unlock_irq(x)
+
+#define atomic_t                rte_atomic32_t
+#define atomic_read(v)          rte_atomic32_read(v)
+#define atomic_set(v, i)        rte_atomic32_set(v, i)
+
+#define atomic_inc(v)           rte_atomic32_add(v, 1)
+#define atomic_dec(v)           rte_atomic32_sub(v, 1)
+
+#define atomic_inc_and_test(v)  rte_atomic32_inc_and_test(v)
+#define atomic_dec_and_test(v)  rte_atomic32_dec_and_test(v)
+
+#define atomic_inc_return(v)    rte_atomic32_add_return(v, 1)
+#define atomic_dec_return(v)    rte_atomic32_sub_return(v, 1)
+#define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
+
+#include <dpaa_list.h>
+#include <dpaa_bits.h>
+
+#endif /* __COMPAT_H */
diff --git a/drivers/bus/dpaa/include/dpaa_bits.h b/drivers/bus/dpaa/include/dpaa_bits.h
new file mode 100644
index 0000000..71f2d80
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_bits.h
@@ -0,0 +1,65 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_BITS_H
+#define __DPAA_BITS_H
+
+/* Bitfield stuff. */
+#define BITS_PER_ULONG	(sizeof(unsigned long) << 3)
+#define SHIFT_PER_ULONG	(((1 << 5) == BITS_PER_ULONG) ? 5 : 6)
+#define BITS_MASK(idx)	(1UL << ((idx) & (BITS_PER_ULONG - 1)))
+#define BITS_IDX(idx)	((idx) >> SHIFT_PER_ULONG)
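+/* e.g. on a 64-bit host, idx 65 selects word BITS_IDX(65) == 1 and
+ * bit mask BITS_MASK(65) == (1UL << 1).
+ */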
+
+static inline void dpaa_set_bits(unsigned long mask,
+				 volatile unsigned long *p)
+{
+	*p |= mask;
+}
+
+static inline void dpaa_set_bit(int idx, volatile unsigned long *bits)
+{
+	dpaa_set_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+static inline void dpaa_clear_bits(unsigned long mask,
+				   volatile unsigned long *p)
+{
+	*p &= ~mask;
+}
+
+static inline void dpaa_clear_bit(int idx,
+				  volatile unsigned long *bits)
+{
+	dpaa_clear_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+#endif /* __DPAA_BITS_H */
diff --git a/drivers/bus/dpaa/include/dpaa_list.h b/drivers/bus/dpaa/include/dpaa_list.h
new file mode 100644
index 0000000..871e612
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_list.h
@@ -0,0 +1,101 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_LIST_H
+#define __DPAA_LIST_H
+
+/****************/
+/* Linked-lists */
+/****************/
+
+struct list_head {
+	struct list_head *prev;
+	struct list_head *next;
+};
+
+#define COMPAT_LIST_HEAD(n) \
+struct list_head n = { \
+	.prev = &n, \
+	.next = &n \
+}
+
+#define INIT_LIST_HEAD(p) \
+do { \
+	struct list_head *__p298 = (p); \
+	__p298->next = __p298; \
+	__p298->prev = __p298->next; \
+} while (0)
+#define list_entry(node, type, member) \
+	(type *)((void *)node - offsetof(type, member))
+#define list_empty(p) \
+({ \
+	const struct list_head *__p298 = (p); \
+	((__p298->next == __p298) && (__p298->prev == __p298)); \
+})
+#define list_add(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->next = __l298->next; \
+	__p298->prev = __l298; \
+	__l298->next->prev = __p298; \
+	__l298->next = __p298; \
+} while (0)
+#define list_add_tail(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->prev = __l298->prev; \
+	__p298->next = __l298; \
+	__l298->prev->next = __p298; \
+	__l298->prev = __p298; \
+} while (0)
+#define list_for_each(i, l)				\
+	for (i = (l)->next; i != (l); i = i->next)
+#define list_for_each_safe(i, j, l)			\
+	for (i = (l)->next, j = i->next; i != (l);	\
+	     i = j, j = i->next)
+#define list_for_each_entry(i, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name); &i->name != (l); \
+		i = list_entry(i->name.next, typeof(*i), name))
+#define list_for_each_entry_safe(i, j, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name), \
+		j = list_entry(i->name.next, typeof(*j), name); \
+		&i->name != (l); \
+		i = j, j = list_entry(j->name.next, typeof(*j), name))
+#define list_del(i) \
+do { \
+	(i)->next->prev = (i)->prev; \
+	(i)->prev->next = (i)->next; \
+} while (0)
+
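+/*
+ * Illustrative usage only (hypothetical "struct item" entry type and
+ * helpers):
+ *
+ *	struct item { int v; struct list_head node; };
+ *	COMPAT_LIST_HEAD(items);
+ *	struct item *i, *n = make_item();
+ *
+ *	list_add_tail(&n->node, &items);
+ *	list_for_each_entry(i, &items, node)
+ *		use(i->v);
+ */
+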
+#endif /* __DPAA_LIST_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 04/41] bus/dpaa: add OF parser for device scanning
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (2 preceding siblings ...)
  2017-09-09 11:20       ` [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros Shreyansh Jain
@ 2017-09-09 11:20       ` Shreyansh Jain
  2017-09-18 14:49         ` Ferruh Yigit
  2017-09-09 11:20       ` [PATCH v4 05/41] bus/dpaa: introducing FMan configurations Shreyansh Jain
                         ` (39 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:20 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This layer is used by the Bus driver's scan function. Devices are
parsed using the OF parser and added to the DPAA device list.
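
As an illustration only (not part of this patch), a consumer of this
layer might look like the following sketch, built on the of.h API
added below (error handling omitted):

	const struct device_node *dev_node;

	if (of_init())	/* parses /proc/device-tree by default */
		return -ENODEV;
	for_each_compatible_node(dev_node, NULL, "fsl,dpa-ethernet-init")
		printf("found %s\n", dev_node->full_name);
	of_finish();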

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile       |   7 +
 drivers/bus/dpaa/base/fman/of.c | 576 ++++++++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/of.h   | 190 +++++++++++++
 3 files changed, 773 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/include/of.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index ef508d3..488e263 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -44,7 +44,12 @@ CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 endif
 
+CFLAGS += -Wno-pointer-arith
+CFLAGS += -Wno-cast-qual
+CFLAGS += -D_GNU_SOURCE
+
 CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_BUS_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -58,5 +63,7 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/of.c \
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/fman/of.c b/drivers/bus/dpaa/base/fman/of.c
new file mode 100644
index 0000000..b2d7c02
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/of.c
@@ -0,0 +1,576 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <of.h>
+#include <rte_dpaa_logs.h>
+
+static int alive;
+static struct dt_dir root_dir;
+static const char *base_dir;
+static COMPAT_LIST_HEAD(linear);
+
+static int
+of_open_dir(const char *relative_path, struct dirent ***d)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = scandir(full_path, d, 0, versionsort);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open directory %s",
+			     full_path);
+	return ret;
+}
+
+static void
+of_close_dir(struct dirent **d, int num)
+{
+	while (num--)
+		free(d[num]);
+	free(d);
+}
+
+static int
+of_open_file(const char *relative_path)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = open(full_path, O_RDONLY);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open file %s",
+			     full_path);
+	return ret;
+}
+
+static void
+process_file(struct dirent *dent, struct dt_dir *parent)
+{
+	int fd;
+	struct dt_file *f = malloc(sizeof(*f));
+
+	if (!f) {
+		DPAA_BUS_LOG(DEBUG, "Unable to allocate memory for file node");
+		return;
+	}
+	f->node.is_file = 1;
+	snprintf(f->node.node.name, NAME_MAX, "%s", dent->d_name);
+	snprintf(f->node.node.full_name, PATH_MAX, "%s/%s",
+		 parent->node.node.full_name, dent->d_name);
+	f->parent = parent;
+	fd = of_open_file(f->node.node.full_name);
+	if (fd < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to open file node");
+		free(f);
+		return;
+	}
+	f->len = read(fd, f->buf, OF_FILE_BUF_MAX);
+	close(fd);
+	if (f->len < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to read file node");
+		free(f);
+		return;
+	}
+	list_add_tail(&f->node.list, &parent->files);
+}
+
+static const struct dt_dir *
+node2dir(const struct device_node *n)
+{
+	struct dt_node *dn = container_of((struct device_node *)n,
+					  struct dt_node, node);
+	const struct dt_dir *d = container_of(dn, struct dt_dir, node);
+
+	assert(!dn->is_file);
+	return d;
+}
+
+/* process_dir() calls iterate_dir(), but the latter will also call the former
+ * when recursing into sub-directories, so a predeclaration is needed.
+ */
+static int process_dir(const char *relative_path, struct dt_dir *dt);
+
+static int
+iterate_dir(struct dirent **d, int num, struct dt_dir *dt)
+{
+	int loop;
+	/* Iterate the directory contents */
+	for (loop = 0; loop < num; loop++) {
+		struct dt_dir *subdir;
+		int ret;
+		/* Ignore dot files of all types (especially "..") */
+		if (d[loop]->d_name[0] == '.')
+			continue;
+		switch (d[loop]->d_type) {
+		case DT_REG:
+			process_file(d[loop], dt);
+			break;
+		case DT_DIR:
+			subdir = malloc(sizeof(*subdir));
+			if (!subdir) {
+				perror("malloc");
+				return -ENOMEM;
+			}
+			snprintf(subdir->node.node.name, NAME_MAX, "%s",
+				 d[loop]->d_name);
+			snprintf(subdir->node.node.full_name, PATH_MAX,
+				 "%s/%s", dt->node.node.full_name,
+				 d[loop]->d_name);
+			subdir->parent = dt;
+			ret = process_dir(subdir->node.node.full_name, subdir);
+			if (ret)
+				return ret;
+			list_add_tail(&subdir->node.list, &dt->subdirs);
+			break;
+		default:
+			DPAA_BUS_LOG(DEBUG, "Ignoring invalid dt entry %s/%s",
+				     dt->node.node.full_name, d[loop]->d_name);
+		}
+	}
+	return 0;
+}
+
+static int
+process_dir(const char *relative_path, struct dt_dir *dt)
+{
+	struct dirent **d;
+	int ret, num;
+
+	dt->node.is_file = 0;
+	INIT_LIST_HEAD(&dt->subdirs);
+	INIT_LIST_HEAD(&dt->files);
+	ret = of_open_dir(relative_path, &d);
+	if (ret < 0)
+		return ret;
+	num = ret;
+	ret = iterate_dir(d, num, dt);
+	of_close_dir(d, num);
+	return (ret < 0) ? ret : 0;
+}
+
+static void
+linear_dir(struct dt_dir *d)
+{
+	struct dt_file *f;
+	struct dt_dir *dd;
+
+	d->compatible = NULL;
+	d->status = NULL;
+	d->lphandle = NULL;
+	d->a_cells = NULL;
+	d->s_cells = NULL;
+	d->reg = NULL;
+	list_for_each_entry(f, &d->files, node.list) {
+		if (!strcmp(f->node.node.name, "compatible")) {
+			if (d->compatible)
+				DPAA_BUS_LOG(DEBUG, "Duplicate compatible in"
+					     " %s", d->node.node.full_name);
+			d->compatible = f;
+		} else if (!strcmp(f->node.node.name, "status")) {
+			if (d->status)
+				DPAA_BUS_LOG(DEBUG, "Duplicate status in %s",
+					     d->node.node.full_name);
+			d->status = f;
+		} else if (!strcmp(f->node.node.name, "linux,phandle")) {
+			if (d->lphandle)
+				DPAA_BUS_LOG(DEBUG, "Duplicate lphandle in %s",
+					     d->node.node.full_name);
+			d->lphandle = f;
+		} else if (!strcmp(f->node.node.name, "#address-cells")) {
+			if (d->a_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate a_cells in %s",
+					     d->node.node.full_name);
+			d->a_cells = f;
+		} else if (!strcmp(f->node.node.name, "#size-cells")) {
+			if (d->s_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate s_cells in %s",
+					     d->node.node.full_name);
+			d->s_cells = f;
+		} else if (!strcmp(f->node.node.name, "reg")) {
+			if (d->reg)
+				DPAA_BUS_LOG(DEBUG, "Duplicate reg in %s",
+					     d->node.node.full_name);
+			d->reg = f;
+		}
+	}
+
+	list_for_each_entry(dd, &d->subdirs, node.list) {
+		list_add_tail(&dd->linear, &linear);
+		linear_dir(dd);
+	}
+}
+
+int
+of_init_path(const char *dt_path)
+{
+	int ret;
+
+	base_dir = dt_path;
+
+	/* This needs to be singleton initialization */
+	DPAA_BUS_HWWARN(alive, "Double-init of device-tree driver!");
+
+	/* Prepare root node (the remaining fields are set in process_dir()) */
+	root_dir.node.node.name[0] = '\0';
+	root_dir.node.node.full_name[0] = '\0';
+	INIT_LIST_HEAD(&root_dir.node.list);
+	root_dir.parent = NULL;
+
+	/* Kick things off... */
+	ret = process_dir("", &root_dir);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to parse device tree");
+		return ret;
+	}
+
+	/* Now make a flat, linear list of directories */
+	linear_dir(&root_dir);
+	alive = 1;
+	return 0;
+}
+
+static void
+destroy_dir(struct dt_dir *d)
+{
+	struct dt_file *f, *tmpf;
+	struct dt_dir *dd, *tmpd;
+
+	list_for_each_entry_safe(f, tmpf, &d->files, node.list) {
+		list_del(&f->node.list);
+		free(f);
+	}
+	list_for_each_entry_safe(dd, tmpd, &d->subdirs, node.list) {
+		destroy_dir(dd);
+		list_del(&dd->node.list);
+		free(dd);
+	}
+}
+
+void
+of_finish(void)
+{
+	DPAA_BUS_HWWARN(!alive, "Double-finish of device-tree driver!");
+
+	destroy_dir(&root_dir);
+	INIT_LIST_HEAD(&linear);
+	alive = 0;
+}
+
+static const struct dt_dir *
+next_linear(const struct dt_dir *f)
+{
+	if (f->linear.next == &linear)
+		return NULL;
+	return list_entry(f->linear.next, struct dt_dir, linear);
+}
+
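+/* A "compatible" property holds one or more NUL-separated strings;
+ * return non-zero if any of them matches the requested string.
+ */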
+static int
+check_compatible(const struct dt_file *f, const char *compatible)
+{
+	const char *c = (char *)f->buf;
+	unsigned int len, remains = f->len;
+
+	while (remains) {
+		len = strlen(c);
+		if (!strcmp(c, compatible))
+			return 1;
+
+		if (remains < len + 1)
+			break;
+
+		c += (len + 1);
+		remains -= (len + 1);
+	}
+	return 0;
+}
+
+const struct device_node *
+of_find_compatible_node(const struct device_node *from,
+			const char *type __always_unused,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (list_empty(&linear))
+		return NULL;
+	if (!from)
+		d = list_entry(linear.next, struct dt_dir, linear);
+	else
+		d = node2dir(from);
+	for (d = next_linear(d); d && (!d->compatible ||
+				       !check_compatible(d->compatible,
+				       compatible));
+			d = next_linear(d))
+		;
+	if (d)
+		return &d->node.node;
+	return NULL;
+}
+
+const void *
+of_get_property(const struct device_node *from, const char *name,
+		size_t *lenp)
+{
+	const struct dt_dir *d;
+	const struct dt_file *f;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	d = node2dir(from);
+	list_for_each_entry(f, &d->files, node.list)
+		if (!strcmp(f->node.node.name, name)) {
+			if (lenp)
+				*lenp = f->len;
+			return f->buf;
+		}
+	return NULL;
+}
+
+bool
+of_device_is_available(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	d = node2dir(dev_node);
+	if (!d->status)
+		return true;
+	if (!strcmp((char *)d->status->buf, "okay"))
+		return true;
+	if (!strcmp((char *)d->status->buf, "ok"))
+		return true;
+	return false;
+}
+
+const struct device_node *
+of_find_node_by_phandle(phandle ph)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	list_for_each_entry(d, &linear, linear)
+		if (d->lphandle && (d->lphandle->len == 4) &&
+		    !memcmp(d->lphandle->buf, &ph, 4))
+			return &d->node.node;
+	return NULL;
+}
+
+const struct device_node *
+of_get_parent(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	d = node2dir(dev_node);
+	if (!d->parent)
+		return NULL;
+	return &d->parent->node.node;
+}
+
+const struct device_node *
+of_get_next_child(const struct device_node *dev_node,
+		  const struct device_node *prev)
+{
+	const struct dt_dir *p, *c;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	p = node2dir(dev_node);
+	if (prev) {
+		c = node2dir(prev);
+		DPAA_BUS_HWWARN((c->parent != p), "Parent/child mismatch");
+		if (c->parent != p)
+			return NULL;
+		if (c->node.list.next == &p->subdirs)
+			/* prev was the last child */
+			return NULL;
+		c = list_entry(c->node.list.next, struct dt_dir, node.list);
+		return &c->node.node;
+	}
+	/* Return first child */
+	if (list_empty(&p->subdirs))
+		return NULL;
+	c = list_entry(p->subdirs.next, struct dt_dir, node.list);
+	return &c->node.node;
+}
+
+uint32_t
+of_n_addr_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised");
+	if (!dev_node)
+		return OF_DEFAULT_NA;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->a_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->a_cells->buf[0];
+			assert(d->a_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NA;
+}
+
+uint32_t
+of_n_size_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		return OF_DEFAULT_NS;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->s_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->s_cells->buf[0];
+			assert(d->s_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NS;
+}
+
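+/* Return a pointer to the idx'th (address, size) tuple of the "reg"
+ * property, with the size cells decoded into *size in host order.
+ */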
+const uint32_t *
+of_get_address(const struct device_node *dev_node, size_t idx,
+	       uint64_t *size, uint32_t *flags __rte_unused)
+{
+	const struct dt_dir *d;
+	const unsigned char *buf;
+	uint32_t na = of_n_addr_cells(dev_node);
+	uint32_t ns = of_n_size_cells(dev_node);
+
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (!d->reg)
+		return NULL;
+	assert(d->reg->len % ((na + ns) * 4) == 0);
+	assert(d->reg->len / ((na + ns) * 4) > (unsigned int) idx);
+	buf = (const unsigned char *)&d->reg->buf[0];
+	buf += (na + ns) * idx * 4;
+	if (size)
+		for (*size = 0; ns > 0; ns--, na++)
+			*size = (*size << 32) +
+				(((uint32_t)buf[4 * na] << 24) |
+				((uint32_t)buf[4 * na + 1] << 16) |
+				((uint32_t)buf[4 * na + 2] << 8) |
+				(uint32_t)buf[4 * na + 3]);
+	return (const uint32_t *)buf;
+}
+
+uint64_t
+of_translate_address(const struct device_node *dev_node,
+		     const uint32_t *addr)
+{
+	uint64_t phys_addr, tmp_addr;
+	const struct device_node *parent;
+	const uint32_t *ranges;
+	size_t rlen;
+	uint32_t na, pna;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	assert(dev_node != NULL);
+
+	na = of_n_addr_cells(dev_node);
+	phys_addr = of_read_number(addr, na);
+
+	dev_node = of_get_parent(dev_node);
+	if (!dev_node)
+		return 0;
+	else if (node2dir(dev_node) == &root_dir)
+		return phys_addr;
+
+	do {
+		pna = of_n_addr_cells(dev_node);
+		parent = of_get_parent(dev_node);
+		if (!parent)
+			return 0;
+
+		ranges = of_get_property(dev_node, "ranges", &rlen);
+		/* "ranges" property is missing. Translation breaks */
+		if (!ranges)
+			return 0;
+		/* "ranges" property is empty. Do 1:1 translation */
+		else if (rlen == 0)
+			continue;
+		else
+			tmp_addr = of_read_number(ranges + na, pna);
+
+		na = pna;
+		dev_node = parent;
+		phys_addr += tmp_addr;
+	} while (node2dir(parent) != &root_dir);
+
+	return phys_addr;
+}
+
+bool
+of_device_is_compatible(const struct device_node *dev_node,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (d->compatible && check_compatible(d->compatible, compatible))
+		return true;
+	return false;
+}
diff --git a/drivers/bus/dpaa/include/of.h b/drivers/bus/dpaa/include/of.h
new file mode 100644
index 0000000..2984b1e
--- /dev/null
+++ b/drivers/bus/dpaa/include/of.h
@@ -0,0 +1,190 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor, Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __OF_H
+#define	__OF_H
+
+#include <compat.h>
+
+#ifndef OF_INIT_DEFAULT_PATH
+#define OF_INIT_DEFAULT_PATH "/proc/device-tree"
+#endif
+
+#define OF_DEFAULT_NA 1
+#define OF_DEFAULT_NS 1
+
+#define OF_FILE_BUF_MAX 256
+
+/**
+ * Layout of Device Tree:
+ * dt_dir
+ *  |- dt_dir
+ *  |   |- dt_dir
+ *  |   |  |- dt_dir
+ *  |   |  |  |- dt_file
+ *  |   |  |  ``- dt_file
+ *  |   |  ``- dt_file
+ *  |   ``- dt_file
+ *  ``- dt_file
+ *
+ *  +------------------+
+ *  |dt_dir            |
+ *  |+----------------+|
+ *  ||dt_node         ||
+ *  ||+--------------+||
+ *  |||device_node   |||
+ *  ||+--------------+||
+ *  || list_dt_nodes  ||
+ *  |+----------------+|
+ *  | list of subdir   |
+ *  | list of files    |
+ *  +------------------+
+ */
+
+/**
+ * Description of a device node in the device tree.
+ */
+struct device_node {
+	char name[NAME_MAX];
+	char full_name[PATH_MAX];
+};
+
+/**
+ * List of device nodes available in a device tree layout
+ */
+struct dt_node {
+	struct device_node node; /**< Property of node */
+	int is_file; /**< FALSE==dir, TRUE==file */
+	struct list_head list; /**< Nodes within a parent subdir */
+};
+
+/**
+ * Types we use to represent directories and files
+ */
+struct dt_file;
+struct dt_dir {
+	struct dt_node node;
+	struct list_head subdirs;
+	struct list_head files;
+	struct list_head linear;
+	struct dt_dir *parent;
+	struct dt_file *compatible;
+	struct dt_file *status;
+	struct dt_file *lphandle;
+	struct dt_file *a_cells;
+	struct dt_file *s_cells;
+	struct dt_file *reg;
+};
+
+struct dt_file {
+	struct dt_node node;
+	struct dt_dir *parent;
+	ssize_t len;
+	uint64_t buf[OF_FILE_BUF_MAX >> 3];
+};
+
+const struct device_node *of_find_compatible_node(
+					const struct device_node *from,
+					const char *type __always_unused,
+					const char *compatible)
+	__attribute__((nonnull(3)));
+
+#define for_each_compatible_node(dev_node, type, compatible) \
+	for (dev_node = of_find_compatible_node(NULL, type, compatible); \
+		dev_node != NULL; \
+		dev_node = of_find_compatible_node(dev_node, type, compatible))
+
+const void *of_get_property(const struct device_node *from, const char *name,
+			    size_t *lenp) __attribute__((nonnull(2)));
+bool of_device_is_available(const struct device_node *dev_node);
+
+const struct device_node *of_find_node_by_phandle(phandle ph);
+
+const struct device_node *of_get_parent(const struct device_node *dev_node);
+
+const struct device_node *of_get_next_child(const struct device_node *dev_node,
+					    const struct device_node *prev);
+
+#define for_each_child_node(parent, child) \
+	for (child = of_get_next_child(parent, NULL); child != NULL; \
+			child = of_get_next_child(parent, child))
+
+uint32_t of_n_addr_cells(const struct device_node *dev_node);
+uint32_t of_n_size_cells(const struct device_node *dev_node);
+
+const uint32_t *of_get_address(const struct device_node *dev_node, size_t idx,
+			       uint64_t *size, uint32_t *flags);
+
+uint64_t of_translate_address(const struct device_node *dev_node,
+			      const u32 *addr) __attribute__((nonnull));
+
+bool of_device_is_compatible(const struct device_node *dev_node,
+			     const char *compatible);
+
+/* of_init() must be called prior to initialisation or use of any driver
+ * subsystem that is device-tree-dependent. Eg. Qman/Bman, config layers, etc.
+ * The path should usually be "/proc/device-tree".
+ */
+int of_init_path(const char *dt_path);
+
+/* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
+ * full reload is desired without a process exit.
+ */
+void of_finish(void);
+
+/* Use of this wrapper is recommended. */
+static inline int of_init(void)
+{
+	return of_init_path(OF_INIT_DEFAULT_PATH);
+}
+
+/* Read a numeric property according to its size and return it as a 64-bit
+ * value.
+ */
+static inline uint64_t of_read_number(const __be32 *cell, int size)
+{
+	uint64_t r = 0;
+
+	while (size--)
+		r = (r << 32) | be32toh(*(cell++));
+	return r;
+}
+
+#endif	/*  __OF_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 05/41] bus/dpaa: introducing FMan configurations
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (3 preceding siblings ...)
  2017-09-09 11:20       ` [PATCH v4 04/41] bus/dpaa: add OF parser for device scanning Shreyansh Jain
@ 2017-09-09 11:20       ` Shreyansh Jain
  2017-09-18 14:50         ` Ferruh Yigit
  2017-09-09 11:20       ` [PATCH v4 06/41] bus/dpaa: add FMan hardware operations Shreyansh Jain
                         ` (38 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:20 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

FMan, or Frame Manager, inspects traffic and splits it into queues on
ingress. It is also responsible for directing traffic onto queues on
egress.

This patch introduces the FMan configuration interfaces. This layer is
used by the Bus driver for configuring the hardware block.
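
As a sketch only (illustrative, using netcfg_acquire() from the layer
added below):

	struct netcfg_info *cfg = netcfg_acquire();

	if (!cfg)
		return -ENODEV;
	/* cfg->num_ethports and cfg->port_cfg[] now describe the
	 * detected FMan ports.
	 */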

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   2 +
 drivers/bus/dpaa/base/fman/fman.c         | 611 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c | 214 +++++++++++
 drivers/bus/dpaa/include/fman.h           | 458 ++++++++++++++++++++++
 drivers/bus/dpaa/include/netcfg.h         |  96 +++++
 5 files changed, 1381 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 488e263..4b1715d 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -64,6 +64,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/fman.c \
 	base/fman/of.c \
+	base/fman/netcfg_layer.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
new file mode 100644
index 0000000..2c6029e
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -0,0 +1,611 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+
+#include <rte_malloc.h>
+
+/* This header declares the driver interface we implement */
+#include <fman.h>
+#include <of.h>
+#include <rte_dpaa_logs.h>
+
+#define QMI_PORT_REGS_OFFSET		0x400
+
+/* CCSR map address to access ccsr based register */
+void *fman_ccsr_map;
+/* fman version info */
+u16 fman_ip_rev;
+static int get_once;
+u32 fman_dealloc_bufs_mask_hi;
+u32 fman_dealloc_bufs_mask_lo;
+
+int fman_ccsr_map_fd = -1;
+static COMPAT_LIST_HEAD(__ifs);
+
+/* This is the (const) global variable that callers have read-only access to.
+ * Internally, we have read-write access directly to __ifs.
+ */
+const struct list_head *fman_if_list = &__ifs;
+
+static void
+if_destructor(struct __fman_if *__if)
+{
+	struct fman_if_bpool *bp, *tmpbp;
+
+	if (__if->__if.mac_type == fman_offline)
+		goto cleanup;
+
+	list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+cleanup:
+	rte_free(__if);
+}
+
+static int
+fman_get_ip_rev(const struct device_node *fman_node)
+{
+	const uint32_t *fman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	uint32_t ip_rev_1;
+	int _errno;
+
+	fman_addr = of_get_address(fman_node, 0, &regs_size, NULL);
+	if (!fman_addr) {
+		pr_err("of_get_address cannot return fman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(fman_node, fman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+	fman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, fman_ccsr_map_fd, phys_addr);
+	if (fman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map FMan ccsr base");
+		return -EINVAL;
+	}
+
+	ip_rev_1 = in_be32(fman_ccsr_map + FMAN_IP_REV_1);
+	fman_ip_rev = (ip_rev_1 & FMAN_IP_REV_1_MAJOR_MASK) >>
+			FMAN_IP_REV_1_MAJOR_SHIFT;
+
+	_errno = munmap(fman_ccsr_map, regs_size);
+	if (_errno)
+		pr_err("munmap() of FMan ccsr failed");
+
+	return 0;
+}
+
+static int
+fman_get_mac_index(uint64_t regs_addr_host, uint8_t *mac_idx)
+{
+	int ret = 0;
+
+	/*
+	 * MAC1 : E_0000h
+	 * MAC2 : E_2000h
+	 * MAC3 : E_4000h
+	 * MAC4 : E_6000h
+	 * MAC5 : E_8000h
+	 * MAC6 : E_A000h
+	 * MAC7 : E_C000h
+	 * MAC8 : E_E000h
+	 * MAC9 : F_0000h
+	 * MAC10: F_2000h
+	 */
+	switch (regs_addr_host) {
+	case 0xE0000:
+		*mac_idx = 1;
+		break;
+	case 0xE2000:
+		*mac_idx = 2;
+		break;
+	case 0xE4000:
+		*mac_idx = 3;
+		break;
+	case 0xE6000:
+		*mac_idx = 4;
+		break;
+	case 0xE8000:
+		*mac_idx = 5;
+		break;
+	case 0xEA000:
+		*mac_idx = 6;
+		break;
+	case 0xEC000:
+		*mac_idx = 7;
+		break;
+	case 0xEE000:
+		*mac_idx = 8;
+		break;
+	case 0xF0000:
+		*mac_idx = 9;
+		break;
+	case 0xF2000:
+		*mac_idx = 10;
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+fman_if_init(const struct device_node *dpa_node)
+{
+	const char *rprop, *mprop;
+	uint64_t phys_addr;
+	struct __fman_if *__if;
+	struct fman_if_bpool *bpool;
+
+	const phandle *mac_phandle, *ports_phandle, *pools_phandle;
+	const phandle *tx_channel_id = NULL, *mac_addr, *cell_idx;
+	const phandle *rx_phandle, *tx_phandle;
+	uint64_t tx_phandle_host[4] = {0};
+	uint64_t rx_phandle_host[4] = {0};
+	uint64_t regs_addr_host = 0;
+	uint64_t cell_idx_host = 0;
+
+	const struct device_node *mac_node = NULL, *tx_node;
+	const struct device_node *pool_node, *fman_node, *rx_node;
+	const uint32_t *regs_addr = NULL;
+	const char *mname, *fname;
+	const char *dname = dpa_node->full_name;
+	size_t lenp;
+	int _errno;
+	const char *char_prop;
+	uint32_t na;
+
+	if (of_device_is_available(dpa_node) == false)
+		return 0;
+
+	rprop = "fsl,qman-frame-queues-rx";
+	mprop = "fsl,fman-mac";
+
+	/* Allocate an object for this network interface */
+	__if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE);
+	if (!__if) {
+		FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*__if));
+		goto err;
+	}
+	memset(__if, 0, sizeof(*__if));
+	INIT_LIST_HEAD(&__if->__if.bpool_list);
+	strncpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
+	__if->node_path[PATH_MAX - 1] = '\0';
+
+	/* Obtain the MAC node used by this interface except macless */
+	mac_phandle = of_get_property(dpa_node, mprop, &lenp);
+	if (!mac_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no %s\n", dname, mprop);
+		goto err;
+	}
+	assert(lenp == sizeof(phandle));
+	mac_node = of_find_node_by_phandle(*mac_phandle);
+	if (!mac_node) {
+		FMAN_ERR(-ENXIO, "%s: bad 'fsl,fman-mac'\n", dname);
+		goto err;
+	}
+	mname = mac_node->full_name;
+
+	/* Map the CCSR regs for the MAC node */
+	regs_addr = of_get_address(mac_node, 0, &__if->regs_size, NULL);
+	if (!regs_addr) {
+		FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+		goto err;
+	}
+	phys_addr = of_translate_address(mac_node, regs_addr);
+	if (!phys_addr) {
+		FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+			 mname, regs_addr);
+		goto err;
+	}
+	__if->ccsr_map = mmap(NULL, __if->regs_size,
+			      PROT_READ | PROT_WRITE, MAP_SHARED,
+			      fman_ccsr_map_fd, phys_addr);
+	if (__if->ccsr_map == MAP_FAILED) {
+		FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+		goto err;
+	}
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	regs_addr_host = of_read_number(regs_addr, na);
+
+
+	fman_node = of_get_parent(mac_node);
+	na = of_n_addr_cells(mac_node);
+	if (!fman_node) {
+		FMAN_ERR(-ENXIO, "of_get_parent(%s)\n", mname);
+		goto err;
+	}
+	fname = fman_node->full_name;
+	cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+	if (!cell_idx) {
+		FMAN_ERR(-ENXIO, "%s: no cell-index\n", fname);
+		goto err;
+	}
+	assert(lenp == sizeof(*cell_idx));
+	cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+	__if->__if.fman_idx = cell_idx_host;
+	if (!get_once) {
+		_errno = fman_get_ip_rev(fman_node);
+		if (_errno) {
+			FMAN_ERR(-ENXIO, "%s: ip_rev is not available\n",
+				 fname);
+			goto err;
+		}
+	}
+
+	if (fman_ip_rev >= FMAN_V3) {
+		/*
+		 * Set A2V, OVOM, EBD bits in contextA to allow external
+		 * buffer deallocation by fman.
+		 */
+		fman_dealloc_bufs_mask_hi = FMAN_V3_CONTEXTA_EN_A2V |
+						FMAN_V3_CONTEXTA_EN_OVOM;
+		fman_dealloc_bufs_mask_lo = FMAN_V3_CONTEXTA_EN_EBD;
+	} else {
+		fman_dealloc_bufs_mask_hi = 0;
+		fman_dealloc_bufs_mask_lo = 0;
+	}
+	/* Is the MAC node 1G, 10G? */
+	__if->__if.is_memac = 0;
+
+	if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
+		__if->__if.mac_type = fman_mac_1g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
+		__if->__if.mac_type = fman_mac_10g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-memac")) {
+		__if->__if.is_memac = 1;
+		char_prop = of_get_property(mac_node, "phy-connection-type",
+					    NULL);
+		if (!char_prop) {
+			printf("memac: unknown MII type, assuming 1G\n");
+			/* Right now forcing memac to 1g in case of error*/
+			__if->__if.mac_type = fman_mac_1g;
+		} else {
+			if (strstr(char_prop, "sgmii"))
+				__if->__if.mac_type = fman_mac_1g;
+			else if (strstr(char_prop, "rgmii")) {
+				__if->__if.mac_type = fman_mac_1g;
+				__if->__if.is_rgmii = 1;
+			} else if (strstr(char_prop, "xgmii"))
+				__if->__if.mac_type = fman_mac_10g;
+		}
+	} else {
+		FMAN_ERR(-EINVAL, "%s: unknown MAC type\n", mname);
+		goto err;
+	}
+
+	/*
+	 * For MAC ports, we cannot rely on cell-index. In
+	 * T2080, two of the 10G ports on single FMAN have same
+	 * duplicate cell-indexes as the other two 10G ports on
+	 * same FMAN. Hence, we now rely upon addresses of the
+	 * ports from device tree to deduce the index.
+	 */
+
+	_errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
+	if (_errno) {
+		FMAN_ERR(-EINVAL, "Invalid register address: 0x%" PRIx64,
+			 regs_addr_host);
+		goto err;
+	}
+
+	/* Extract the MAC address for private and shared interfaces */
+	mac_addr = of_get_property(mac_node, "local-mac-address",
+				   &lenp);
+	if (!mac_addr) {
+		FMAN_ERR(-EINVAL, "%s: no local-mac-address\n",
+			 mname);
+		goto err;
+	}
+	memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
+
+	/* Extract the Tx port (it's the second of the two port handles)
+	 * and get its channel ID
+	 */
+	ports_phandle = of_get_property(mac_node, "fsl,port-handles",
+					&lenp);
+	if (!ports_phandle)
+		ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+						&lenp);
+	if (!ports_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,port-handles\n",
+			 mname);
+		goto err;
+	}
+	assert(lenp == (2 * sizeof(phandle)));
+	tx_node = of_find_node_by_phandle(ports_phandle[1]);
+	if (!tx_node) {
+		FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]\n", mname);
+		goto err;
+	}
+	/* Extract the channel ID (from tx-port-handle) */
+	tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
+					&lenp);
+	if (!tx_channel_id) {
+		FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id\n",
+			 tx_node->full_name);
+		goto err;
+	}
+
+	rx_node = of_find_node_by_phandle(ports_phandle[0]);
+	if (!rx_node) {
+		FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]\n", mname);
+		goto err;
+	}
+	regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+	if (!regs_addr) {
+		FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+		goto err;
+	}
+	phys_addr = of_translate_address(rx_node, regs_addr);
+	if (!phys_addr) {
+		FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+			 mname, regs_addr);
+		goto err;
+	}
+	__if->bmi_map = mmap(NULL, __if->regs_size,
+				 PROT_READ | PROT_WRITE, MAP_SHARED,
+				 fman_ccsr_map_fd, phys_addr);
+	if (__if->bmi_map == MAP_FAILED) {
+		FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+		goto err;
+	}
+
+	/* No channel ID for MAC-less */
+	assert(lenp == sizeof(*tx_channel_id));
+	na = of_n_addr_cells(mac_node);
+	__if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+	/* Extract the Rx FQIDs. (Note, the device representation is silly,
+	 * there are "counts" that must always be 1.)
+	 */
+	rx_phandle = of_get_property(dpa_node, rprop, &lenp);
+	if (!rx_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-rx\n", dname);
+		goto err;
+	}
+
+	assert(lenp == (4 * sizeof(phandle)));
+
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
+	rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
+	rx_phandle_host[2] = of_read_number(&rx_phandle[2], na);
+	rx_phandle_host[3] = of_read_number(&rx_phandle[3], na);
+
+	assert((rx_phandle_host[1] == 1) && (rx_phandle_host[3] == 1));
+	__if->__if.fqid_rx_err = rx_phandle_host[0];
+	__if->__if.fqid_rx_def = rx_phandle_host[2];
+
+	/* Extract the Tx FQIDs */
+	tx_phandle = of_get_property(dpa_node,
+				     "fsl,qman-frame-queues-tx", &lenp);
+	if (!tx_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-tx\n", dname);
+		goto err;
+	}
+
+	assert(lenp == (4 * sizeof(phandle)));
+	/*TODO: Fix for other cases also */
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	tx_phandle_host[0] = of_read_number(&tx_phandle[0], na);
+	tx_phandle_host[1] = of_read_number(&tx_phandle[1], na);
+	tx_phandle_host[2] = of_read_number(&tx_phandle[2], na);
+	tx_phandle_host[3] = of_read_number(&tx_phandle[3], na);
+	assert((tx_phandle_host[1] == 1) && (tx_phandle_host[3] == 1));
+	__if->__if.fqid_tx_err = tx_phandle_host[0];
+	__if->__if.fqid_tx_confirm = tx_phandle_host[2];
+
+	/* Obtain the buffer pool nodes used by this interface */
+	pools_phandle = of_get_property(dpa_node, "fsl,bman-buffer-pools",
+					&lenp);
+	if (!pools_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,bman-buffer-pools\n", dname);
+		goto err;
+	}
+	/* For each pool, parse the corresponding node and add a pool object
+	 * to the interface's "bpool_list"
+	 */
+	assert(lenp && !(lenp % sizeof(phandle)));
+	while (lenp) {
+		size_t proplen;
+		const phandle *prop;
+		uint64_t bpid_host = 0;
+		uint64_t bpool_host[6] = {0};
+		const char *pname;
+		/* Allocate an object for the pool */
+		bpool = rte_malloc(NULL, sizeof(*bpool), RTE_CACHE_LINE_SIZE);
+		if (!bpool) {
+			FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*bpool));
+			goto err;
+		}
+		/* Find the pool node */
+		pool_node = of_find_node_by_phandle(*pools_phandle);
+		if (!pool_node) {
+			FMAN_ERR(-ENXIO, "%s: bad fsl,bman-buffer-pools\n",
+				 dname);
+			goto err;
+		}
+		pname = pool_node->full_name;
+		/* Extract the BPID property */
+		prop = of_get_property(pool_node, "fsl,bpid", &proplen);
+		if (!prop) {
+			FMAN_ERR(-EINVAL, "%s: no fsl,bpid\n", pname);
+			goto err;
+		}
+		assert(proplen == sizeof(*prop));
+		na = of_n_addr_cells(mac_node);
+		/* Get rid of endianness (issues).
+		 * Convert to host byte-order
+		 */
+		bpid_host = of_read_number(prop, na);
+		bpool->bpid = bpid_host;
+		/* Extract the cfg property (count/size/addr). "fsl,bpool-cfg"
+		 * indicates for the Bman driver to seed the pool.
+		 * "fsl,bpool-ethernet-cfg" is used by the network driver. The
+		 * two are mutually exclusive, so check for either of them.
+		 */
+		prop = of_get_property(pool_node, "fsl,bpool-cfg",
+				       &proplen);
+		if (!prop)
+			prop = of_get_property(pool_node,
+					       "fsl,bpool-ethernet-cfg",
+					       &proplen);
+		if (!prop) {
+			/* It's OK for there to be no bpool-cfg */
+			bpool->count = bpool->size = bpool->addr = 0;
+		} else {
+			assert(proplen == (6 * sizeof(*prop)));
+			na = of_n_addr_cells(mac_node);
+			/* Get rid of endianness (issues).
+			 * Convert to host byte order
+			 */
+			bpool_host[0] = of_read_number(&prop[0], na);
+			bpool_host[1] = of_read_number(&prop[1], na);
+			bpool_host[2] = of_read_number(&prop[2], na);
+			bpool_host[3] = of_read_number(&prop[3], na);
+			bpool_host[4] = of_read_number(&prop[4], na);
+			bpool_host[5] = of_read_number(&prop[5], na);
+
+			bpool->count = ((uint64_t)bpool_host[0] << 32) |
+					bpool_host[1];
+			bpool->size = ((uint64_t)bpool_host[2] << 32) |
+					bpool_host[3];
+			bpool->addr = ((uint64_t)bpool_host[4] << 32) |
+					bpool_host[5];
+		}
+		/* Parsing of the pool is complete, add it to the interface
+		 * list.
+		 */
+		list_add_tail(&bpool->node, &__if->__if.bpool_list);
+		lenp -= sizeof(phandle);
+		pools_phandle++;
+	}
+
+	/* Parsing of the network interface is complete, add it to the list */
+	DPAA_BUS_LOG(DEBUG, "Found %s, Tx Channel = %x, FMAN = %x, "
+		    "Port ID = %x",
+		    dname, __if->__if.tx_channel_id, __if->__if.fman_idx,
+		    __if->__if.mac_idx);
+
+	list_add_tail(&__if->__if.node, &__ifs);
+	return 0;
+err:
+	if_destructor(__if);
+	return _errno;
+}
+
+int
+fman_init(void)
+{
+	const struct device_node *dpa_node;
+	int _errno;
+
+	/* If multiple dependencies try to initialise the Fman driver, don't
+	 * panic.
+	 */
+	if (fman_ccsr_map_fd != -1)
+		return 0;
+
+	fman_ccsr_map_fd = open(FMAN_DEVICE_PATH, O_RDWR);
+	if (unlikely(fman_ccsr_map_fd < 0)) {
+		DPAA_BUS_LOG(ERR, "Unable to open (%s)", FMAN_DEVICE_PATH);
+		return fman_ccsr_map_fd;
+	}
+
+	for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-init") {
+		_errno = fman_if_init(dpa_node);
+		if (_errno) {
+			FMAN_ERR(_errno, "if_init(%s)\n", dpa_node->full_name);
+			goto err;
+		}
+	}
+
+	return 0;
+err:
+	fman_finish();
+	return _errno;
+}
+
+void
+fman_finish(void)
+{
+	struct __fman_if *__if, *tmpif;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	list_for_each_entry_safe(__if, tmpif, &__ifs, __if.node) {
+		int _errno;
+
+		/* disable Rx and Tx */
+		if ((__if->__if.mac_type == fman_mac_1g) &&
+		    (!__if->__if.is_memac))
+			out_be32(__if->ccsr_map + 0x100,
+				 in_be32(__if->ccsr_map + 0x100) & ~(u32)0x5);
+		else
+			out_be32(__if->ccsr_map + 8,
+				 in_be32(__if->ccsr_map + 8) & ~(u32)3);
+		/* release the mapping */
+		_errno = munmap(__if->ccsr_map, __if->regs_size);
+		if (unlikely(_errno < 0))
+			fprintf(stderr, "%s:%hu:%s(): munmap() = %d (%s)\n",
+				__FILE__, __LINE__, __func__,
+				-errno, strerror(errno));
+		printf("Tearing down %s\n", __if->node_path);
+		list_del(&__if->__if.node);
+		rte_free(__if);
+	}
+
+	close(fman_ccsr_map_fd);
+	fman_ccsr_map_fd = -1;
+}
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
new file mode 100644
index 0000000..26cff84
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -0,0 +1,214 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <inttypes.h>
+#include <of.h>
+#include <net/if.h>
+#include <sys/ioctl.h>
+#include <error.h>
+#include <net/if_arp.h>
+#include <assert.h>
+#include <unistd.h>
+
+#include <rte_malloc.h>
+
+#include <rte_dpaa_logs.h>
+#include <netcfg.h>
+
+/* Structure containing information about all the interfaces given by
+ * the user on the command line.
+ */
+struct netcfg_interface *netcfg_interface;
+
+/* This data structure contains all configuration information related to
+ * usage of DPAA devices.
+ */
+struct netcfg_info *netcfg;
+/* fd of the socket used for making ioctl requests to disable/enable shared
+ * interfaces.
+ */
+static int skfd = -1;
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dump_netcfg(struct netcfg_info *cfg_ptr)
+{
+	int i;
+
+	printf("..........  DPAA Configuration  ..........\n\n");
+
+	/* Network interfaces */
+	printf("Network interfaces: %d\n", cfg_ptr->num_ethports);
+	for (i = 0; i < cfg_ptr->num_ethports; i++) {
+		struct fman_if_bpool *bpool;
+		struct fm_eth_port_cfg *p_cfg = &cfg_ptr->port_cfg[i];
+		struct fman_if *__if = p_cfg->fman_if;
+
+		printf("\n+ Fman %d, MAC %d (%s);\n",
+		       __if->fman_idx, __if->mac_idx,
+		       (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+
+		printf("\tmac_addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
+		       (&__if->mac_addr)->addr_bytes[0],
+		       (&__if->mac_addr)->addr_bytes[1],
+		       (&__if->mac_addr)->addr_bytes[2],
+		       (&__if->mac_addr)->addr_bytes[3],
+		       (&__if->mac_addr)->addr_bytes[4],
+		       (&__if->mac_addr)->addr_bytes[5]);
+
+		printf("\ttx_channel_id: 0x%02x\n",
+		       __if->tx_channel_id);
+
+		printf("\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
+		printf("\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
+
+		printf("\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
+		printf("\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
+		fman_if_for_each_bpool(bpool, __if)
+			printf("\tbuffer pool: (bpid=%d, count=%"PRId64
+			       " size=%"PRId64", addr=0x%"PRIx64")\n",
+			       bpool->bpid, bpool->count, bpool->size,
+			       bpool->addr);
+	}
+}
+#endif /* RTE_LIBRTE_DPAA_DEBUG_DRIVER */
+
+static inline int
+get_num_netcfg_interfaces(char *str)
+{
+	char *pch;
+	uint8_t count = 0;
+
+	if (str == NULL)
+		return -EINVAL;
+	pch = strtok(str, ",");
+	while (pch != NULL) {
+		count++;
+		pch = strtok(NULL, ",");
+	}
+	return count;
+}
+
+struct netcfg_info *
+netcfg_acquire(void)
+{
+	struct fman_if *__if;
+	int _errno, idx = 0;
+	uint8_t num_ports = 0;
+	uint8_t num_cfg_ports = 0;
+	size_t size;
+
+	/* Extract the DPAA configuration from the fman driver, and the FMC
+	 * configuration for any command-line interfaces.
+	 */
+
+	/* Open a basic socket to enable/disable shared
+	 * interfaces.
+	 */
+	skfd = socket(AF_PACKET, SOCK_RAW, 0);
+	if (unlikely(skfd < 0)) {
+		error(0, errno, "%s(): socket(SOCK_RAW)", __func__);
+		return NULL;
+	}
+
+	/* Initialise the Fman driver */
+	_errno = fman_init();
+	if (_errno) {
+		DPAA_BUS_LOG(ERR, "FMAN driver init failed (%d)", errno);
+		close(skfd);
+		skfd = -1;
+		return NULL;
+	}
+
+	/* Number of MAC ports */
+	list_for_each_entry(__if, fman_if_list, node)
+		num_ports++;
+
+	if (!num_ports) {
+		DPAA_BUS_LOG(ERR, "FMAN ports not available");
+		goto error;
+	}
+	/* Allocate space for all enabled mac ports */
+	size = sizeof(*netcfg) +
+		(num_ports * sizeof(struct fm_eth_port_cfg));
+
+	netcfg = calloc(1, size);
+	if (unlikely(netcfg == NULL)) {
+		DPAA_BUS_LOG(ERR, "Unable to allocat mem for netcfg");
+		goto error;
+	}
+
+	netcfg->num_ethports = num_ports;
+
+	list_for_each_entry(__if, fman_if_list, node) {
+		struct fm_eth_port_cfg *cfg = &netcfg->port_cfg[idx];
+		/* Hook in the fman driver interface */
+		cfg->fman_if = __if;
+		cfg->rx_def = __if->fqid_rx_def;
+		num_cfg_ports++;
+		idx++;
+	}
+
+	if (!num_cfg_ports) {
+		DPAA_BUS_LOG(ERR, "No FMAN ports found");
+		goto error;
+	} else if (num_ports != num_cfg_ports) {
+		netcfg->num_ethports = num_cfg_ports;
+	}
+
+	return netcfg;
+
+error:
+	if (netcfg) {
+		free(netcfg);
+		netcfg = NULL;
+	}
+	/* Don't leak the raw socket fd on failure */
+	close(skfd);
+	skfd = -1;
+
+	return NULL;
+}
+
+void
+netcfg_release(struct netcfg_info *cfg_ptr)
+{
+	free(cfg_ptr);
+	/* Close socket for shared interfaces */
+	if (skfd >= 0) {
+		close(skfd);
+		skfd = -1;
+	}
+}
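
For illustration, a minimal sketch of how a caller might drive this
configuration layer; the helper name example_probe() is hypothetical and
error handling is trimmed:

    #include <stdio.h>
    #include <netcfg.h>

    static int example_probe(void)
    {
            struct netcfg_info *cfg = netcfg_acquire();
            int i;

            if (!cfg)
                    return -1;      /* socket or fman_init() failure */

            /* walk the per-port configuration filled in by netcfg_acquire() */
            for (i = 0; i < cfg->num_ethports; i++) {
                    struct fm_eth_port_cfg *p = &cfg->port_cfg[i];

                    printf("port %d: fman %d mac %d rx_def 0x%x\n", i,
                           p->fman_if->fman_idx, p->fman_if->mac_idx,
                           p->rx_def);
            }

            netcfg_release(cfg);    /* frees cfg and closes the raw socket */
            return 0;
    }
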
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
new file mode 100644
index 0000000..9890e09
--- /dev/null
+++ b/drivers/bus/dpaa/include/fman.h
@@ -0,0 +1,458 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FMAN_H
+#define __FMAN_H
+
+#include <stdbool.h>
+#include <net/if.h>
+
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+
+#include <compat.h>
+
+#ifndef FMAN_DEVICE_PATH
+#define FMAN_DEVICE_PATH "/dev/mem"
+#endif
+
+#define MEMAC_NUM_OF_PADDRS 7 /* Num of additional exact match MAC adr regs */
+
+/* Control and Configuration Register (COMMAND_CONFIG) for MEMAC */
+#define CMD_CFG_LOOPBACK_EN	0x00000400
+/**< 21 XGMII/GMII loopback enable */
+#define CMD_CFG_PROMIS_EN	0x00000010
+/**< 27 Promiscuous operation enable */
+#define CMD_CFG_PAUSE_IGNORE	0x00000100
+/**< 23 Ignore Pause frame quanta */
+
+/* Statistics Configuration Register (STATN_CONFIG) */
+#define STATS_CFG_CLR           0x00000004
+/**< 29 Reset all counters */
+#define STATS_CFG_CLR_ON_RD     0x00000002
+/**< 30 Clear on read */
+#define STATS_CFG_SATURATE      0x00000001
+/**< 31 Saturate at the maximum val */
+
+/**< Max receive frame length mask */
+#define MAXFRM_SIZE_MEMAC	0x00007fe0
+#define MAXFRM_RX_MASK		0x0000ffff
+
+/**< Interface Mode Register for MEMAC */
+#define IF_MODE_RLP 0x00000820
+
+/**< Pool Limits */
+#define FMAN_PORT_MAX_EXT_POOLS_NUM	8
+#define FMAN_PORT_OBS_EXT_POOLS_NUM	2
+
+#define FMAN_PORT_CG_MAP_NUM		8
+#define FMAN_PORT_PRS_RESULT_WORDS_NUM	8
+#define FMAN_PORT_BMI_FIFO_UNITS	0x100
+#define FMAN_PORT_IC_OFFSET_UNITS	0x10
+
+#define FMAN_ENABLE_BPOOL_DEPLETION	0xF00000F0
+
+#define HASH_CTRL_MCAST_EN	0x00000100
+#define GROUP_ADDRESS		0x0000010000000000LL
+#define HASH_CTRL_ADDR_MASK	0x0000003F
+
+/* Forward declarations of the FMAN interface and Bpool structures */
+struct __fman_if;
+struct fman_if_bpool;
+/* Lists of fman interfaces and bpools */
+TAILQ_HEAD(rte_fman_if_list, __fman_if);
+
+/* Represents the different flavours of network interface */
+enum fman_mac_type {
+	fman_offline = 0,
+	fman_mac_1g,
+	fman_mac_10g,
+};
+
+struct mac_addr {
+	uint32_t   mac_addr_l;	/**< Lower 32 bits of 48-bit MAC address */
+	uint32_t   mac_addr_u;	/**< Upper 16 bits of 48-bit MAC address */
+};
+
+struct memac_regs {
+	/* General Control and Status */
+	uint32_t res0000[2];
+	uint32_t command_config;	/**< 0x008 Ctrl and cfg */
+	struct mac_addr mac_addr0;	/**< 0x00C-0x010 MAC_ADDR_0...1 */
+	uint32_t maxfrm;		/**< 0x014 Max frame length */
+	uint32_t res0018[5];
+	uint32_t hashtable_ctrl;	/**< 0x02C Hash table control */
+	uint32_t res0030[4];
+	uint32_t ievent;		/**< 0x040 Interrupt event */
+	uint32_t tx_ipg_length;
+	/**< 0x044 Transmitter inter-packet-gap */
+	uint32_t res0048;
+	uint32_t imask;			/**< 0x04C Interrupt mask */
+	uint32_t res0050;
+	uint32_t pause_quanta[4];	/**< 0x054 Pause quanta */
+	uint32_t pause_thresh[4];	/**< 0x064 Pause quanta threshold */
+	uint32_t rx_pause_status;	/**< 0x074 Receive pause status */
+	uint32_t res0078[2];
+	struct mac_addr mac_addr[MEMAC_NUM_OF_PADDRS];
+	/**< 0x80-0x0B4 mac padr */
+	uint32_t lpwake_timer;
+	/**< 0x0B8 Low Power Wakeup Timer */
+	uint32_t sleep_timer;
+	/**< 0x0BC Transmit EEE Low Power Timer */
+	uint32_t res00c0[8];
+	uint32_t statn_config;
+	/**< 0x0E0 Statistics configuration */
+	uint32_t res00e4[7];
+	/* Rx Statistics Counter */
+	uint32_t reoct_l;		/**<Rx Eth Octets Counter */
+	uint32_t reoct_u;
+	uint32_t roct_l;		/**<Rx Octet Counters */
+	uint32_t roct_u;
+	uint32_t raln_l;		/**<Rx Alignment Error Counter */
+	uint32_t raln_u;
+	uint32_t rxpf_l;		/**<Rx valid Pause Frame */
+	uint32_t rxpf_u;
+	uint32_t rfrm_l;		/**<Rx Frame counter */
+	uint32_t rfrm_u;
+	uint32_t rfcs_l;		/**<Rx frame check seq error */
+	uint32_t rfcs_u;
+	uint32_t rvlan_l;		/**<Rx Vlan Frame Counter */
+	uint32_t rvlan_u;
+	uint32_t rerr_l;		/**<Rx Frame error */
+	uint32_t rerr_u;
+	uint32_t ruca_l;		/**<Rx Unicast */
+	uint32_t ruca_u;
+	uint32_t rmca_l;		/**<Rx Multicast */
+	uint32_t rmca_u;
+	uint32_t rbca_l;		/**<Rx Broadcast */
+	uint32_t rbca_u;
+	uint32_t rdrp_l;		/**<Rx Dropped Packets */
+	uint32_t rdrp_u;
+	uint32_t rpkt_l;		/**<Rx packet */
+	uint32_t rpkt_u;
+	uint32_t rund_l;		/**<Rx undersized packets */
+	uint32_t rund_u;
+	uint32_t r64_l;			/**<Rx 64 byte */
+	uint32_t r64_u;
+	uint32_t r127_l;
+	uint32_t r127_u;
+	uint32_t r255_l;
+	uint32_t r255_u;
+	uint32_t r511_l;
+	uint32_t r511_u;
+	uint32_t r1023_l;
+	uint32_t r1023_u;
+	uint32_t r1518_l;
+	uint32_t r1518_u;
+	uint32_t r1519x_l;
+	uint32_t r1519x_u;
+	uint32_t rovr_l;		/**<Rx oversized but good */
+	uint32_t rovr_u;
+	uint32_t rjbr_l;		/**<Rx oversized with bad csum */
+	uint32_t rjbr_u;
+	uint32_t rfrg_l;		/**<Rx fragment Packet */
+	uint32_t rfrg_u;
+	uint32_t rcnp_l;		/**<Rx control packets (0x8808) */
+	uint32_t rcnp_u;
+	uint32_t rdrntp_l;		/**<Rx dropped due to FIFO overflow */
+	uint32_t rdrntp_u;
+	uint32_t res01d0[12];
+	/* Tx Statistics Counter */
+	uint32_t teoct_l;		/**<Tx eth octets */
+	uint32_t teoct_u;
+	uint32_t toct_l;		/**<Tx Octets */
+	uint32_t toct_u;
+	uint32_t res0210[2];
+	uint32_t txpf_l;		/**<Tx valid pause frame */
+	uint32_t txpf_u;
+	uint32_t tfrm_l;		/**<Tx frame counter */
+	uint32_t tfrm_u;
+	uint32_t tfcs_l;		/**<Tx FCS error */
+	uint32_t tfcs_u;
+	uint32_t tvlan_l;		/**<Tx Vlan Frame */
+	uint32_t tvlan_u;
+	uint32_t terr_l;		/**<Tx frame error */
+	uint32_t terr_u;
+	uint32_t tuca_l;		/**<Tx Unicast */
+	uint32_t tuca_u;
+	uint32_t tmca_l;		/**<Tx Multicast */
+	uint32_t tmca_u;
+	uint32_t tbca_l;		/**<Tx Broadcast */
+	uint32_t tbca_u;
+	uint32_t res0258[2];
+	uint32_t tpkt_l;		/**<Tx Packet */
+	uint32_t tpkt_u;
+	uint32_t tund_l;		/**<Tx Undersized */
+	uint32_t tund_u;
+	uint32_t t64_l;
+	uint32_t t64_u;
+	uint32_t t127_l;
+	uint32_t t127_u;
+	uint32_t t255_l;
+	uint32_t t255_u;
+	uint32_t t511_l;
+	uint32_t t511_u;
+	uint32_t t1023_l;
+	uint32_t t1023_u;
+	uint32_t t1518_l;
+	uint32_t t1518_u;
+	uint32_t t1519x_l;
+	uint32_t t1519x_u;
+	uint32_t res02a8[6];
+	uint32_t tcnp_l;		/**<Tx Control Packet type - 0x8808 */
+	uint32_t tcnp_u;
+	uint32_t res02c8[14];
+	/* Line Interface Control */
+	uint32_t if_mode;		/**< 0x300 Interface Mode Control */
+	uint32_t if_status;		/**< 0x304 Interface Status */
+	uint32_t res0308[14];
+	/* HiGig/2 */
+	uint32_t hg_config;		/**< 0x340 Control and cfg */
+	uint32_t res0344[3];
+	uint32_t hg_pause_quanta;	/**< 0x350 Pause quanta */
+	uint32_t res0354[3];
+	uint32_t hg_pause_thresh;	/**< 0x360 Pause quanta threshold */
+	uint32_t res0364[3];
+	uint32_t hgrx_pause_status;	/**< 0x370 Receive pause status */
+	uint32_t hg_fifos_status;	/**< 0x374 fifos status */
+	uint32_t rhm;			/**< 0x378 rx messages counter */
+	uint32_t thm;			/**< 0x37C tx messages counter */
+};
+
+struct rx_bmi_regs {
+	uint32_t fmbm_rcfg;		/**< Rx Configuration */
+	uint32_t fmbm_rst;		/**< Rx Status */
+	uint32_t fmbm_rda;		/**< Rx DMA attributes*/
+	uint32_t fmbm_rfp;		/**< Rx FIFO Parameters*/
+	uint32_t fmbm_rfed;		/**< Rx Frame End Data*/
+	uint32_t fmbm_ricp;		/**< Rx Internal Context Parameters*/
+	uint32_t fmbm_rim;		/**< Rx Internal Buffer Margins*/
+	uint32_t fmbm_rebm;		/**< Rx External Buffer Margins*/
+	uint32_t fmbm_rfne;		/**< Rx Frame Next Engine*/
+	uint32_t fmbm_rfca;		/**< Rx Frame Command Attributes.*/
+	uint32_t fmbm_rfpne;		/**< Rx Frame Parser Next Engine*/
+	uint32_t fmbm_rpso;		/**< Rx Parse Start Offset*/
+	uint32_t fmbm_rpp;		/**< Rx Policer Profile  */
+	uint32_t fmbm_rccb;		/**< Rx Coarse Classification Base */
+	uint32_t fmbm_reth;		/**< Rx Excessive Threshold */
+	uint32_t reserved003c[1];	/**< (0x03C 0x03F) */
+	uint32_t fmbm_rprai[FMAN_PORT_PRS_RESULT_WORDS_NUM];
+					/**< Rx Parse Results Array Init*/
+	uint32_t fmbm_rfqid;		/**< Rx Frame Queue ID*/
+	uint32_t fmbm_refqid;		/**< Rx Error Frame Queue ID*/
+	uint32_t fmbm_rfsdm;		/**< Rx Frame Status Discard Mask*/
+	uint32_t fmbm_rfsem;		/**< Rx Frame Status Error Mask*/
+	uint32_t fmbm_rfene;		/**< Rx Frame Enqueue Next Engine */
+	uint32_t reserved0074[0x2];	/**< (0x074-0x07C)  */
+	uint32_t fmbm_rcmne;
+	/**< Rx Frame Continuous Mode Next Engine */
+	uint32_t reserved0080[0x20];/**< (0x080 0x0FF)  */
+	uint32_t fmbm_ebmpi[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Buffer Manager pool Information */
+	uint32_t fmbm_acnt[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Allocate Counter */
+	uint32_t reserved0130[8];
+					/**< 0x130/0x140 - 0x15F reserved */
+	uint32_t fmbm_rcgm[FMAN_PORT_CG_MAP_NUM];
+					/**< Congestion Group Map */
+	uint32_t fmbm_mpd;		/**< BM Pool Depletion */
+	uint32_t reserved0184[0x1F];	/**< (0x184 0x1FF) */
+	uint32_t fmbm_rstc;		/**< Rx Statistics Counters*/
+	uint32_t fmbm_rfrc;		/**< Rx Frame Counter*/
+	uint32_t fmbm_rfbc;		/**< Rx Bad Frames Counter*/
+	uint32_t fmbm_rlfc;		/**< Rx Large Frames Counter*/
+	uint32_t fmbm_rffc;		/**< Rx Filter Frames Counter*/
+	uint32_t fmbm_rfdc;		/**< Rx Frame Discard Counter*/
+	uint32_t fmbm_rfldec;		/**< Rx Frames List DMA Error Counter*/
+	uint32_t fmbm_rodc;		/**< Rx Out of Buffers Discard Counter*/
+	uint32_t fmbm_rbdc;		/**< Rx Buffers Deallocate Counter*/
+	uint32_t reserved0224[0x17];	/**< (0x224 0x27F) */
+	uint32_t fmbm_rpc;		/**< Rx Performance Counters*/
+	uint32_t fmbm_rpcp;		/**< Rx Performance Count Parameters*/
+	uint32_t fmbm_rccn;		/**< Rx Cycle Counter*/
+	uint32_t fmbm_rtuc;		/**< Rx Tasks Utilization Counter*/
+	uint32_t fmbm_rrquc;
+	/**< Rx Receive Queue Utilization cntr*/
+	uint32_t fmbm_rduc;		/**< Rx DMA Utilization Counter*/
+	uint32_t fmbm_rfuc;		/**< Rx FIFO Utilization Counter*/
+	uint32_t fmbm_rpac;		/**< Rx Pause Activation Counter*/
+	uint32_t reserved02a0[0x18];	/**< (0x2A0 0x2FF) */
+	uint32_t fmbm_rdbg;		/**< Rx Debug */
+};
+
+struct fman_port_qmi_regs {
+	uint32_t fmqm_pnc;		/**< PortID n Configuration Register */
+	uint32_t fmqm_pns;		/**< PortID n Status Register */
+	uint32_t fmqm_pnts;		/**< PortID n Task Status Register */
+	uint32_t reserved00c[4];	/**< 0xn00C - 0xn01B */
+	uint32_t fmqm_pnen;		/**< PortID n Enqueue NIA Register */
+	uint32_t fmqm_pnetfc;		/**< PortID n Enq Total Frame Counter */
+	uint32_t reserved024[2];	/**< 0xn024 - 0x02B */
+	uint32_t fmqm_pndn;		/**< PortID n Dequeue NIA Register */
+	uint32_t fmqm_pndc;		/**< PortID n Dequeue Config Register */
+	uint32_t fmqm_pndtfc;		/**< PortID n Dequeue tot Frame cntr */
+	uint32_t fmqm_pndfdc;		/**< PortID n Dequeue FQID Dflt Cntr */
+	uint32_t fmqm_pndcc;		/**< PortID n Dequeue Confirm Counter */
+};
+
+/* This struct exports parameters about an Fman network interface, determined
+ * from the device-tree.
+ */
+struct fman_if {
+	/* Which Fman this interface belongs to */
+	uint8_t fman_idx;
+	/* The type/speed of the interface */
+	enum fman_mac_type mac_type;
+	/* Boolean, set when mac type is memac */
+	uint8_t is_memac;
+	/* Boolean, set when PHY is RGMII */
+	uint8_t is_rgmii;
+	/* The index of this MAC (within the Fman it belongs to) */
+	uint8_t mac_idx;
+	/* The MAC address */
+	struct ether_addr mac_addr;
+	/* The Qman channel to schedule Tx FQs to */
+	u16 tx_channel_id;
+	/* The hard-coded FQIDs for this interface. Note: this doesn't cover
+	 * the PCD nor the "Rx default" FQIDs, which are configured via FMC
+	 * and its XML-based configuration.
+	 */
+	uint32_t fqid_rx_def;
+	uint32_t fqid_rx_err;
+	uint32_t fqid_tx_err;
+	uint32_t fqid_tx_confirm;
+
+	struct list_head bpool_list;
+	/* The node for linking this interface into "fman_if_list" */
+	struct list_head node;
+};
+
+/* This struct exposes parameters for buffer pools, extracted from the network
+ * interface settings in the device tree.
+ */
+struct fman_if_bpool {
+	uint32_t bpid;
+	uint64_t count;
+	uint64_t size;
+	uint64_t addr;
+	/* The node for linking this bpool into fman_if::bpool_list */
+	struct list_head node;
+};
+
+/* Internal Context transfer params - FMBM_RICP*/
+struct fman_if_ic_params {
+	/*IC offset in the packet buffer */
+	uint16_t iceof;
+	/*IC internal offset */
+	uint16_t iciof;
+	/*IC size to copy */
+	uint16_t icsz;
+};
+
+/* The exported "struct fman_if" type contains the subset of fields we want
+ * exposed. This struct is embedded in a larger "struct __fman_if" which
+ * contains the extra bits we *don't* want exposed.
+ */
+struct __fman_if {
+	struct fman_if __if;
+	char node_path[PATH_MAX];
+	uint64_t regs_size;
+	void *ccsr_map;
+	void *bmi_map;
+	void *qmi_map;
+	struct list_head node;
+};
+
+/* And this is the base list node that the interfaces are added to. (See
+ * fman_if_enable_all_rx() below for an example of its use.)
+ */
+extern const struct list_head *fman_if_list;
+
+extern int fman_ccsr_map_fd;
+
+/* To iterate the "bpool_list" for an interface. Eg;
+ *        struct fman_if *p = get_ptr_to_some_interface();
+ *        struct fman_if_bpool *bp;
+ *        printf("Interface uses following BPIDs;\n");
+ *        fman_if_for_each_bpool(bp, p) {
+ *            printf("    %d\n", bp->bpid);
+ *            [...]
+ *        }
+ */
+#define fman_if_for_each_bpool(bp, __if) \
+	list_for_each_entry(bp, &(__if)->bpool_list, node)
+
+#define FMAN_ERR(rc, fmt, args...) \
+	do { \
+		_errno = (rc); \
+		DPAA_BUS_LOG(ERR, fmt "(%d)", ##args, errno); \
+	} while (0)
+
+#define FMAN_IP_REV_1	0xC30C4
+#define FMAN_IP_REV_1_MAJOR_MASK 0x0000FF00
+#define FMAN_IP_REV_1_MAJOR_SHIFT 8
+#define FMAN_V3	0x06
+#define FMAN_V3_CONTEXTA_EN_A2V	0x10000000
+#define FMAN_V3_CONTEXTA_EN_OVOM	0x02000000
+#define FMAN_V3_CONTEXTA_EN_EBD	0x80000000
+#define FMAN_CONTEXTA_DIS_CHECKSUM	0x7ull
+#define FMAN_CONTEXTA_SET_OPCODE11 0x2000000b00000000
+extern u16 fman_ip_rev;
+extern u32 fman_dealloc_bufs_mask_hi;
+extern u32 fman_dealloc_bufs_mask_lo;
+
+/**
+ * Initialize the FMAN driver
+ *
+ * @args void
+ * @return
+ *	0 on success; a negative error code otherwise
+ */
+int fman_init(void);
+
+/**
+ * Teardown the FMAN driver
+ *
+ * @args void
+ * @return void
+ */
+void fman_finish(void);
+
+#endif	/* __FMAN_H */
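
As a sketch of the embedding pattern described in fman.h above: driver code
recovers its private wrapper from an exported handle with container_of()
(provided by <compat.h>, which fman.h already includes); the helper name
to_private() is hypothetical:

    #include <fman.h>

    /* map an exported "struct fman_if" back onto the private wrapper */
    static inline struct __fman_if *to_private(struct fman_if *p)
    {
            return container_of(p, struct __fman_if, __if);
    }
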
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
new file mode 100644
index 0000000..b77a678
--- /dev/null
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -0,0 +1,96 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __NETCFG_H
+#define __NETCFG_H
+
+#include <fman.h>
+#include <argp.h>
+
+/* Configuration information related to a specific ethernet port */
+struct fm_eth_port_cfg {
+	/** A list of PCD FQ ranges, obtained from FMC configuration */
+	struct list_head *list;
+	/** The "Rx default" FQID, obtained from FMC configuration */
+	uint32_t rx_def;
+	/** Other interface details are in the fman driver interface */
+	struct fman_if *fman_if;
+};
+
+struct netcfg_info {
+	uint8_t num_ethports;
+	/**< Number of ports */
+	struct fm_eth_port_cfg port_cfg[0];
+	/**< Variable structure array of size num_ethports */
+};
+
+struct interface_info {
+	char *name;
+	struct ether_addr mac_addr;
+	struct ether_addr peer_mac;
+	int mac_present;
+	int fman_enabled_mac_interface;
+};
+
+struct netcfg_interface {
+	uint8_t numof_netcfg_interface;
+	uint8_t numof_fman_enabled_macless;
+	struct interface_info interface_info[0];
+};
+
+/* Discovers the available Fman interfaces (via the fman driver) and returns
+ * the configuration information in newly allocated memory, or NULL on
+ * failure.
+ */
+struct netcfg_info *netcfg_acquire(void);
+
+/* cfg_ptr: configuration information pointer.
+ * Frees the resources allocated by the configuration layer.
+ */
+void netcfg_release(struct netcfg_info *cfg_ptr);
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* cfg_ptr: configuration information pointer.
+ * This function dumps configuration data to stdout.
+ */
+void dump_netcfg(struct netcfg_info *cfg_ptr);
+#endif
+
+#endif /* __NETCFG_H */
-- 
2.9.3


* [PATCH v4 06/41] bus/dpaa: add FMan hardware operations
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (4 preceding siblings ...)
  2017-09-09 11:20       ` [PATCH v4 05/41] bus/dpaa: introducing FMan configurations Shreyansh Jain
@ 2017-09-09 11:20       ` Shreyansh Jain
  2017-09-09 11:20       ` [PATCH v4 07/41] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
                         ` (37 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:20 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/fman/fman_hw.c      | 562 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_fman.h       | 174 +++++++++
 drivers/bus/dpaa/include/fsl_fman_crc64.h | 263 ++++++++++++++
 4 files changed, 1000 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 4b1715d..9f416fe 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -65,6 +65,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
+	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c
 
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
new file mode 100644
index 0000000..a7ca661
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -0,0 +1,562 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+#include <fman.h>
+/* These headers declare things about the Fman hardware itself (the format of
+ * status words and an inline implementation of CRC64). We include them only
+ * in order to instantiate the one global variable the CRC64 code depends on.
+ */
+#include <fsl_fman.h>
+#include <fsl_fman_crc64.h>
+
+/* Instantiate the global variable that the inline CRC64 implementation (in
+ * <fsl_fman_crc64.h>) depends on.
+ */
+DECLARE_FMAN_CRC64_TABLE();
+
+#define ETH_ADDR_TO_UINT64(eth_addr)                  \
+	(uint64_t)(((uint64_t)(eth_addr)[0] << 40) |   \
+	((uint64_t)(eth_addr)[1] << 32) |   \
+	((uint64_t)(eth_addr)[2] << 24) |   \
+	((uint64_t)(eth_addr)[3] << 16) |   \
+	((uint64_t)(eth_addr)[4] << 8) |    \
+	((uint64_t)(eth_addr)[5]))
+
+void
+fman_if_set_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i|HASH_CTRL_MCAST_EN);
+}
+
+void
+fman_if_reset_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i & ~HASH_CTRL_MCAST_EN);
+}
+
+static
+uint32_t get_mac_hash_code(uint64_t eth_addr)
+{
+	uint64_t	mask1, mask2;
+	uint32_t	xorVal = 0;
+	uint8_t		i, j;
+
+	for (i = 0; i < 6; i++) {
+		mask1 = eth_addr & (uint64_t)0x01;
+		eth_addr >>= 1;
+
+		for (j = 0; j < 7; j++) {
+			mask2 = eth_addr & (uint64_t)0x01;
+			mask1 ^= mask2;
+			eth_addr >>= 1;
+		}
+
+		xorVal |= (mask1 << (5 - i));
+	}
+
+	return xorVal;
+}
+
+int
+fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	uint64_t eth_addr;
+	void *hashtable_ctrl;
+	uint32_t hash;
+
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	eth_addr = ETH_ADDR_TO_UINT64(eth);
+
+	if (!(eth_addr & GROUP_ADDRESS))
+		return -1;
+
+	hash = get_mac_hash_code(eth_addr) & HASH_CTRL_ADDR_MASK;
+	hash = hash | HASH_CTRL_MCAST_EN;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	out_be32(hashtable_ctrl, hash);
+
+	return 0;
+}
+
+int
+fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *mac_reg =
+		&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
+	u32 val = in_be32(mac_reg);
+
+	eth[0] = (val & 0x000000ff) >> 0;
+	eth[1] = (val & 0x0000ff00) >> 8;
+	eth[2] = (val & 0x00ff0000) >> 16;
+	eth[3] = (val & 0xff000000) >> 24;
+
+	mac_reg =  &((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_u;
+	val = in_be32(mac_reg);
+
+	eth[4] = (val & 0x000000ff) >> 0;
+	eth[5] = (val & 0x0000ff00) >> 8;
+
+	return 0;
+}
+
+void
+fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	void *reg;
+
+	if (addr_num) {
+		reg = &((struct memac_regs *)m->ccsr_map)->
+				mac_addr[addr_num-1].mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+		out_be32(reg, 0x0);
+	} else {
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+		out_be32(reg, 0x0);
+	}
+}
+
+int
+fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+
+	void *reg;
+	u32 val;
+
+	memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_l;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+
+	val = (m->__if.mac_addr.addr_bytes[0] |
+	       (m->__if.mac_addr.addr_bytes[1] << 8) |
+	       (m->__if.mac_addr.addr_bytes[2] << 16) |
+	       (m->__if.mac_addr.addr_bytes[3] << 24));
+	out_be32(reg, val);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+
+	val = ((m->__if.mac_addr.addr_bytes[4] << 0) |
+	       (m->__if.mac_addr.addr_bytes[5] << 8));
+	out_be32(reg, val);
+
+	return 0;
+}
+
+void
+fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	u32 value = 0;
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Rx Ignore Pause Frames */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	if (enable)
+		value = in_be32(cmdcfg) | CMD_CFG_PAUSE_IGNORE;
+	else
+		value = in_be32(cmdcfg) & ~CMD_CFG_PAUSE_IGNORE;
+
+	out_be32(cmdcfg, value);
+}
+
+void
+fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	unsigned int *maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Max frame length */
+	maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+	out_be32(maxfrm, (MAXFRM_RX_MASK & max_frame_len));
+}
+
+void
+fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+
+	/* read received packet counters */
+	stats->ipackets = ((u64)in_be32(&regs->rfrm_u)) << 32 |
+			in_be32(&regs->rfrm_l);
+	stats->ibytes = ((u64)in_be32(&regs->roct_u)) << 32 |
+			in_be32(&regs->roct_l);
+	stats->ierrors = ((u64)in_be32(&regs->rerr_u)) << 32 |
+			in_be32(&regs->rerr_l);
+
+	/* read transmitted packet counters */
+	stats->opackets = ((u64)in_be32(&regs->tfrm_u)) << 32 |
+			in_be32(&regs->tfrm_l);
+	stats->obytes = ((u64)in_be32(&regs->toct_u)) << 32 |
+			in_be32(&regs->toct_l);
+	stats->oerrors = ((u64)in_be32(&regs->terr_u)) << 32 |
+			in_be32(&regs->terr_l);
+}
+
+void
+fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+	int i;
+	uint64_t base_offset = offsetof(struct memac_regs, reoct_l);
+
+	for (i = 0; i < n; i++)
+		value[i] = ((u64)in_be32((char *)regs
+				+ base_offset + 8 * i + 4)) << 32 |
+				((u64)in_be32((char *)regs
+				+ base_offset + 8 * i));
+}
+
+void
+fman_if_stats_reset(struct fman_if *p)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+	uint32_t tmp;
+
+	tmp = in_be32(&regs->statn_config);
+
+	tmp |= STATS_CFG_CLR;
+
+	out_be32(&regs->statn_config, tmp);
+
+	while (in_be32(&regs->statn_config) & STATS_CFG_CLR)
+		;
+}
+
+void
+fman_if_promiscuous_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_PROMIS_EN);
+}
+
+void
+fman_if_promiscuous_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Disable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) & (~CMD_CFG_PROMIS_EN));
+}
+
+void
+fman_if_enable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* enable Rx and Tx */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) | 3);
+}
+
+void
+fman_if_disable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* only disable Rx, not Tx */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) & ~(u32)2);
+}
+
+void
+fman_if_loopback_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) | IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_loopback_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+	/* Disable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) & ~IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) & ~CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
+		    int bpid, size_t bufsize)
+{
+	u32 fmbm_ebmpi;
+	u32 ebmpi_val_ace = 0xc0000000;
+	u32 ebmpi_mask = 0xffc00000;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ebmpi =
+	       in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0]);
+	fmbm_ebmpi = ebmpi_val_ace | (fmbm_ebmpi & ebmpi_mask) | (bpid << 16) |
+		     (bufsize);
+
+	out_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0],
+		 fmbm_ebmpi);
+}
+
+int
+fman_if_get_fc_quanta(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	return in_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0]);
+}
+
+int
+fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	out_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0],
+		 pause_quanta);
+	return 0;
+}
+
+int
+fman_if_get_fdoff(struct fman_if *fm_if)
+{
+	u32 fmbm_ricp;
+	int fdoff;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ricp =
+		   in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp);
+	/*iceof + icsz*/
+	fdoff = ((fmbm_ricp & iceof_mask) >> 16) * 16 +
+		(fmbm_ricp & icsz_mask) * 16;
+
+	return fdoff;
+}
+
+void
+fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_refqid =
+			&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_refqid;
+	out_be32(fmbm_refqid, err_fqid);
+}
+
+int
+fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	val = in_be32(fmbm_ricp);
+
+	icp->iceof = (val & iceof_mask) >> 12;
+	icp->iciof = (val & iciof_mask) >> 4;
+	icp->icsz = (val & icsz_mask) << 4;
+
+	return 0;
+}
+
+int
+fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	val |= (icp->iceof << 12) & iceof_mask;
+	val |= (icp->iciof << 4) & iciof_mask;
+	val |= (icp->icsz >> 4) & icsz_mask;
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	out_be32(fmbm_ricp, val);
+
+	return 0;
+}
+
+void
+fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rebm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_rebm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rebm;
+
+	out_be32(fmbm_rebm, in_be32(fmbm_rebm) | (fd_offset << 16));
+}
+
+void
+fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	out_be32(reg_maxfrm, (in_be32(reg_maxfrm) & 0xFFFF0000) | max_frm);
+}
+
+uint16_t
+fman_if_get_maxfrm(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	/* the frame length is held in the low 16 bits of MAXFRM */
+	return (in_be32(reg_maxfrm) & 0x0000FFFF);
+}
+
+void
+fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmqm_pndn;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmqm_pndn = &((struct fman_port_qmi_regs *)__if->qmi_map)->fmqm_pndn;
+
+	out_be32(fmqm_pndn, nia);
+}
+
+void
+fman_if_discard_rx_errors(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rfsdm, *fmbm_rfsem;
+
+	fmbm_rfsem = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsem;
+	out_be32(fmbm_rfsem, 0);
+
+	/* Configure the discard mask to drop error packets which have DMA
+	 * errors, frame size errors, header errors etc. The mask 0x010CE3F0
+	 * covers all such errors reported in the FD[STATUS] field.
+	 */
+	fmbm_rfsdm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsdm;
+	out_be32(fmbm_rfsdm, 0x010CE3F0);
+}
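
Taken together, a plausible bring-up sequence for one interface using the
operations above might look as follows; this is only a sketch (the PMD
patches later in this series perform the equivalent steps during device
configure/start), and example_port_start() is a hypothetical name:

    static void example_port_start(struct fman_if *p)
    {
            struct rte_eth_stats stats;

            fman_if_set_maxfrm(p, 1518);            /* cap frame size */
            fman_if_set_rx_ignore_pause_frames(p, true);
            fman_if_promiscuous_disable(p);
            fman_if_discard_rx_errors(p);           /* drop errored frames */
            fman_if_enable_rx(p);                   /* MAC Rx and Tx on */

            fman_if_stats_reset(p);
            fman_if_stats_get(p, &stats);           /* counters read back 0 */
    }
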
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
new file mode 100644
index 0000000..ac38082
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -0,0 +1,174 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_H
+#define __FSL_FMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* The status field in the FD is updated on the Rx side by FMAN with the
+ * following information. Refer to the field descriptions in the FMan Block
+ * Guide (FM BG).
+ */
+struct fm_status_t {
+	unsigned int reserved0:3;
+	unsigned int dcl4c:1; /* Don't Check L4 Checksum */
+	unsigned int reserved1:1;
+	unsigned int ufd:1; /* Unsupported Format */
+	unsigned int lge:1; /* Length Error */
+	unsigned int dme:1; /* DMA Error */
+
+	unsigned int reserved2:4;
+	unsigned int fpe:1; /* Frame physical Error */
+	unsigned int fse:1; /* Frame Size Error */
+	unsigned int dis:1; /* Discard by Classification */
+	unsigned int reserved3:1;
+
+	unsigned int eof:1; /* Key Extraction goes out of frame */
+	unsigned int nss:1; /* No Scheme selected */
+	unsigned int kso:1; /* Key Size Overflow */
+	unsigned int reserved4:1;
+	unsigned int fcl:2; /* Frame Color */
+	unsigned int ipp:1; /* Illegal Policer Profile Selected */
+	unsigned int flm:1; /* Frame Length Mismatch */
+	unsigned int pte:1; /* Parser Timeout */
+	unsigned int isp:1; /* Invalid Soft Parser Instruction */
+	unsigned int phe:1; /* Header Error during parsing */
+	unsigned int frdr:1; /* Frame Dropped by disabled port */
+	unsigned int reserved5:4;
+} __attribute__ ((__packed__));
+
+/* Set MAC address for a particular interface */
+int fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num);
+
+/* Remove a MAC address for a particular interface */
+void fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num);
+
+/* Get the FMAN statistics */
+void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
+
+/* Reset the FMAN statistics */
+void fman_if_stats_reset(struct fman_if *p);
+
+/* Get all of the FMAN statistics */
+void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
+
+/* Set ignore pause option for a specific interface */
+void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
+
+/* Set max frame length */
+void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
+
+/* Enable/disable Rx promiscuous mode on specified interface */
+void fman_if_promiscuous_enable(struct fman_if *p);
+void fman_if_promiscuous_disable(struct fman_if *p);
+
+/* Enable/disable Rx on specific interfaces */
+void fman_if_enable_rx(struct fman_if *p);
+void fman_if_disable_rx(struct fman_if *p);
+
+/* Enable/disable loopback on specific interfaces */
+void fman_if_loopback_enable(struct fman_if *p);
+void fman_if_loopback_disable(struct fman_if *p);
+
+/* Set buffer pool on specific interface */
+void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
+		    size_t bufsize);
+
+/* Get Flow Control pause quanta on specific interface */
+int fman_if_get_fc_quanta(struct fman_if *fm_if);
+
+/* Set Flow Control pause quanta on specific interface */
+int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
+
+/* Set default error fqid on specific interface */
+void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
+
+/* Get IC transfer params */
+int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
+
+/* Set IC transfer params */
+int fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp);
+
+/* Get interface fd->offset value */
+int fman_if_get_fdoff(struct fman_if *fm_if);
+
+/* Set interface fd->offset value */
+void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
+
+/* Get interface Max Frame length (MTU) */
+uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
+
+/* Set interface  Max Frame length (MTU) */
+void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
+
+/* Set interface next invoked action for dequeue operation */
+void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
+
+/* discard error packets on rx */
+void fman_if_discard_rx_errors(struct fman_if *fm_if);
+
+void fman_if_set_mcast_filter_table(struct fman_if *p);
+
+void fman_if_reset_mcast_filter_table(struct fman_if *p);
+
+int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
+
+int fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth);
+
+/* Enable/disable Rx on all interfaces */
+static inline void fman_if_enable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_enable_rx(__if);
+}
+
+static inline void fman_if_disable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_disable_rx(__if);
+}
+#endif /* __FSL_FMAN_H */
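
As an illustration of the fm_status_t layout declared in this header, an Rx
path could branch on the error flags of a received FD status word. This is a
hedged sketch (the bitfield layout assumes a big-endian view of the status
word, matching the hardware); the helper name fd_status_has_error() is
hypothetical:

    static inline int fd_status_has_error(const struct fm_status_t *st)
    {
            /* any DMA, physical, frame-size or length error bit set? */
            return st->dme || st->fpe || st->fse || st->lge;
    }
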
diff --git a/drivers/bus/dpaa/include/fsl_fman_crc64.h b/drivers/bus/dpaa/include/fsl_fman_crc64.h
new file mode 100644
index 0000000..af5803f
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman_crc64.h
@@ -0,0 +1,263 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_CRC64_H
+#define __FSL_FMAN_CRC64_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+ * The following definitions provide a software implementation of the CRC64
+ * algorithm implemented within Fman.
+ *
+ * The following example shows how to compute a CRC64 hash value based on
+ * SRC_IP, DST_IP and ESP_SPI values
+ *
+ *     #define compute_hash(saddr,daddr,spi) \
+ *        do { \
+ *           uint64_t result; \
+ *           result = fman_crc64_init(); \
+ *           result = fman_crc64_compute_32bit(saddr, result); \
+ *           result = fman_crc64_compute_32bit(daddr, result); \
+ *           result = fman_crc64_compute_32bit(spi, result); \
+ *           return (uint32_t) result & RC_HASH_MASK; \
+ *        } while (0);
+ *
+ * If hashing over a different number of fields (or of different types) is
+ * required, this can be implemented using the following primitives.
+ */
+
+/* The following table provides the constants used by the Fman CRC64
+ * implementation. The table is instantiated within the DPAA fman driver.
+ * However if the application is not going to be linked against the DPAA fman
+ * driver but will use this Fman CRC64 implementation, then it will need to
+ * instantiate this table by using the DECLARE_FMAN_CRC64_TABLE() macro.
+ */
+struct fman_crc64_t {
+	uint64_t initial;
+	uint64_t table[1 << 8];
+};
+extern struct fman_crc64_t FMAN_CRC64_ECMA_182;
+#define DECLARE_FMAN_CRC64_TABLE() \
+struct fman_crc64_t FMAN_CRC64_ECMA_182 = { \
+	0xFFFFFFFFFFFFFFFFULL, \
+	{ \
+		0x0000000000000000ULL, 0xb32e4cbe03a75f6fULL, \
+		0xf4843657a840a05bULL, 0x47aa7ae9abe7ff34ULL, \
+		0x7bd0c384ff8f5e33ULL, 0xc8fe8f3afc28015cULL, \
+		0x8f54f5d357cffe68ULL, 0x3c7ab96d5468a107ULL, \
+		0xf7a18709ff1ebc66ULL, 0x448fcbb7fcb9e309ULL, \
+		0x0325b15e575e1c3dULL, 0xb00bfde054f94352ULL, \
+		0x8c71448d0091e255ULL, 0x3f5f08330336bd3aULL, \
+		0x78f572daa8d1420eULL, 0xcbdb3e64ab761d61ULL, \
+		0x7d9ba13851336649ULL, 0xceb5ed8652943926ULL, \
+		0x891f976ff973c612ULL, 0x3a31dbd1fad4997dULL, \
+		0x064b62bcaebc387aULL, 0xb5652e02ad1b6715ULL, \
+		0xf2cf54eb06fc9821ULL, 0x41e11855055bc74eULL, \
+		0x8a3a2631ae2dda2fULL, 0x39146a8fad8a8540ULL, \
+		0x7ebe1066066d7a74ULL, 0xcd905cd805ca251bULL, \
+		0xf1eae5b551a2841cULL, 0x42c4a90b5205db73ULL, \
+		0x056ed3e2f9e22447ULL, 0xb6409f5cfa457b28ULL, \
+		0xfb374270a266cc92ULL, 0x48190ecea1c193fdULL, \
+		0x0fb374270a266cc9ULL, 0xbc9d3899098133a6ULL, \
+		0x80e781f45de992a1ULL, 0x33c9cd4a5e4ecdceULL, \
+		0x7463b7a3f5a932faULL, 0xc74dfb1df60e6d95ULL, \
+		0x0c96c5795d7870f4ULL, 0xbfb889c75edf2f9bULL, \
+		0xf812f32ef538d0afULL, 0x4b3cbf90f69f8fc0ULL, \
+		0x774606fda2f72ec7ULL, 0xc4684a43a15071a8ULL, \
+		0x83c230aa0ab78e9cULL, 0x30ec7c140910d1f3ULL, \
+		0x86ace348f355aadbULL, 0x3582aff6f0f2f5b4ULL, \
+		0x7228d51f5b150a80ULL, 0xc10699a158b255efULL, \
+		0xfd7c20cc0cdaf4e8ULL, 0x4e526c720f7dab87ULL, \
+		0x09f8169ba49a54b3ULL, 0xbad65a25a73d0bdcULL, \
+		0x710d64410c4b16bdULL, 0xc22328ff0fec49d2ULL, \
+		0x85895216a40bb6e6ULL, 0x36a71ea8a7ace989ULL, \
+		0x0adda7c5f3c4488eULL, 0xb9f3eb7bf06317e1ULL, \
+		0xfe5991925b84e8d5ULL, 0x4d77dd2c5823b7baULL, \
+		0x64b62bcaebc387a1ULL, 0xd7986774e864d8ceULL, \
+		0x90321d9d438327faULL, 0x231c512340247895ULL, \
+		0x1f66e84e144cd992ULL, 0xac48a4f017eb86fdULL, \
+		0xebe2de19bc0c79c9ULL, 0x58cc92a7bfab26a6ULL, \
+		0x9317acc314dd3bc7ULL, 0x2039e07d177a64a8ULL, \
+		0x67939a94bc9d9b9cULL, 0xd4bdd62abf3ac4f3ULL, \
+		0xe8c76f47eb5265f4ULL, 0x5be923f9e8f53a9bULL, \
+		0x1c4359104312c5afULL, 0xaf6d15ae40b59ac0ULL, \
+		0x192d8af2baf0e1e8ULL, 0xaa03c64cb957be87ULL, \
+		0xeda9bca512b041b3ULL, 0x5e87f01b11171edcULL, \
+		0x62fd4976457fbfdbULL, 0xd1d305c846d8e0b4ULL, \
+		0x96797f21ed3f1f80ULL, 0x2557339fee9840efULL, \
+		0xee8c0dfb45ee5d8eULL, 0x5da24145464902e1ULL, \
+		0x1a083bacedaefdd5ULL, 0xa9267712ee09a2baULL, \
+		0x955cce7fba6103bdULL, 0x267282c1b9c65cd2ULL, \
+		0x61d8f8281221a3e6ULL, 0xd2f6b4961186fc89ULL, \
+		0x9f8169ba49a54b33ULL, 0x2caf25044a02145cULL, \
+		0x6b055fede1e5eb68ULL, 0xd82b1353e242b407ULL, \
+		0xe451aa3eb62a1500ULL, 0x577fe680b58d4a6fULL, \
+		0x10d59c691e6ab55bULL, 0xa3fbd0d71dcdea34ULL, \
+		0x6820eeb3b6bbf755ULL, 0xdb0ea20db51ca83aULL, \
+		0x9ca4d8e41efb570eULL, 0x2f8a945a1d5c0861ULL, \
+		0x13f02d374934a966ULL, 0xa0de61894a93f609ULL, \
+		0xe7741b60e174093dULL, 0x545a57dee2d35652ULL, \
+		0xe21ac88218962d7aULL, 0x5134843c1b317215ULL, \
+		0x169efed5b0d68d21ULL, 0xa5b0b26bb371d24eULL, \
+		0x99ca0b06e7197349ULL, 0x2ae447b8e4be2c26ULL, \
+		0x6d4e3d514f59d312ULL, 0xde6071ef4cfe8c7dULL, \
+		0x15bb4f8be788911cULL, 0xa6950335e42fce73ULL, \
+		0xe13f79dc4fc83147ULL, 0x521135624c6f6e28ULL, \
+		0x6e6b8c0f1807cf2fULL, 0xdd45c0b11ba09040ULL, \
+		0x9aefba58b0476f74ULL, 0x29c1f6e6b3e0301bULL, \
+		0xc96c5795d7870f42ULL, 0x7a421b2bd420502dULL, \
+		0x3de861c27fc7af19ULL, 0x8ec62d7c7c60f076ULL, \
+		0xb2bc941128085171ULL, 0x0192d8af2baf0e1eULL, \
+		0x4638a2468048f12aULL, 0xf516eef883efae45ULL, \
+		0x3ecdd09c2899b324ULL, 0x8de39c222b3eec4bULL, \
+		0xca49e6cb80d9137fULL, 0x7967aa75837e4c10ULL, \
+		0x451d1318d716ed17ULL, 0xf6335fa6d4b1b278ULL, \
+		0xb199254f7f564d4cULL, 0x02b769f17cf11223ULL, \
+		0xb4f7f6ad86b4690bULL, 0x07d9ba1385133664ULL, \
+		0x4073c0fa2ef4c950ULL, 0xf35d8c442d53963fULL, \
+		0xcf273529793b3738ULL, 0x7c0979977a9c6857ULL, \
+		0x3ba3037ed17b9763ULL, 0x888d4fc0d2dcc80cULL, \
+		0x435671a479aad56dULL, 0xf0783d1a7a0d8a02ULL, \
+		0xb7d247f3d1ea7536ULL, 0x04fc0b4dd24d2a59ULL, \
+		0x3886b22086258b5eULL, 0x8ba8fe9e8582d431ULL, \
+		0xcc0284772e652b05ULL, 0x7f2cc8c92dc2746aULL, \
+		0x325b15e575e1c3d0ULL, 0x8175595b76469cbfULL, \
+		0xc6df23b2dda1638bULL, 0x75f16f0cde063ce4ULL, \
+		0x498bd6618a6e9de3ULL, 0xfaa59adf89c9c28cULL, \
+		0xbd0fe036222e3db8ULL, 0x0e21ac88218962d7ULL, \
+		0xc5fa92ec8aff7fb6ULL, 0x76d4de52895820d9ULL, \
+		0x317ea4bb22bfdfedULL, 0x8250e80521188082ULL, \
+		0xbe2a516875702185ULL, 0x0d041dd676d77eeaULL, \
+		0x4aae673fdd3081deULL, 0xf9802b81de97deb1ULL, \
+		0x4fc0b4dd24d2a599ULL, 0xfceef8632775faf6ULL, \
+		0xbb44828a8c9205c2ULL, 0x086ace348f355aadULL, \
+		0x34107759db5dfbaaULL, 0x873e3be7d8faa4c5ULL, \
+		0xc094410e731d5bf1ULL, 0x73ba0db070ba049eULL, \
+		0xb86133d4dbcc19ffULL, 0x0b4f7f6ad86b4690ULL, \
+		0x4ce50583738cb9a4ULL, 0xffcb493d702be6cbULL, \
+		0xc3b1f050244347ccULL, 0x709fbcee27e418a3ULL, \
+		0x3735c6078c03e797ULL, 0x841b8ab98fa4b8f8ULL, \
+		0xadda7c5f3c4488e3ULL, 0x1ef430e13fe3d78cULL, \
+		0x595e4a08940428b8ULL, 0xea7006b697a377d7ULL, \
+		0xd60abfdbc3cbd6d0ULL, 0x6524f365c06c89bfULL, \
+		0x228e898c6b8b768bULL, 0x91a0c532682c29e4ULL, \
+		0x5a7bfb56c35a3485ULL, 0xe955b7e8c0fd6beaULL, \
+		0xaeffcd016b1a94deULL, 0x1dd181bf68bdcbb1ULL, \
+		0x21ab38d23cd56ab6ULL, 0x9285746c3f7235d9ULL, \
+		0xd52f0e859495caedULL, 0x6601423b97329582ULL, \
+		0xd041dd676d77eeaaULL, 0x636f91d96ed0b1c5ULL, \
+		0x24c5eb30c5374ef1ULL, 0x97eba78ec690119eULL, \
+		0xab911ee392f8b099ULL, 0x18bf525d915feff6ULL, \
+		0x5f1528b43ab810c2ULL, 0xec3b640a391f4fadULL, \
+		0x27e05a6e926952ccULL, 0x94ce16d091ce0da3ULL, \
+		0xd3646c393a29f297ULL, 0x604a2087398eadf8ULL, \
+		0x5c3099ea6de60cffULL, 0xef1ed5546e415390ULL, \
+		0xa8b4afbdc5a6aca4ULL, 0x1b9ae303c601f3cbULL, \
+		0x56ed3e2f9e224471ULL, 0xe5c372919d851b1eULL, \
+		0xa26908783662e42aULL, 0x114744c635c5bb45ULL, \
+		0x2d3dfdab61ad1a42ULL, 0x9e13b115620a452dULL, \
+		0xd9b9cbfcc9edba19ULL, 0x6a978742ca4ae576ULL, \
+		0xa14cb926613cf817ULL, 0x1262f598629ba778ULL, \
+		0x55c88f71c97c584cULL, 0xe6e6c3cfcadb0723ULL, \
+		0xda9c7aa29eb3a624ULL, 0x69b2361c9d14f94bULL, \
+		0x2e184cf536f3067fULL, 0x9d36004b35545910ULL, \
+		0x2b769f17cf112238ULL, 0x9858d3a9ccb67d57ULL, \
+		0xdff2a94067518263ULL, 0x6cdce5fe64f6dd0cULL, \
+		0x50a65c93309e7c0bULL, 0xe388102d33392364ULL, \
+		0xa4226ac498dedc50ULL, 0x170c267a9b79833fULL, \
+		0xdcd7181e300f9e5eULL, 0x6ff954a033a8c131ULL, \
+		0x28532e49984f3e05ULL, 0x9b7d62f79be8616aULL, \
+		0xa707db9acf80c06dULL, 0x14299724cc279f02ULL, \
+		0x5383edcd67c06036ULL, 0xe0ada17364673f59ULL} \
+}
+
+/*
+ * Return the initial CRC seed. Use the value returned from this API as the
+ * "crc" parameter to the first call to add data.
+ */
+static inline uint64_t fman_crc64_init(void)
+{
+	return FMAN_CRC64_ECMA_182.initial;
+}
+
+/* Updates the CRC with arbitrary data */
+static inline uint64_t fman_crc64_update(uint64_t crc,
+					 void *data, unsigned int len)
+{
+	uint8_t *p = data;
+	while (len--)
+		crc = FMAN_CRC64_ECMA_182.table[(crc ^ *(p++)) & 0xff] ^
+				(crc >> 8);
+	return crc;
+}
+
+/* Shorthands for updating the CRC with 8/16/32 bits of data.
+ * IMPORTANT NOTE: the typed "data" arguments should not be mistaken for
+ * host-endian numerical values; the assumption is that these values contain
+ * big-endian (ie. network byte order) data.
+ */
+static inline uint64_t fman_crc64_compute_32bit(uint32_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_16bit(uint16_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_8bit(uint8_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+
+/*
+ * Finalise the CRC (by inverting all bits, i.e. 1's complement)
+ */
+static inline uint64_t fman_crc64_finish(uint64_t seed)
+{
+	return ~seed;
+}
+
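+/*
+ * Usage sketch (illustrative only; "buf" and "len" are assumed to be a
+ * caller-supplied buffer and its length):
+ *
+ *   uint64_t crc = fman_crc64_init();
+ *   crc = fman_crc64_update(crc, buf, len);
+ *   crc = fman_crc64_finish(crc);
+ */
+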
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_FMAN_CRC64_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 07/41] bus/dpaa: enable DPAA IOCTL portal driver
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (5 preceding siblings ...)
  2017-09-09 11:20       ` [PATCH v4 06/41] bus/dpaa: add FMan hardware operations Shreyansh Jain
@ 2017-09-09 11:20       ` Shreyansh Jain
  2017-09-18 14:51         ` Ferruh Yigit
  2017-09-09 11:20       ` [PATCH v4 08/41] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
                         ` (36 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:20 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Userspace applications interact with DPAA blocks using this IOCTL driver.
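
As an illustration (a sketch only, not part of the patch itself),
allocating and then releasing a contiguous range of frame-queue IDs
through this layer could look like:

    uint32_t fqids[8];
    int n = process_alloc(dpaa_id_fqid, fqids, 8, 0, 0);

    if (n > 0)
        process_release(dpaa_id_fqid, fqids[0], n);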

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile             |   4 +-
 drivers/bus/dpaa/base/qbman/process.c | 331 ++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h    |  88 +++++++++
 drivers/bus/dpaa/include/process.h    | 107 +++++++++++
 4 files changed, 529 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/process.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 9f416fe..b0083c9 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -50,6 +50,7 @@ CFLAGS += -D _GNU_SOURCE
 
 CFLAGS += -I$(RTE_BUS_DPAA)/
 CFLAGS += -I$(RTE_BUS_DPAA)/include
+CFLAGS += -I$(RTE_BUS_DPAA)/base/qbman
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -67,6 +68,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
-	base/fman/netcfg_layer.c
+	base/fman/netcfg_layer.c \
+	base/qbman/process.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
new file mode 100644
index 0000000..b8ec539
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -0,0 +1,331 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <assert.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+
+#include "process.h"
+
+#include <fsl_usd.h>
+
+/* As higher-level drivers will be built on top of this (dma_mem, qbman, ...),
+ * it's preferable that the process driver itself not provide any exported API.
+ * As such, combined with the fact that none of these operations are
+ * performance critical, it is justified to use lazy initialisation, so that's
+ * what the lock is for.
+ */
+static int fd = -1;
+static pthread_mutex_t fd_init_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static int check_fd(void)
+{
+	int ret;
+
+	if (fd >= 0)
+		return 0;
+	ret = pthread_mutex_lock(&fd_init_lock);
+	assert(!ret);
+	/* check again with the lock held */
+	if (fd < 0)
+		fd = open(PROCESS_PATH, O_RDWR);
+	ret = pthread_mutex_unlock(&fd_init_lock);
+	assert(!ret);
+	return (fd >= 0) ? 0 : -ENODEV;
+}
+
+#define DPAA_IOCTL_MAGIC 'u'
+struct dpaa_ioctl_id_alloc {
+	uint32_t base; /* Return value, the start of the allocated range */
+	enum dpaa_id_type id_type; /* what kind of resource(s) to allocate */
+	uint32_t num; /* how many IDs to allocate (and return value) */
+	uint32_t align; /* must be a power of 2, 0 is treated like 1 */
+	int partial; /* whether to allow less than 'num' */
+};
+
+struct dpaa_ioctl_id_release {
+	/* Input */
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+struct dpaa_ioctl_id_reserve {
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+#define DPAA_IOCTL_ID_ALLOC \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x01, struct dpaa_ioctl_id_alloc)
+#define DPAA_IOCTL_ID_RELEASE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x02, struct dpaa_ioctl_id_release)
+#define DPAA_IOCTL_ID_RESERVE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x0A, struct dpaa_ioctl_id_reserve)
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial)
+{
+	struct dpaa_ioctl_id_alloc id = {
+		.id_type = id_type,
+		.num = num,
+		.align = align,
+		.partial = partial
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	ret = ioctl(fd, DPAA_IOCTL_ID_ALLOC, &id);
+	if (ret)
+		return ret;
+	for (ret = 0; ret < (int)id.num; ret++)
+		base[ret] = id.base + ret;
+	return id.num;
+}
+
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_release id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret) {
+		fprintf(stderr, "Process FD failure\n");
+		return;
+	}
+	ret = ioctl(fd, DPAA_IOCTL_ID_RELEASE, &id);
+	if (ret)
+		fprintf(stderr, "Process FD ioctl failure type %d base 0x%x num %d\n",
+			id_type, base, num);
+}
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_reserve id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	return ioctl(fd, DPAA_IOCTL_ID_RESERVE, &id);
+}
+
+/***************************************/
+/* Mapping and using QMan/BMan portals */
+/***************************************/
+
+#define DPAA_IOCTL_PORTAL_MAP \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x07, struct dpaa_ioctl_portal_map)
+#define DPAA_IOCTL_PORTAL_UNMAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x08, struct dpaa_portal_map)
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_MAP, params);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_MAP)");
+		return ret;
+	}
+	return 0;
+}
+
+int process_portal_unmap(struct dpaa_portal_map *map)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_UNMAP, map);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_UNMAP)");
+		return ret;
+	}
+	return 0;
+}
+
+#define DPAA_IOCTL_PORTAL_IRQ_MAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x09, struct dpaa_ioctl_irq_map)
+
+int process_portal_irq_map(int ifd, struct dpaa_ioctl_irq_map *map)
+{
+	map->fd = fd;
+	return ioctl(ifd, DPAA_IOCTL_PORTAL_IRQ_MAP, map);
+}
+
+int process_portal_irq_unmap(int ifd)
+{
+	return close(ifd);
+}
+
+struct dpaa_ioctl_raw_portal {
+	/* inputs */
+	enum dpaa_portal_type type; /* Type of portal to allocate */
+
+	uint8_t enable_stash; /* set to non zero to turn on stashing */
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map or QBMAN_ANY_PORTAL_IDX
+	 * for don't care.  The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+#define DPAA_IOCTL_ALLOC_RAW_PORTAL \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x0C, struct dpaa_ioctl_raw_portal)
+
+#define DPAA_IOCTL_FREE_RAW_PORTAL \
+	_IOR(DPAA_IOCTL_MAGIC, 0x0D, struct dpaa_ioctl_raw_portal)
+
+static int process_portal_allocate(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_ALLOC_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_ALLOC_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+static int process_portal_free(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_FREE_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_FREE_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.enable_stash = portal->enable_stash;
+	input.cpu = portal->cpu;
+	input.cache = portal->cache;
+	input.window = portal->window;
+	input.sdest = portal->sdest;
+
+	ret = process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena = input.cena;
+	return 0;
+}
+
+int qman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.enable_stash = 0;
+
+	ret = process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena = input.cena;
+	return 0;
+}
+
+int bman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
new file mode 100644
index 0000000..4ff48c6
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_USD_H
+#define __FSL_USD_H
+
+#include <compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define QBMAN_ANY_PORTAL_IDX 0xffffffff
+
+/* Obtain and free raw (uninitialized) portals */
+
+struct dpaa_raw_portal {
+	/* inputs */
+
+	/* set to non zero to turn on stashing */
+	uint8_t enable_stash;
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map or QBMAN_ANY_PORTAL_IDX
+	 * for don't care.  The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int qman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int bman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_USD_H */
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
new file mode 100644
index 0000000..989ddcd
--- /dev/null
+++ b/drivers/bus/dpaa/include/process.h
@@ -0,0 +1,107 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PROCESS_H
+#define	__PROCESS_H
+
+#include <compat.h>
+
+/* The process device underlies process-wide user/kernel interactions, such as
+ * mapping dma_mem memory and providing accompanying ioctl()s. (This isn't used
+ * for portals, which use one UIO device each.)
+ */
+#define PROCESS_PATH		"/dev/fsl-usdpaa"
+
+/* Allocation of resource IDs uses a generic interface. This enum is used to
+ * distinguish between the type of underlying object being manipulated.
+ */
+enum dpaa_id_type {
+	dpaa_id_fqid,
+	dpaa_id_bpid,
+	dpaa_id_qpool,
+	dpaa_id_cgrid,
+	dpaa_id_max /* <-- not a valid type, represents the number of types */
+};
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial);
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+/* Mapping and using QMan/BMan portals */
+enum dpaa_portal_type {
+	dpaa_portal_qman,
+	dpaa_portal_bman,
+};
+
+struct dpaa_ioctl_portal_map {
+	/* Input parameter: whether a QMan or a BMan portal is required. */
+	enum dpaa_portal_type type;
+	/* Specifies a specific portal index to map or 0xffffffff
+	 * for don't care.
+	 */
+	uint32_t index;
+
+	/* Return value if the map succeeds, this gives the mapped
+	 * cache-inhibited (cinh) and cache-enabled (cena) addresses.
+	 */
+	struct dpaa_portal_map {
+		void *cinh;
+		void *cena;
+	} addr;
+	/* Qman-specific return values */
+	u16 channel;
+	uint32_t pools;
+};
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params);
+int process_portal_unmap(struct dpaa_portal_map *map);
+
+struct dpaa_ioctl_irq_map {
+	enum dpaa_portal_type type; /* Type of portal to map */
+	int fd; /* File descriptor that contains the portal */
+	void *portal_cinh; /* Cache inhibited area to identify the portal */
+};
+
+int process_portal_irq_map(int fd, struct dpaa_ioctl_irq_map *irq);
+int process_portal_irq_unmap(int fd);
+
+#endif	/*  __PROCESS_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 08/41] bus/dpaa: add layer for interrupt emulation using pthread
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (6 preceding siblings ...)
  2017-09-09 11:20       ` [PATCH v4 07/41] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
@ 2017-09-09 11:20       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 09/41] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
                         ` (35 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:20 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

An interrupt manager is implemented by emulation over pthreads.
Handlers are registered by the QBMAN layer so that it is notified
about any interrupt request from DPAA blocks in userspace.
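
For illustration only (a sketch; "my_isr" is hypothetical, and
IRQ_HANDLED is assumed from the compat macros added earlier in this
series):

    static irqreturn_t my_isr(int irq, void *arg)
    {
        /* handle the portal event */
        return IRQ_HANDLED;
    }

    /* register, let the IRQ thread demux, then unregister */
    qbman_request_irq(irq, my_isr, 0, "qman-portal", NULL);
    qbman_invoke_irq(irq);
    qbman_free_irq(irq, NULL);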

Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile              |   3 +-
 drivers/bus/dpaa/base/qbman/dpaa_sys.c | 136 +++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h |  65 ++++++++++++++++
 3 files changed, 203 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index b0083c9..ad6f8c0 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -69,6 +69,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
-	base/qbman/process.c
+	base/qbman/process.c \
+	base/qbman/dpaa_sys.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.c b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
new file mode 100644
index 0000000..0017da5
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
@@ -0,0 +1,136 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <process.h>
+#include "dpaa_sys.h"
+
+struct process_interrupt {
+	int irq;
+	irqreturn_t (*isr)(int irq, void *arg);
+	unsigned long flags;
+	const char *name;
+	void *arg;
+	struct list_head node;
+};
+
+static COMPAT_LIST_HEAD(process_irq_list);
+static pthread_mutex_t process_irq_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static void process_interrupt_install(struct process_interrupt *irq)
+{
+	int ret;
+	/* Add the irq to the end of the list */
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_add_tail(&irq->node, &process_irq_list);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static void process_interrupt_remove(struct process_interrupt *irq)
+{
+	int ret;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_del(&irq->node);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static struct process_interrupt *process_interrupt_find(int irq_num)
+{
+	int ret;
+	struct process_interrupt *i, *found = NULL;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	/* NULL is returned if irq_num has no registered handler */
+	list_for_each_entry(i, &process_irq_list, node) {
+		if (i->irq == irq_num) {
+			found = i;
+			break;
+		}
+	}
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+	return found;
+}
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name,
+		      void *arg __maybe_unused)
+{
+	struct process_interrupt *irq_node =
+		kmalloc(sizeof(*irq_node), GFP_KERNEL);
+
+	if (!irq_node)
+		return -ENOMEM;
+	irq_node->irq = irq;
+	irq_node->isr = isr;
+	irq_node->flags = flags;
+	irq_node->name = name;
+	irq_node->arg = arg;
+	process_interrupt_install(irq_node);
+	return 0;
+}
+
+int qbman_free_irq(int irq, __maybe_unused void *arg)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (!irq_node)
+		return -EINVAL;
+	process_interrupt_remove(irq_node);
+	kfree(irq_node);
+	return 0;
+}
+
+/* This is the interface from the platform-specific driver code to obtain
+ * interrupt handlers that have been registered.
+ */
+void qbman_invoke_irq(int irq)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (irq_node)
+		irq_node->isr(irq, irq_node->arg);
+}
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.h b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
new file mode 100644
index 0000000..c53035a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
@@ -0,0 +1,65 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_SYS_H
+#define __DPAA_SYS_H
+
+#include <of.h>
+
+/* For 2-element tables related to cache-inhibited and cache-enabled mappings */
+#define DPAA_PORTAL_CE 0
+#define DPAA_PORTAL_CI 1
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+#define DPAA_ASSERT(x) ASSERT(x)
+#else
+#define DPAA_ASSERT(x)	do {  } while (0)
+#endif
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name, void *arg);
+int qbman_free_irq(int irq, void *arg);
+
+void qbman_invoke_irq(int irq);
+
+#endif /* __DPAA_SYS_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 09/41] bus/dpaa: add routines for managing a RB tree
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (7 preceding siblings ...)
  2017-09-09 11:20       ` [PATCH v4 08/41] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 10/41] bus/dpaa: add QMAN interface driver Shreyansh Jain
                         ` (34 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

QMAN frames are managed over an RB tree data structure.
This patch introduces the necessary routines for implementing an RB tree.
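
For illustration (a sketch; "struct my_fq" is hypothetical), the
IMPLEMENT_DPAA_RBTREE macro generates typed push/del/find helpers keyed
on a u32 field:

    struct my_fq {
        u32 fqid;
        struct rb_node node;
    };
    IMPLEMENT_DPAA_RBTREE(fqtree, struct my_fq, node, fqid);

    struct dpa_rbtree tree = DPAA_RBTREE;
    struct my_fq fq = { .fqid = 42 };

    fqtree_push(&tree, &fq);    /* returns -EBUSY on a duplicate fqid */
    struct my_fq *hit = fqtree_find(&tree, 42);
    fqtree_del(&tree, &fq);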

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/include/dpaa_rbtree.h | 143 +++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h

diff --git a/drivers/bus/dpaa/include/dpaa_rbtree.h b/drivers/bus/dpaa/include/dpaa_rbtree.h
new file mode 100644
index 0000000..f8c9b59
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_rbtree.h
@@ -0,0 +1,143 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_RBTREE_H
+#define __DPAA_RBTREE_H
+
+#include <rte_common.h>
+/************/
+/* RB-trees */
+/************/
+
+/* Linux has a good RB-tree implementation, which we can't use (GPL). It also has
+ * a flat/hooked-in interface that virtually requires license-contamination in
+ * order to write a caller-compatible implementation. Instead, I've created an
+ * RB-tree encapsulation on top of linux's primitives (it does some of the work
+ * the client logic would normally do), and this gives us something we can
+ * reimplement on LWE. Unfortunately there are no good+free RB-tree
+ * implementations out there that are license-compatible and "flat" (ie. no
+ * dynamic allocation). I did find a malloc-based one that I could convert, but
+ * that will be a task for later on. For now, LWE's RB-tree is implemented using
+ * an ordered linked-list.
+ *
+ * Note, the only linux-esque type is "struct rb_node", because it's used
+ * statically in the exported header, so it can't be opaque. Our version doesn't
+ * include a "rb_parent_color" field because we're doing linked-list instead of
+ * a true rb-tree.
+ */
+
+struct rb_node {
+	struct rb_node *prev, *next;
+};
+
+struct dpa_rbtree {
+	struct rb_node *head, *tail;
+};
+
+#define DPAA_RBTREE { NULL, NULL }
+static inline void dpa_rbtree_init(struct dpa_rbtree *tree)
+{
+	tree->head = tree->tail = NULL;
+}
+
+#define QMAN_NODE2OBJ(ptr, type, node_field) \
+	(type *)((char *)ptr - offsetof(type, node_field))
+
+#define IMPLEMENT_DPAA_RBTREE(name, type, node_field, val_field) \
+static inline int name##_push(struct dpa_rbtree *tree, type *obj) \
+{ \
+	struct rb_node *node = tree->head; \
+	if (!node) { \
+		tree->head = tree->tail = &obj->node_field; \
+		obj->node_field.prev = obj->node_field.next = NULL; \
+		return 0; \
+	} \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (obj->val_field == item->val_field) \
+			return -EBUSY; \
+		if (obj->val_field < item->val_field) { \
+			if (tree->head == node) \
+				tree->head = &obj->node_field; \
+			else \
+				node->prev->next = &obj->node_field; \
+			obj->node_field.prev = node->prev; \
+			obj->node_field.next = node; \
+			node->prev = &obj->node_field; \
+			return 0; \
+		} \
+		node = node->next; \
+	} \
+	obj->node_field.prev = tree->tail; \
+	obj->node_field.next = NULL; \
+	tree->tail->next = &obj->node_field; \
+	tree->tail = &obj->node_field; \
+	return 0; \
+} \
+static inline void name##_del(struct dpa_rbtree *tree, type *obj) \
+{ \
+	if (tree->head == &obj->node_field) { \
+		if (tree->tail == &obj->node_field) \
+			/* Only item in the list */ \
+			tree->head = tree->tail = NULL; \
+		else { \
+			/* Is the head, next != NULL */ \
+			tree->head = tree->head->next; \
+			tree->head->prev = NULL; \
+		} \
+	} else { \
+		if (tree->tail == &obj->node_field) { \
+			/* Is the tail, prev != NULL */ \
+			tree->tail = tree->tail->prev; \
+			tree->tail->next = NULL; \
+		} else { \
+			/* Is neither the head nor the tail */ \
+			obj->node_field.prev->next = obj->node_field.next; \
+			obj->node_field.next->prev = obj->node_field.prev; \
+		} \
+	} \
+} \
+static inline type *name##_find(struct dpa_rbtree *tree, u32 val) \
+{ \
+	struct rb_node *node = tree->head; \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (val == item->val_field) \
+			return item; \
+		if (val < item->val_field) \
+			return NULL; \
+		node = node->next; \
+	} \
+	return NULL; \
+}
+
+#endif /* __DPAA_RBTREE_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 10/41] bus/dpaa: add QMAN interface driver
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (8 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 09/41] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 11/41] bus/dpaa: add QMan driver core routines Shreyansh Jain
                         ` (33 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Queue Manager (QMan) is a hardware queue management block that
allows software and accelerators on the datapath to enqueue and dequeue
frames in order to communicate.

This is part of the QBMAN DPAA block.
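
As an illustration (a sketch; the enqueue/dequeue paths arrive in later
patches of this series), a datapath thread brings up its affine portal
roughly as follows:

    if (qman_global_init())     /* once per process, parses device-tree */
        return -1;
    if (qman_thread_init())     /* per thread: map an affine QMan portal */
        return -1;
    /* ... datapath work using the QMan API ... */
    qman_thread_finish();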

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    4 +
 drivers/bus/dpaa/base/qbman/qman_driver.c |  271 +++++++
 drivers/bus/dpaa/base/qbman/qman_priv.h   |  303 +++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 1254 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   13 +
 5 files changed, 1845 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index ad6f8c0..29f01df 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -70,6 +70,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman_driver.c \
 	base/qbman/dpaa_sys.c
 
+# Link Pthread
+LDLIBS += -lpthread
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
new file mode 100644
index 0000000..80dde20
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -0,0 +1,271 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "qman_priv.h"
+#include <sys/ioctl.h>
+#include <rte_branch_prediction.h>
+
+/* Global variable containing revision id (even on non-control plane systems
+ * where CCSR isn't available).
+ */
+u16 qman_ip_rev;
+u16 qm_channel_pool1 = QMAN_CHANNEL_POOL1;
+u16 qm_channel_caam = QMAN_CHANNEL_CAAM;
+u16 qm_channel_pme = QMAN_CHANNEL_PME;
+
+/* CCSR map address to access CCSR-based registers */
+void *qman_ccsr_map;
+/* The qman clock frequency */
+u32 qman_clk;
+
+static __thread int fd = -1;
+static __thread struct qm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_qman
+};
+
+static int fsl_qman_portal_init(uint32_t index, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu\n");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!\n");
+		return -EINVAL;
+	}
+
+	/* Allocate and map a qman portal */
+	map.index = index;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	pcfg.channel = map.channel;
+	pcfg.pools = map.pools;
+	pcfg.index = map.index;
+
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+
+	fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("QMan irq init failed\n");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
+	pcfg.is_shared = is_shared;
+	pcfg.node = NULL;
+	pcfg.irq = fd;
+
+	irq_map.type = dpaa_portal_qman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_qman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int qman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_qman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int qman_thread_finish(void)
+{
+	return fsl_qman_portal_finish();
+}
+
+void qman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int qman_global_init(void)
+{
+	const struct device_node *dt_node;
+	int ret = 0;
+	size_t lenp;
+	const u32 *chanid;
+	static int ccsr_map_fd;
+	const uint32_t *qman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	const u32 *clk;
+
+	static int done;
+
+	if (done)
+		return -EBUSY;
+
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman-portal");
+	if (!dt_node) {
+		pr_err("No qman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,qman-portal-1.0.0"))
+		pr_err("QMan rev1.0 on P4080 rev1 is not supported!\n");
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.1") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.1.0"))
+		qman_ip_rev = QMAN_REV11;
+	else if	(of_device_is_compatible(dt_node, "fsl,qman-portal-1.2") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.2.0"))
+		qman_ip_rev = QMAN_REV12;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-2.0.0"))
+		qman_ip_rev = QMAN_REV20;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.1"))
+		qman_ip_rev = QMAN_REV30;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.3"))
+		qman_ip_rev = QMAN_REV31;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.1"))
+		qman_ip_rev = QMAN_REV32;
+	else
+		qman_ip_rev = QMAN_REV11;
+
+	if (!qman_ip_rev) {
+		pr_err("Unknown qman portal version\n");
+		return -ENODEV;
+	}
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30) {
+		qm_channel_pool1 = QMAN_CHANNEL_POOL1_REV3;
+		qm_channel_caam = QMAN_CHANNEL_CAAM_REV3;
+		qm_channel_pme = QMAN_CHANNEL_PME_REV3;
+	}
+
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,pool-channel-range");
+	if (!dt_node) {
+		pr_err("No qman pool channel range available\n");
+		return -ENODEV;
+	}
+	chanid = of_get_property(dt_node, "fsl,pool-channel-range", &lenp);
+	if (!chanid) {
+		pr_err("Can not get pool-channel-range property\n");
+		return -EINVAL;
+	}
+
+	/* get ccsr base */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman");
+	if (!dt_node) {
+		pr_err("No qman device node available\n");
+		return -ENODEV;
+	}
+	qman_addr = of_get_address(dt_node, 0, &regs_size, NULL);
+	if (!qman_addr) {
+		pr_err("of_get_address cannot return qman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(dt_node, qman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Can not open /dev/mem for qman ccsr map\n");
+		return ccsr_map_fd;
+	}
+
+	qman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (qman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map qman ccsr base\n");
+		return -EINVAL;
+	}
+
+	clk = of_get_property(dt_node, "clock-frequency", NULL);
+	if (!clk)
+		pr_warn("Can't find Qman clock frequency\n");
+	else
+		qman_clk = be32_to_cpu(*clk);
+
+	done = 1;
+	return ret;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
new file mode 100644
index 0000000..4a11e40
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -0,0 +1,303 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __QMAN_PRIV_H
+#define __QMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_qman.h>
+
+/* Congestion Groups */
+/*
+ * This wrapper represents a bit-array for the state of the 256 QMan congestion
+ * groups. Is also used as a *mask* for congestion groups, eg. so we ignore
+ * those that don't concern us. We harness the structure and accessor details
+ * already used in the management command to query congestion groups.
+ */
+struct qman_cgrs {
+	struct __qm_mcr_querycongestion q;
+};
+
+static inline void qman_cgrs_init(struct qman_cgrs *c)
+{
+	memset(c, 0, sizeof(*c));
+}
+
+static inline void qman_cgrs_fill(struct qman_cgrs *c)
+{
+	memset(c, 0xff, sizeof(*c));
+}
+
+static inline int qman_cgrs_get(struct qman_cgrs *c, int num)
+{
+	return QM_MCR_QUERYCONGESTION(&c->q, num);
+}
+
+static inline void qman_cgrs_set(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] |= (0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline void qman_cgrs_unset(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] &= ~(0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline int qman_cgrs_next(struct qman_cgrs *c, int num)
+{
+	while ((++num < (int)__CGR_NUM) && !qman_cgrs_get(c, num))
+		;
+	return num;
+}
+
+static inline void qman_cgrs_cp(struct qman_cgrs *dest,
+				const struct qman_cgrs *src)
+{
+	memcpy(dest, src, sizeof(*dest));
+}
+
+static inline void qman_cgrs_and(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) & *(_b++);
+}
+
+static inline void qman_cgrs_xor(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) ^ *(_b++);
+}
+
+/* used by CCSR and portal interrupt code */
+enum qm_isr_reg {
+	qm_isr_status = 0,
+	qm_isr_enable = 1,
+	qm_isr_disable = 2,
+	qm_isr_inhibit = 3
+};
+
+struct qm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	struct device_node *node;
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* If the caller enables DQRR stashing (and thus wishes to operate the
+	 * portal from only one cpu), this is the logical CPU that the portal
+	 * will stash to. Whether stashing is enabled or not, this setting is
+	 * also used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.)
+	 */
+	int is_shared;
+	/* The portal's dedicated channel id, use this value for initialising
+	 * frame queues to target this portal when scheduled.
+	 */
+	u16 channel;
+	/* A mask of which pool channels this portal has dequeue access to
+	 * (using QM_SDQCR_CHANNELS_POOL(n) for the bitmask).
+	 */
+	u32 pools;
+
+};
+
+/* Revision info (for errata and feature handling) */
+#define QMAN_REV11 0x0101
+#define QMAN_REV12 0x0102
+#define QMAN_REV20 0x0200
+#define QMAN_REV30 0x0300
+#define QMAN_REV31 0x0301
+#define QMAN_REV32 0x0302
+extern u16 qman_ip_rev; /* 0 if uninitialised, otherwise QMAN_REVx */
+extern u32 qman_clk;
+
+int qm_set_wpm(int wpm);
+int qm_get_wpm(int *wpm);
+
+struct qman_portal *qman_create_affine_portal(
+			const struct qm_portal_config *config,
+			const struct qman_cgrs *cgrs);
+const struct qm_portal_config *qman_destroy_affine_portal(void);
+
+struct qm_portal_config *qm_get_unused_portal(void);
+struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
+
+void qm_put_unused_portal(struct qm_portal_config *pcfg);
+void qm_set_liodns(struct qm_portal_config *pcfg);
+
+/* This CGR feature is supported by h/w and required by unit-tests and the
+ * debugfs hooks, so is implemented in the driver. However it allows an explicit
+ * corruption of h/w fields by s/w that are usually incorruptible (because the
+ * counters are usually maintained entirely within h/w). As such, we declare
+ * this API internally.
+ */
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result);
+
+/*   QMan s/w corenet portal, low-level i/face	 */
+
+/*
+ * Choose one SOURCE. Choose one COUNT. Choose one
+ * dequeue TYPE. Choose TOKEN (8-bit).
+ * If SOURCE == CHANNELS,
+ *   Choose CHANNELS_DEDICATED and/or CHANNELS_POOL(n).
+ *   You can choose DEDICATED_PRECEDENCE if the portal channel should have
+ *   priority.
+ * If SOURCE == SPECIFICWQ,
+ *     Either select the work-queue ID with SPECIFICWQ_WQ(), or select the
+ *     channel (SPECIFICWQ_DEDICATED or SPECIFICWQ_POOL()) and specify the
+ *     work-queue priority (0-7) with SPECIFICWQ_WQ() - either way, you get the
+ *     same value.
+ */
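+/* For instance (an illustrative composition only, not from the original
+ * sources): dequeue up to 3 frames of active-QoS type from the dedicated
+ * channel, tagged with token 0xab:
+ *
+ *   QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |
+ *   QM_SDQCR_TYPE_ACTIVE_QOS | QM_SDQCR_TOKEN_SET(0xab) |
+ *   QM_SDQCR_CHANNELS_DEDICATED
+ */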
+#define QM_SDQCR_SOURCE_CHANNELS	0x0
+#define QM_SDQCR_SOURCE_SPECIFICWQ	0x40000000
+#define QM_SDQCR_COUNT_EXACT1		0x0
+#define QM_SDQCR_COUNT_UPTO3		0x20000000
+#define QM_SDQCR_DEDICATED_PRECEDENCE	0x10000000
+#define QM_SDQCR_TYPE_MASK		0x03000000
+#define QM_SDQCR_TYPE_NULL		0x0
+#define QM_SDQCR_TYPE_PRIO_QOS		0x01000000
+#define QM_SDQCR_TYPE_ACTIVE_QOS	0x02000000
+#define QM_SDQCR_TYPE_ACTIVE		0x03000000
+#define QM_SDQCR_TOKEN_MASK		0x00ff0000
+#define QM_SDQCR_TOKEN_SET(v)		(((v) & 0xff) << 16)
+#define QM_SDQCR_TOKEN_GET(v)		(((v) >> 16) & 0xff)
+#define QM_SDQCR_CHANNELS_DEDICATED	0x00008000
+#define QM_SDQCR_SPECIFICWQ_MASK	0x000000f7
+#define QM_SDQCR_SPECIFICWQ_DEDICATED	0x00000000
+#define QM_SDQCR_SPECIFICWQ_POOL(n)	((n) << 4)
+#define QM_SDQCR_SPECIFICWQ_WQ(n)	(n)
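+/* Illustrative composition (a hypothetical sketch): dequeue up to 3 frames
+ * of ACTIVE_QOS type, token 0xab, from the dedicated channel plus pool
+ * channel 2 (QM_SDQCR_CHANNELS_POOL() is defined in fsl_qman.h):
+ *   u32 sdqcr = QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |
+ *		 QM_SDQCR_TYPE_ACTIVE_QOS | QM_SDQCR_TOKEN_SET(0xab) |
+ *		 QM_SDQCR_CHANNELS_DEDICATED | QM_SDQCR_CHANNELS_POOL(2);
+ */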
+
+#define QM_VDQCR_FQID_MASK		0x00ffffff
+#define QM_VDQCR_FQID(n)		((n) & QM_VDQCR_FQID_MASK)
+
+#define QM_EQCR_VERB_VBIT		0x80
+#define QM_EQCR_VERB_CMD_MASK		0x61	/* but only one value; */
+#define QM_EQCR_VERB_CMD_ENQUEUE	0x01
+#define QM_EQCR_VERB_COLOUR_MASK	0x18	/* 4 possible values; */
+#define QM_EQCR_VERB_COLOUR_GREEN	0x00
+#define QM_EQCR_VERB_COLOUR_YELLOW	0x08
+#define QM_EQCR_VERB_COLOUR_RED		0x10
+#define QM_EQCR_VERB_COLOUR_OVERRIDE	0x18
+#define QM_EQCR_VERB_INTERRUPT		0x04	/* on command consumption */
+#define QM_EQCR_VERB_ORP		0x02	/* enable order restoration */
+#define QM_EQCR_DCA_ENABLE		0x80
+#define QM_EQCR_DCA_PARK		0x40
+#define QM_EQCR_DCA_IDXMASK		0x0f	/* "DQRR::idx" goes here */
+#define QM_EQCR_SEQNUM_NESN		0x8000	/* Advance NESN */
+#define QM_EQCR_SEQNUM_NLIS		0x4000	/* More fragments to come */
+#define QM_EQCR_SEQNUM_SEQMASK		0x3fff	/* sequence number goes here */
+#define QM_EQCR_FQID_NULL		0	/* eg. for an ORP seqnum hole */
+
+#define QM_MCC_VERB_VBIT		0x80
+#define QM_MCC_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_MCC_VERB_INITFQ_PARKED	0x40
+#define QM_MCC_VERB_INITFQ_SCHED	0x41
+#define QM_MCC_VERB_QUERYFQ		0x44
+#define QM_MCC_VERB_QUERYFQ_NP		0x45	/* "non-programmable" fields */
+#define QM_MCC_VERB_QUERYWQ		0x46
+#define QM_MCC_VERB_QUERYWQ_DEDICATED	0x47
+#define QM_MCC_VERB_ALTER_SCHED		0x48	/* Schedule FQ */
+#define QM_MCC_VERB_ALTER_FE		0x49	/* Force Eligible FQ */
+#define QM_MCC_VERB_ALTER_RETIRE	0x4a	/* Retire FQ */
+#define QM_MCC_VERB_ALTER_OOS		0x4b	/* Take FQ out of service */
+#define QM_MCC_VERB_ALTER_FQXON		0x4d	/* FQ XON */
+#define QM_MCC_VERB_ALTER_FQXOFF	0x4e	/* FQ XOFF */
+#define QM_MCC_VERB_INITCGR		0x50
+#define QM_MCC_VERB_MODIFYCGR		0x51
+#define QM_MCC_VERB_CGRTESTWRITE	0x52
+#define QM_MCC_VERB_QUERYCGR		0x58
+#define QM_MCC_VERB_QUERYCONGESTION	0x59
+
+/*
+ * Used by all portal interrupt registers except 'inhibit'
+ * Channels with frame availability
+ */
+#define QM_PIRQ_DQAVAIL	0x0000ffff
+
+/* The DQAVAIL interrupt fields break down into these bits; */
+#define QM_DQAVAIL_PORTAL	0x8000		/* Portal channel */
+#define QM_DQAVAIL_POOL(n)	(0x8000 >> (n))	/* Pool channel, n==[1..15] */
+#define QM_DQAVAIL_MASK		0xffff
+/* This mask contains all the "irqsource" bits visible to API users */
+#define QM_PIRQ_VISIBLE	(QM_PIRQ_SLOW | QM_PIRQ_DQRI)
+
+/* These are qm_<reg>_<verb>(). So for example, qm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define qm_isr_status_read(qm)		__qm_isr_read(qm, qm_isr_status)
+#define qm_isr_status_clear(qm, m)	__qm_isr_write(qm, qm_isr_status, m)
+#define qm_isr_enable_read(qm)		__qm_isr_read(qm, qm_isr_enable)
+#define qm_isr_enable_write(qm, v)	__qm_isr_write(qm, qm_isr_enable, v)
+#define qm_isr_disable_read(qm)		__qm_isr_read(qm, qm_isr_disable)
+#define qm_isr_disable_write(qm, v)	__qm_isr_write(qm, qm_isr_disable, v)
+/* TODO: unfortunate name-clash here, reword? */
+#define qm_isr_inhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 1)
+#define qm_isr_uninhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 0)
+
+#define QMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+
+#endif /* _QMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
new file mode 100644
index 0000000..784fe60
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -0,0 +1,1254 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_QMAN_H
+#define __FSL_QMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <dpaa_rbtree.h>
+
+/* Last updated for v00.800 of the BG */
+
+/* Hardware constants */
+#define QM_CHANNEL_SWPORTAL0 0
+#define QMAN_CHANNEL_POOL1 0x21
+#define QMAN_CHANNEL_CAAM 0x80
+#define QMAN_CHANNEL_PME 0xa0
+#define QMAN_CHANNEL_POOL1_REV3 0x401
+#define QMAN_CHANNEL_CAAM_REV3 0x840
+#define QMAN_CHANNEL_PME_REV3 0x860
+extern u16 qm_channel_pool1;
+extern u16 qm_channel_caam;
+extern u16 qm_channel_pme;
+enum qm_dc_portal {
+	qm_dc_portal_fman0 = 0,
+	qm_dc_portal_fman1 = 1,
+	qm_dc_portal_caam = 2,
+	qm_dc_portal_pme = 3
+};
+
+/* Portal processing (interrupt) sources */
+#define QM_PIRQ_CCSCI	0x00200000	/* CEETM Congestion State Change */
+#define QM_PIRQ_CSCI	0x00100000	/* Congestion State Change */
+#define QM_PIRQ_EQCI	0x00080000	/* Enqueue Command Committed */
+#define QM_PIRQ_EQRI	0x00040000	/* EQCR Ring (below threshold) */
+#define QM_PIRQ_DQRI	0x00020000	/* DQRR Ring (non-empty) */
+#define QM_PIRQ_MRI	0x00010000	/* MR Ring (non-empty) */
+/*
+ * This mask contains all the interrupt sources that need handling except DQRI,
+ * ie. that if present should trigger slow-path processing.
+ */
+#define QM_PIRQ_SLOW	(QM_PIRQ_CSCI | QM_PIRQ_EQCI | QM_PIRQ_EQRI | \
+			QM_PIRQ_MRI | QM_PIRQ_CCSCI)
+
+/* For qman_static_dequeue_*** APIs */
+#define QM_SDQCR_CHANNELS_POOL_MASK	0x00007fff
+/* for n in [1,15] */
+#define QM_SDQCR_CHANNELS_POOL(n)	(0x00008000 >> (n))
+/* for conversion from n of qm_channel */
+static inline u32 QM_SDQCR_CHANNELS_POOL_CONV(u16 channel)
+{
+	return QM_SDQCR_CHANNELS_POOL(channel + 1 - qm_channel_pool1);
+}
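+/* e.g. (illustrative) a channel of qm_channel_pool1 + 2 is pool channel 3,
+ * so the conversion above yields QM_SDQCR_CHANNELS_POOL(3).
+ */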
+
+/* For qman_volatile_dequeue(); Choose one PRECEDENCE. EXACT is optional. Use
+ * NUMFRAMES(n) (6-bit) or NUMFRAMES_TILLEMPTY to fill in the frame-count. Use
+ * FQID(n) to fill in the frame queue ID.
+ */
+#define QM_VDQCR_PRECEDENCE_VDQCR	0x0
+#define QM_VDQCR_PRECEDENCE_SDQCR	0x80000000
+#define QM_VDQCR_EXACT			0x40000000
+#define QM_VDQCR_NUMFRAMES_MASK		0x3f000000
+#define QM_VDQCR_NUMFRAMES_SET(n)	(((n) & 0x3f) << 24)
+#define QM_VDQCR_NUMFRAMES_GET(n)	(((n) >> 24) & 0x3f)
+#define QM_VDQCR_NUMFRAMES_TILLEMPTY	QM_VDQCR_NUMFRAMES_SET(0)
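+/* Illustrative composition (a hypothetical sketch): volatile-dequeue exactly
+ * 4 frames from FQID 0x123 (QM_VDQCR_FQID() is defined in qman_priv.h):
+ *   u32 vdqcr = QM_VDQCR_PRECEDENCE_VDQCR | QM_VDQCR_EXACT |
+ *		 QM_VDQCR_NUMFRAMES_SET(4) | QM_VDQCR_FQID(0x123);
+ */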
+
+/* --- QMan data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct qm_eqcr_entry;	/* EQCR (EnQueue Command Ring) entries */
+struct qm_dqrr_entry;	/* DQRR (DeQueue Response Ring) entries */
+struct qm_mr_entry;	/* MR (Message Ring) entries */
+struct qm_mc_command;	/* MC (Management Command) command */
+struct qm_mc_result;	/* MC result */
+
+#define QM_FD_FORMAT_SG		0x4
+#define QM_FD_FORMAT_LONG	0x2
+#define QM_FD_FORMAT_COMPOUND	0x1
+enum qm_fd_format {
+	/*
+	 * 'contig' implies a contiguous buffer, whereas 'sg' implies a
+	 * scatter-gather table. 'big' implies a 29-bit length with no offset
+	 * field, otherwise length is 20-bit and offset is 9-bit. 'compound'
+	 * implies a s/g-like table, where each entry itself represents a frame
+	 * (contiguous or scatter-gather) and the 29-bit "length" is
+	 * interpreted purely for congestion calculations, ie. a "congestion
+	 * weight".
+	 */
+	qm_fd_contig = 0,
+	qm_fd_contig_big = QM_FD_FORMAT_LONG,
+	qm_fd_sg = QM_FD_FORMAT_SG,
+	qm_fd_sg_big = QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG,
+	qm_fd_compound = QM_FD_FORMAT_COMPOUND
+};
+
+/* Capitalised versions are un-typed but can be used in static expressions */
+#define QM_FD_CONTIG	0
+#define QM_FD_CONTIG_BIG QM_FD_FORMAT_LONG
+#define QM_FD_SG	QM_FD_FORMAT_SG
+#define QM_FD_SG_BIG	(QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG)
+#define QM_FD_COMPOUND	QM_FD_FORMAT_COMPOUND
+
+/* "Frame Descriptor (FD)" */
+struct qm_fd {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 dd:2;	/* dynamic debug */
+			u8 liodn_offset:6;
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 eliodn_offset:4;
+			u8 __reserved:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u8 liodn_offset:6;
+			u8 dd:2;	/* dynamic debug */
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 __reserved:4;
+			u8 eliodn_offset:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#endif
+		};
+		struct {
+			u64 __notaddress:24;
+			/* More efficient address accessor */
+			u64 addr:40;
+		};
+		u64 opaque_addr;
+	};
+	/* The 'format' field indicates the interpretation of the remaining 29
+	 * bits of the 32-bit word. For packing reasons, it is duplicated in the
+	 * other union elements. Note, union'd structs are difficult to use with
+	 * static initialisation under gcc, in which case use the "opaque" form
+	 * with one of the macros.
+	 */
+	union {
+		/* For easier/faster copying of this part of the fd (eg. from a
+		 * DQRR entry to an EQCR entry) copy 'opaque'
+		 */
+		u32 opaque;
+		/* If 'format' is _contig or _sg, 20b length and 9b offset */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format format:3;
+			u16 offset:9;
+			u32 length20:20;
+#else
+			u32 length20:20;
+			u16 offset:9;
+			enum qm_fd_format format:3;
+#endif
+		};
+		/* If 'format' is _contig_big or _sg_big, 29b length */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format1:3;
+			u32 length29:29;
+#else
+			u32 length29:29;
+			enum qm_fd_format _format1:3;
+#endif
+		};
+		/* If 'format' is _compound, 29b "congestion weight" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format2:3;
+			u32 cong_weight:29;
+#else
+			u32 cong_weight:29;
+			enum qm_fd_format _format2:3;
+#endif
+		};
+	};
+	union {
+		u32 cmd;
+		u32 status;
+	};
+} __attribute__((aligned(8)));
+#define QM_FD_DD_NULL		0x00
+#define QM_FD_PID_MASK		0x3f
+static inline u64 qm_fd_addr_get64(const struct qm_fd *fd)
+{
+	return fd->addr;
+}
+
+static inline dma_addr_t qm_fd_addr(const struct qm_fd *fd)
+{
+	return (dma_addr_t)fd->addr;
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_fd_addr_set64(fd, v) \
+	do { \
+		struct qm_fd *__fd931 = (fd); \
+		__fd931->addr = v; \
+	} while (0)
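+/* Illustrative usage (a hypothetical sketch; 'buf_phys', 'headroom' and
+ * 'pkt_len' are placeholder variables): populate a contiguous FD at run
+ * time, avoiding the static-initialisation caveat noted above:
+ *   struct qm_fd fd;
+ *   memset(&fd, 0, sizeof(fd));
+ *   qm_fd_addr_set64(&fd, buf_phys);
+ *   fd.format = qm_fd_contig;
+ *   fd.offset = headroom;
+ *   fd.length20 = pkt_len;
+ */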
+
+/* Scatter/Gather table entry */
+struct qm_sg_entry {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1[3];
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[3];
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:24;
+			u64 addr:40;
+#else
+			u64 addr:40;
+			u64 __notaddress:24;
+#endif
+		};
+		u64 opaque;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 extension:1;	/* Extension bit */
+			u32 final:1;		/* Final bit */
+			u32 length:30;
+#else
+			u32 length:30;
+			u32 final:1;		/* Final bit */
+			u32 extension:1;	/* Extension bit */
+#endif
+		};
+		u32 val;
+	};
+	u8 __reserved2;
+	u8 bpid;
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved3:3;
+			u16 offset:13;
+#else
+			u16 offset:13;
+			u16 __reserved3:3;
+#endif
+		};
+		u16 val_off;
+	};
+} __packed;
+static inline u64 qm_sg_entry_get64(const struct qm_sg_entry *sg)
+{
+	return sg->addr;
+}
+
+static inline dma_addr_t qm_sg_addr(const struct qm_sg_entry *sg)
+{
+	return (dma_addr_t)sg->addr;
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_sg_entry_set64(sg, v) \
+	do { \
+		struct qm_sg_entry *__sg931 = (sg); \
+		__sg931->addr = v; \
+	} while (0)
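+/* Illustrative usage (a hypothetical sketch; 'buf_phys' and 'len' are
+ * placeholders): fill a single, final S/G entry:
+ *   struct qm_sg_entry sge;
+ *   memset(&sge, 0, sizeof(sge));
+ *   qm_sg_entry_set64(&sge, buf_phys);
+ *   sge.length = len;
+ *   sge.final = 1;
+ */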
+
+/* See 1.5.8.1: "Enqueue Command" */
+struct qm_eqcr_entry {
+	u8 __dont_write_directly__verb;
+	u8 dca;
+	u16 seqnum;
+	u32 orp;	/* 24-bit */
+	u32 fqid;	/* 24-bit */
+	u32 tag;
+	struct qm_fd fd;
+	u8 __reserved3[32];
+} __packed;
+
+
+/* "Frame Dequeue Response" */
+struct qm_dqrr_entry {
+	u8 verb;
+	u8 stat;
+	u16 seqnum;	/* 15-bit */
+	u8 tok;
+	u8 __reserved2[3];
+	u32 fqid;	/* 24-bit */
+	u32 contextB;
+	struct qm_fd fd;
+	u8 __reserved4[32];
+};
+
+#define QM_DQRR_VERB_VBIT		0x80
+#define QM_DQRR_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_DQRR_VERB_FRAME_DEQUEUE	0x60	/* "this format" */
+#define QM_DQRR_STAT_FQ_EMPTY		0x80	/* FQ empty */
+#define QM_DQRR_STAT_FQ_HELDACTIVE	0x40	/* FQ held active */
+#define QM_DQRR_STAT_FQ_FORCEELIGIBLE	0x20	/* FQ was force-eligible'd */
+#define QM_DQRR_STAT_FD_VALID		0x10	/* has a non-NULL FD */
+#define QM_DQRR_STAT_UNSCHEDULED	0x02	/* Unscheduled dequeue */
+#define QM_DQRR_STAT_DQCR_EXPIRED	0x01	/* VDQCR or PDQCR expired*/
+
+
+/* "ERN Message Response" */
+/* "FQ State Change Notification" */
+struct qm_mr_entry {
+	u8 verb;
+	union {
+		struct {
+			u8 dca;
+			u16 seqnum;
+			u8 rc;		/* Rejection Code */
+			u32 orp:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed ern;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+			u8 __reserved1:4;
+			enum qm_dc_portal portal:2;
+#else
+			enum qm_dc_portal portal:2;
+			u8 __reserved1:4;
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+#endif
+			u16 __reserved2;
+			u8 rc;		/* Rejection Code */
+			u32 __reserved3:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed dcern;
+		struct {
+			u8 fqs;		/* Frame Queue Status */
+			u8 __reserved1[6];
+			u32 fqid;	/* 24-bit */
+			u32 contextB;
+			u8 __reserved2[16];
+		} __packed fq;		/* FQRN/FQRNI/FQRL/FQPN */
+	};
+	u8 __reserved2[32];
+} __packed;
+#define QM_MR_VERB_VBIT			0x80
+/*
+ * ERNs originating from direct-connect portals ("dcern") use 0x20 as a verb
+ * which would be invalid as a s/w enqueue verb. A s/w ERN can be distinguished
+ * from the other MR types by noting if the 0x20 bit is unset.
+ */
+#define QM_MR_VERB_TYPE_MASK		0x27
+#define QM_MR_VERB_DC_ERN		0x20
+#define QM_MR_VERB_FQRN			0x21
+#define QM_MR_VERB_FQRNI		0x22
+#define QM_MR_VERB_FQRL			0x23
+#define QM_MR_VERB_FQPN			0x24
+#define QM_MR_RC_MASK			0xf0	/* contains one of; */
+#define QM_MR_RC_CGR_TAILDROP		0x00
+#define QM_MR_RC_WRED			0x10
+#define QM_MR_RC_ERROR			0x20
+#define QM_MR_RC_ORPWINDOW_EARLY	0x30
+#define QM_MR_RC_ORPWINDOW_LATE		0x40
+#define QM_MR_RC_FQ_TAILDROP		0x50
+#define QM_MR_RC_ORPWINDOW_RETIRED	0x60
+#define QM_MR_RC_ORP_ZERO		0x70
+#define QM_MR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+#define QM_MR_DCERN_COLOUR_GREEN	0x00
+#define QM_MR_DCERN_COLOUR_YELLOW	0x01
+#define QM_MR_DCERN_COLOUR_RED		0x02
+#define QM_MR_DCERN_COLOUR_OVERRIDE	0x03
+/*
+ * An identical structure of FQD fields is present in the "Init FQ" command and
+ * the "Query FQ" result, so it's factored out into the "struct qm_fqd" type.
+ * Within that, the 'stashing' and 'taildrop' pieces are also factored out; the
+ * latter has two inlines to assist with converting to/from the mant+exp
+ * representation.
+ */
+struct qm_fqd_stashing {
+	/* See QM_STASHING_EXCL_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 exclusive;
+	u8 __reserved1:2;
+	/* Numbers of cachelines */
+	u8 annotation_cl:2;
+	u8 data_cl:2;
+	u8 context_cl:2;
+#else
+	u8 context_cl:2;
+	u8 data_cl:2;
+	u8 annotation_cl:2;
+	u8 __reserved1:2;
+	u8 exclusive;
+#endif
+} __packed;
+struct qm_fqd_taildrop {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved1:3;
+	u16 mant:8;
+	u16 exp:5;
+#else
+	u16 exp:5;
+	u16 mant:8;
+	u16 __reserved1:3;
+#endif
+} __packed;
+struct qm_fqd_oac {
+	/* "Overhead Accounting Control", see QM_OAC_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 oac:2; /* "Overhead Accounting Control" */
+	u8 __reserved1:6;
+#else
+	u8 __reserved1:6;
+	u8 oac:2; /* "Overhead Accounting Control" */
+#endif
+	/* Two's-complement value (-128 to +127) */
+	signed char oal; /* "Overhead Accounting Length" */
+} __packed;
+struct qm_fqd {
+	union {
+		u8 orpc;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1:2;
+			u8 orprws:3;
+			u8 oa:1;
+			u8 olws:2;
+#else
+			u8 olws:2;
+			u8 oa:1;
+			u8 orprws:3;
+			u8 __reserved1:2;
+#endif
+		} __packed;
+	};
+	u8 cgid;
+	u16 fq_ctrl;	/* See QM_FQCTRL_<...> */
+	union {
+		u16 dest_wq;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 channel:13; /* qm_channel */
+			u16 wq:3;
+#else
+			u16 wq:3;
+			u16 channel:13; /* qm_channel */
+#endif
+		} __packed dest;
+	};
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved2:1;
+	u16 ics_cred:15;
+#else
+	u16 ics_cred:15;
+	u16 __reserved2:1;
+#endif
+	/*
+	 * For "Initialize Frame Queue" commands, the write-enable mask
+	 * determines whether 'td' or 'oac_init' is observed. For query
+	 * commands, this field is always 'td', and 'oac_query' (below) reflects
+	 * the Overhead ACcounting values.
+	 */
+	union {
+		uint16_t opaque_td;
+		struct qm_fqd_taildrop td;
+		struct qm_fqd_oac oac_init;
+	};
+	u32 context_b;
+	union {
+		/* Treat it as 64-bit opaque */
+		u64 opaque;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 hi;
+			u32 lo;
+#else
+			u32 lo;
+			u32 hi;
+#endif
+		};
+		/* Treat it as s/w portal stashing config */
+		/* see "FQD Context_A field used for [...]" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			struct qm_fqd_stashing stashing;
+			/*
+			 * 48-bit address of FQ context to
+			 * stash, must be cacheline-aligned
+			 */
+			u16 context_hi;
+			u32 context_lo;
+#else
+			u32 context_lo;
+			u16 context_hi;
+			struct qm_fqd_stashing stashing;
+#endif
+		} __packed;
+	} context_a;
+	struct qm_fqd_oac oac_query;
+} __packed;
+/* 64-bit converters for context_hi/lo */
+static inline u64 qm_fqd_stashing_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.context_hi << 32) |
+		(u64)fqd->context_a.context_lo;
+}
+
+static inline dma_addr_t qm_fqd_stashing_addr(const struct qm_fqd *fqd)
+{
+	return (dma_addr_t)qm_fqd_stashing_get64(fqd);
+}
+
+static inline u64 qm_fqd_context_a_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.hi << 32) |
+		(u64)fqd->context_a.lo;
+}
+
+static inline void qm_fqd_stashing_set64(struct qm_fqd *fqd, u64 addr)
+{
+		fqd->context_a.context_hi = upper_32_bits(addr);
+		fqd->context_a.context_lo = lower_32_bits(addr);
+}
+
+static inline void qm_fqd_context_a_set64(struct qm_fqd *fqd, u64 addr)
+{
+	fqd->context_a.hi = upper_32_bits(addr);
+	fqd->context_a.lo = lower_32_bits(addr);
+}
+
+/* convert a threshold value into mant+exp representation */
+static inline int qm_fqd_taildrop_set(struct qm_fqd_taildrop *td, u32 val,
+				      int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	if (val > 0xe0000000)
+		return -ERANGE;
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	td->exp = e;
+	td->mant = val;
+	return 0;
+}
+
+/* and the other direction */
+static inline u32 qm_fqd_taildrop_get(const struct qm_fqd_taildrop *td)
+{
+	return (u32)td->mant << td->exp;
+}
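+/* Worked example (illustrative): qm_fqd_taildrop_set(&td, 0x300, 0) stores
+ * mant=0xc0 and exp=2, so qm_fqd_taildrop_get() returns 0xc0 << 2 == 0x300.
+ */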
+
+
+/* See "Frame Queue Descriptor (FQD)" */
+/* Frame Queue Descriptor (FQD) field 'fq_ctrl' uses these constants */
+#define QM_FQCTRL_MASK		0x07ff	/* 'fq_ctrl' flags; */
+#define QM_FQCTRL_CGE		0x0400	/* Congestion Group Enable */
+#define QM_FQCTRL_TDE		0x0200	/* Tail-Drop Enable */
+#define QM_FQCTRL_ORP		0x0100	/* ORP Enable */
+#define QM_FQCTRL_CTXASTASHING	0x0080	/* Context-A stashing */
+#define QM_FQCTRL_CPCSTASH	0x0040	/* CPC Stash Enable */
+#define QM_FQCTRL_FORCESFDR	0x0008	/* High-priority SFDRs */
+#define QM_FQCTRL_AVOIDBLOCK	0x0004	/* Don't block active */
+#define QM_FQCTRL_HOLDACTIVE	0x0002	/* Hold active in portal */
+#define QM_FQCTRL_PREFERINCACHE	0x0001	/* Aggressively cache FQD */
+#define QM_FQCTRL_LOCKINCACHE	QM_FQCTRL_PREFERINCACHE /* older naming */
+
+/* See "FQD Context_A field used for [...] */
+/* Frame Queue Descriptor (FQD) field 'CONTEXT_A' uses these constants */
+#define QM_STASHING_EXCL_ANNOTATION	0x04
+#define QM_STASHING_EXCL_DATA		0x02
+#define QM_STASHING_EXCL_CTX		0x01
+
+/* See "Intra Class Scheduling" */
+/* FQD field 'OAC' (Overhead ACcounting) uses these constants */
+#define QM_OAC_ICS		0x2 /* Accounting for Intra-Class Scheduling */
+#define QM_OAC_CG		0x1 /* Accounting for Congestion Groups */
+
+/*
+ * This struct represents the 32-bit "WR_PARM_[GYR]" parameters in CGR fields
+ * and associated commands/responses. The WRED parameters are calculated from
+ * these fields as follows;
+ *   MaxTH = MA * (2 ^ Mn)
+ *   Slope = SA / (2 ^ Sn)
+ *    MaxP = 4 * (Pn + 1)
+ */
+struct qm_cgr_wr_parm {
+	union {
+		u32 word;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 MA:8;
+			u32 Mn:5;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Sn:6;
+			u32 Pn:6;
+#else
+			u32 Pn:6;
+			u32 Sn:6;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Mn:5;
+			u32 MA:8;
+#endif
+		} __packed;
+	};
+} __packed;
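+/* e.g. (illustrative) MA=64, Mn=4 gives MaxTH = 64 * (2 ^ 4) = 1024, and
+ * Pn=24 gives MaxP = 4 * (24 + 1) = 100.
+ */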
+/*
+ * This struct represents the 13-bit "CS_THRES" CGR field. In the corresponding
+ * management commands, this is padded to a 16-bit structure field, so that's
+ * how we represent it here. The congestion state threshold is calculated from
+ * these fields as follows;
+ *   CS threshold = TA * (2 ^ Tn)
+ */
+struct qm_cgr_cs_thres {
+	union {
+		u16 hword;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved:3;
+			u16 TA:8;
+			u16 Tn:5;
+#else
+			u16 Tn:5;
+			u16 TA:8;
+			u16 __reserved:3;
+#endif
+		} __packed;
+	};
+} __packed;
+/*
+ * This identical structure of CGR fields is present in the "Init/Modify CGR"
+ * commands and the "Query CGR" result. It's factored out here into its own
+ * struct.
+ */
+struct __qm_mc_cgr {
+	struct qm_cgr_wr_parm wr_parm_g;
+	struct qm_cgr_wr_parm wr_parm_y;
+	struct qm_cgr_wr_parm wr_parm_r;
+	u8 wr_en_g;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_y;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_r;	/* boolean, use QM_CGR_EN */
+	u8 cscn_en;	/* boolean, use QM_CGR_EN */
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 cscn_targ_upd_ctrl; /* use QM_CGR_TARG_UDP_CTRL_ */
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+#else
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+			u16 cscn_targ_upd_ctrl; /* use QM_CGR_TARG_UDP_CTRL_ */
+#endif
+		};
+		u32 cscn_targ;	/* use QM_CGR_TARG_* */
+	};
+	u8 cstd_en;	/* boolean, use QM_CGR_EN */
+	u8 cs;		/* boolean, only used in query response */
+	union {
+		struct qm_cgr_cs_thres cs_thres;
+		/* use qm_cgr_cs_thres_set64() */
+		u16 __cs_thres;
+	};
+	u8 mode;	/* QMAN_CGR_MODE_FRAME not supported in rev1.0 */
+} __packed;
+#define QM_CGR_EN		0x01 /* For wr_en_*, cscn_en, cstd_en */
+#define QM_CGR_TARG_UDP_CTRL_WRITE_BIT	0x8000 /* value written to portal bit*/
+#define QM_CGR_TARG_UDP_CTRL_DCP	0x4000 /* 0: SWP, 1: DCP */
+#define QM_CGR_TARG_PORTAL(n)	(0x80000000 >> (n)) /* s/w portal, 0-9 */
+#define QM_CGR_TARG_FMAN0	0x00200000 /* direct-connect portal: fman0 */
+#define QM_CGR_TARG_FMAN1	0x00100000 /*			   : fman1 */
+/* Convert CGR thresholds to/from "cs_thres" format */
+static inline u64 qm_cgr_cs_thres_get64(const struct qm_cgr_cs_thres *th)
+{
+	return (u64)th->TA << th->Tn;
+}
+
+static inline int qm_cgr_cs_thres_set64(struct qm_cgr_cs_thres *th, u64 val,
+					int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	th->Tn = e;
+	th->TA = val;
+	return 0;
+}
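+/* Worked example (illustrative): qm_cgr_cs_thres_set64(&th, 65536, 0) stores
+ * TA=128 and Tn=9, so qm_cgr_cs_thres_get64() returns 128 << 9 == 65536.
+ */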
+
+/* See 1.5.8.5.1: "Initialize FQ" */
+/* See 1.5.8.5.2: "Query FQ" */
+/* See 1.5.8.5.3: "Query FQ Non-Programmable Fields" */
+/* See 1.5.8.5.4: "Alter FQ State Commands" */
+/* See 1.5.8.6.1: "Initialize/Modify CGR" */
+/* See 1.5.8.6.2: "CGR Test Write" */
+/* See 1.5.8.6.3: "Query CGR" */
+/* See 1.5.8.6.4: "Query Congestion Group State" */
+struct qm_mcc_initfq {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	u32 fqid;	/* 24-bit */
+	u16 count;	/* Initialises 'count+1' FQDs */
+	struct qm_fqd fqd; /* the FQD fields go here */
+	u8 __reserved3[30];
+} __packed;
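+/* Illustrative INITFQ setup (a hypothetical sketch): to program tail-drop at
+ * init time, set the relevant write-enable bits (defined further down) and
+ * the mant+exp threshold:
+ *   struct qm_mcc_initfq opts;
+ *   memset(&opts, 0, sizeof(opts));
+ *   opts.we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_TDTHRESH;
+ *   opts.fqd.fq_ctrl = QM_FQCTRL_TDE;
+ *   qm_fqd_taildrop_set(&opts.fqd.td, 0x300, 0);
+ */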
+struct qm_mcc_queryfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_queryfq_np {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_alterfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2;
+	u8 count;	/* number of consecutive FQID */
+	u8 __reserved3[10];
+	u32 context_b;	/* frame queue context b */
+	u8 __reserved4[40];
+} __packed;
+struct qm_mcc_initcgr {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	struct __qm_mc_cgr cgr;	/* CGR fields */
+	u8 __reserved2[2];
+	u8 cgid;
+	u8 __reserved4[32];
+} __packed;
+struct qm_mcc_cgrtestwrite {
+	u8 __reserved1[2];
+	u8 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u8 __reserved2[23];
+	u8 cgid;
+	u8 __reserved3[32];
+} __packed;
+struct qm_mcc_querycgr {
+	u8 __reserved1[30];
+	u8 cgid;
+	u8 __reserved2[32];
+} __packed;
+struct qm_mcc_querycongestion {
+	u8 __reserved[63];
+} __packed;
+struct qm_mcc_querywq {
+	u8 __reserved;
+	/* select channel if verb != QUERYWQ_DEDICATED */
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved1:3;
+#else
+			u16 __reserved1:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved2[60];
+} __packed;
+
+struct qm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct qm_mcc_initfq initfq;
+		struct qm_mcc_queryfq queryfq;
+		struct qm_mcc_queryfq_np queryfq_np;
+		struct qm_mcc_alterfq alterfq;
+		struct qm_mcc_initcgr initcgr;
+		struct qm_mcc_cgrtestwrite cgrtestwrite;
+		struct qm_mcc_querycgr querycgr;
+		struct qm_mcc_querycongestion querycongestion;
+		struct qm_mcc_querywq querywq;
+	};
+} __packed;
+
+/* INITFQ-specific flags */
+#define QM_INITFQ_WE_MASK		0x01ff	/* 'Write Enable' flags; */
+#define QM_INITFQ_WE_OAC		0x0100
+#define QM_INITFQ_WE_ORPC		0x0080
+#define QM_INITFQ_WE_CGID		0x0040
+#define QM_INITFQ_WE_FQCTRL		0x0020
+#define QM_INITFQ_WE_DESTWQ		0x0010
+#define QM_INITFQ_WE_ICSCRED		0x0008
+#define QM_INITFQ_WE_TDTHRESH		0x0004
+#define QM_INITFQ_WE_CONTEXTB		0x0002
+#define QM_INITFQ_WE_CONTEXTA		0x0001
+/* INITCGR/MODIFYCGR-specific flags */
+#define QM_CGR_WE_MASK			0x07ff	/* 'Write Enable Mask'; */
+#define QM_CGR_WE_WR_PARM_G		0x0400
+#define QM_CGR_WE_WR_PARM_Y		0x0200
+#define QM_CGR_WE_WR_PARM_R		0x0100
+#define QM_CGR_WE_WR_EN_G		0x0080
+#define QM_CGR_WE_WR_EN_Y		0x0040
+#define QM_CGR_WE_WR_EN_R		0x0020
+#define QM_CGR_WE_CSCN_EN		0x0010
+#define QM_CGR_WE_CSCN_TARG		0x0008
+#define QM_CGR_WE_CSTD_EN		0x0004
+#define QM_CGR_WE_CS_THRES		0x0002
+#define QM_CGR_WE_MODE			0x0001
+
+struct qm_mcr_initfq {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_queryfq {
+	u8 __reserved1[8];
+	struct qm_fqd fqd;	/* the FQD fields are here */
+	u8 __reserved2[30];
+} __packed;
+struct qm_mcr_queryfq_np {
+	u8 __reserved1;
+	u8 state;	/* QM_MCR_NP_STATE_*** */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 __reserved2;
+	u32 fqd_link:24;
+	u16 __reserved3:2;
+	u16 odp_seq:14;
+	u16 __reserved4:2;
+	u16 orp_nesn:14;
+	u16 __reserved5:1;
+	u16 orp_ea_hseq:15;
+	u16 __reserved6:1;
+	u16 orp_ea_tseq:15;
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+	u8 __reserved11[5];
+	u8 __reserved12:7;
+	u8 is:1;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#else
+	u8 __reserved2;
+	u32 fqd_link:24;
+
+	u16 odp_seq:14;
+	u16 __reserved3:2;
+
+	u16 orp_nesn:14;
+	u16 __reserved4:2;
+
+	u16 orp_ea_hseq:15;
+	u16 __reserved5:1;
+
+	u16 orp_ea_tseq:15;
+	u16 __reserved6:1;
+
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+
+	u8 __reserved11[5];
+	u8 is:1;
+	u8 __reserved12:7;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#endif
+} __packed;
+
+struct qm_mcr_alterfq {
+	u8 fqs;		/* Frame Queue Status */
+	u8 __reserved1[61];
+} __packed;
+struct qm_mcr_initcgr {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_cgrtestwrite {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	u32 __reserved3:24;
+	u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u32 __reserved4:24;
+	u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+	u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+	u16 lgt;	/* Last Group Tick */
+	u16 wr_prob_g;
+	u16 wr_prob_y;
+	u16 wr_prob_r;
+	u8 __reserved5[8];
+} __packed;
+struct qm_mcr_querycgr {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved3:24;
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 __reserved3:24;
+#endif
+		};
+		u64 i_bcnt;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved4:24;
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 __reserved4:24;
+#endif
+		};
+		u64 a_bcnt;
+	};
+	union {
+		u32 cscn_targ_swp[4];
+		u8 __reserved5[16];
+	};
+} __packed;
+
+struct __qm_mcr_querycongestion {
+	u32 state[8];
+};
+
+struct qm_mcr_querycongestion {
+	u8 __reserved[30];
+	/* Access this struct using QM_MCR_QUERYCONGESTION() */
+	struct __qm_mcr_querycongestion state;
+} __packed;
+struct qm_mcr_querywq {
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved:3;
+#else
+			u16 __reserved:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved[28];
+	u32 wq_len[8];
+} __packed;
+
+struct qm_mc_result {
+	u8 verb;
+	u8 result;
+	union {
+		struct qm_mcr_initfq initfq;
+		struct qm_mcr_queryfq queryfq;
+		struct qm_mcr_queryfq_np queryfq_np;
+		struct qm_mcr_alterfq alterfq;
+		struct qm_mcr_initcgr initcgr;
+		struct qm_mcr_cgrtestwrite cgrtestwrite;
+		struct qm_mcr_querycgr querycgr;
+		struct qm_mcr_querycongestion querycongestion;
+		struct qm_mcr_querywq querywq;
+	};
+} __packed;
+
+#define QM_MCR_VERB_RRID		0x80
+#define QM_MCR_VERB_MASK		QM_MCC_VERB_MASK
+#define QM_MCR_VERB_INITFQ_PARKED	QM_MCC_VERB_INITFQ_PARKED
+#define QM_MCR_VERB_INITFQ_SCHED	QM_MCC_VERB_INITFQ_SCHED
+#define QM_MCR_VERB_QUERYFQ		QM_MCC_VERB_QUERYFQ
+#define QM_MCR_VERB_QUERYFQ_NP		QM_MCC_VERB_QUERYFQ_NP
+#define QM_MCR_VERB_QUERYWQ		QM_MCC_VERB_QUERYWQ
+#define QM_MCR_VERB_QUERYWQ_DEDICATED	QM_MCC_VERB_QUERYWQ_DEDICATED
+#define QM_MCR_VERB_ALTER_SCHED		QM_MCC_VERB_ALTER_SCHED
+#define QM_MCR_VERB_ALTER_FE		QM_MCC_VERB_ALTER_FE
+#define QM_MCR_VERB_ALTER_RETIRE	QM_MCC_VERB_ALTER_RETIRE
+#define QM_MCR_VERB_ALTER_OOS		QM_MCC_VERB_ALTER_OOS
+#define QM_MCR_RESULT_NULL		0x00
+#define QM_MCR_RESULT_OK		0xf0
+#define QM_MCR_RESULT_ERR_FQID		0xf1
+#define QM_MCR_RESULT_ERR_FQSTATE	0xf2
+#define QM_MCR_RESULT_ERR_NOTEMPTY	0xf3	/* OOS fails if FQ is !empty */
+#define QM_MCR_RESULT_ERR_BADCHANNEL	0xf4
+#define QM_MCR_RESULT_PENDING		0xf8
+#define QM_MCR_RESULT_ERR_BADCOMMAND	0xff
+#define QM_MCR_NP_STATE_FE		0x10
+#define QM_MCR_NP_STATE_R		0x08
+#define QM_MCR_NP_STATE_MASK		0x07	/* Reads FQD::STATE; */
+#define QM_MCR_NP_STATE_OOS		0x00
+#define QM_MCR_NP_STATE_RETIRED		0x01
+#define QM_MCR_NP_STATE_TEN_SCHED	0x02
+#define QM_MCR_NP_STATE_TRU_SCHED	0x03
+#define QM_MCR_NP_STATE_PARKED		0x04
+#define QM_MCR_NP_STATE_ACTIVE		0x05
+#define QM_MCR_NP_PTR_MASK		0x07ff	/* for RA[12] & OD[123] */
+#define QM_MCR_NP_RA1_NRA(v)		(((v) >> 14) & 0x3)	/* FQD::NRA */
+#define QM_MCR_NP_RA2_IT(v)		(((v) >> 14) & 0x1)	/* FQD::IT */
+#define QM_MCR_NP_OD1_NOD(v)		(((v) >> 14) & 0x3)	/* FQD::NOD */
+#define QM_MCR_NP_OD3_NPC(v)		(((v) >> 14) & 0x3)	/* FQD::NPC */
+#define QM_MCR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MCR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+/* This extracts the state for congestion group 'n' from a query response.
+ * Eg.
+ *   u8 cgr = [...];
+ *   struct qm_mc_result *res = [...];
+ *   printf("congestion group %d congestion state: %d\n", cgr,
+ *       QM_MCR_QUERYCONGESTION(&res->querycongestion.state, cgr));
+ */
+#define __CGR_WORD(num)		((num) >> 5)
+#define __CGR_SHIFT(num)	((num) & 0x1f)
+#define __CGR_NUM		(sizeof(struct __qm_mcr_querycongestion) << 3)
+static inline int QM_MCR_QUERYCONGESTION(struct __qm_mcr_querycongestion *p,
+					 u8 cgr)
+{
+	return p->state[__CGR_WORD(cgr)] & (0x80000000 >> __CGR_SHIFT(cgr));
+}
+
+	/* Portal and Frame Queues */
+/* Represents a managed portal */
+struct qman_portal;
+
+/*
+ * This object type represents QMan frame queue descriptors (FQD), it is
+ * cacheline-aligned, and initialised by qman_create_fq(). The structure is
+ * defined further down.
+ */
+struct qman_fq;
+
+/*
+ * This object type represents a QMan congestion group, it is defined further
+ * down.
+ */
+struct qman_cgr;
+
+/*
+ * This enum, and the callback type that returns it, are used when handling
+ * dequeued frames via DQRR. Note that for "null" callbacks registered with the
+ * portal object (for handling dequeues that do not demux because context_b is
+ * NULL), the return value *MUST* be qman_cb_dqrr_consume.
+ */
+enum qman_cb_dqrr_result {
+	/* DQRR entry can be consumed */
+	qman_cb_dqrr_consume,
+	/* Like _consume, but requests parking - FQ must be held-active */
+	qman_cb_dqrr_park,
+	/* Does not consume, for DCA mode only. This allows out-of-order
+	 * consumes by explicit calls to qman_dca() and/or the use of implicit
+	 * DCA via EQCR entries.
+	 */
+	qman_cb_dqrr_defer,
+	/*
+	 * Stop processing without consuming this ring entry. Exits the current
+	 * qman_p_poll_dqrr() or interrupt-handling, as appropriate. If within
+	 * an interrupt handler, the callback would typically call
+	 * qman_irqsource_remove(QM_PIRQ_DQRI) before returning this value,
+	 * otherwise the interrupt will reassert immediately.
+	 */
+	qman_cb_dqrr_stop,
+	/* Like qman_cb_dqrr_stop, but consumes the current entry. */
+	qman_cb_dqrr_consume_stop
+};
+
+typedef enum qman_cb_dqrr_result (*qman_cb_dqrr)(struct qman_portal *qm,
+					struct qman_fq *fq,
+					const struct qm_dqrr_entry *dqrr);
+
+/*
+ * This callback type is used when handling ERNs, FQRNs and FQRLs via MR. They
+ * are always consumed after the callback returns.
+ */
+typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
+				const struct qm_mr_entry *msg);
+
+/* This callback type is used when handling DCP ERNs */
+typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
+				const struct qm_mr_entry *msg);
+/*
+ * s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
+ * held-active + held-suspended are just "sched". Things like "retired" will not
+ * be assumed until it is complete (ie. QMAN_FQ_STATE_CHANGING is set until
+ * then, to indicate it's completing and to gate attempts to retry the retire
+ * command). Note, park commands do not set QMAN_FQ_STATE_CHANGING because it's
+ * technically impossible in the case of enqueue DCAs (which refer to DQRR ring
+ * index rather than the FQ that ring entry corresponds to), so repeated park
+ * commands are allowed (if you're silly enough to try) but won't change FQ
+ * state, and the resulting park notifications move FQs from "sched" to
+ * "parked".
+ */
+enum qman_fq_state {
+	qman_fq_state_oos,
+	qman_fq_state_parked,
+	qman_fq_state_sched,
+	qman_fq_state_retired
+};
+
+
+/*
+ * Frame queue objects (struct qman_fq) are stored within memory passed to
+ * qman_create_fq(), as this allows stashing of caller-provided demux callback
+ * pointers at no extra cost to stashing of (driver-internal) FQ state. If the
+ * caller wishes to add per-FQ state and have it benefit from dequeue-stashing,
+ * they should;
+ *
+ * (a) extend the qman_fq structure with their state; eg.
+ *
+ *     // myfq is allocated and driver_fq callbacks filled in;
+ *     struct my_fq {
+ *	   struct qman_fq base;
+ *	   int an_extra_field;
+ *	   [ ... add other fields to be associated with each FQ ...]
+ *     } *myfq = some_my_fq_allocator();
+ *     struct qman_fq *fq = qman_create_fq(fqid, flags, &myfq->base);
+ *
+ *     // in a dequeue callback, access extra fields from 'fq' via a cast;
+ *     struct my_fq *myfq = (struct my_fq *)fq;
+ *     do_something_with(myfq->an_extra_field);
+ *     [...]
+ *
+ * (b) when and if configuring the FQ for context stashing, specify how ever
+ *     many cachelines are required to stash 'struct my_fq', to accelerate not
+ *     only the QMan driver but the callback as well.
+ */
+
+struct qman_fq_cb {
+	qman_cb_dqrr dqrr;	/* for dequeued frames */
+	qman_cb_mr ern;		/* for s/w ERNs */
+	qman_cb_mr fqs;		/* frame-queue state changes*/
+};
+
+struct qman_fq {
+	/* Caller of qman_create_fq() provides these demux callbacks */
+	struct qman_fq_cb cb;
+	/*
+	 * These are internal to the driver, don't touch. In particular, they
+	 * may change, be removed, or extended (so you shouldn't rely on
+	 * sizeof(qman_fq) being a constant).
+	 */
+	spinlock_t fqlock;
+	u32 fqid;
+	/* DPDK Interface */
+	void *dpaa_intf;
+
+	volatile unsigned long flags;
+	enum qman_fq_state state;
+	int cgr_groupid;
+	struct rb_node node;
+};
+
+/*
+ * This callback type is used when handling congestion group entry/exit.
+ * 'congested' is non-zero on congestion-entry, and zero on congestion-exit.
+ */
+typedef void (*qman_cb_cgr)(struct qman_portal *qm,
+			    struct qman_cgr *cgr, int congested);
+
+struct qman_cgr {
+	/* Set these prior to qman_create_cgr() */
+	u32 cgrid; /* 0..255, but u32 to allow specials like -1, 256, etc.*/
+	qman_cb_cgr cb;
+	/* These are private to the driver */
+	u16 chan; /* portal channel this object is created on */
+	struct list_head node;
+};
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_QMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 4ff48c6..b0d953f 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -47,6 +47,10 @@
 extern "C" {
 #endif
 
+/* Thread-entry/exit hooks; */
+int qman_thread_init(void);
+int qman_thread_finish(void);
+
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
 /* Obtain and free raw (unitialized) portals */
@@ -81,6 +85,15 @@ int qman_free_raw_portal(struct dpaa_raw_portal *portal);
 int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
+/* Post-process interrupts. NB, the kernel IRQ handler disables the interrupt
+ * line before notifying us, and this post-processing re-enables it once
+ * processing is complete. As such, it is essential to call this before going
+ * into another blocking read/select/poll.
+ */
+void qman_thread_irq(void);
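+/* Illustrative per-thread flow (a hypothetical sketch; wait_on_portal_fd()
+ * is a placeholder for a blocking read/select/poll on the portal fd):
+ *   qman_thread_init();
+ *   for (;;) {
+ *	wait_on_portal_fd();
+ *	qman_thread_irq();	// process, then re-enable the IRQ line
+ *   }
+ *   qman_thread_finish();
+ */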
+
+/* Global setup */
+int qman_global_init(void);
 #ifdef __cplusplus
 }
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 11/41] bus/dpaa: add QMan driver core routines
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (9 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 10/41] bus/dpaa: add QMAN interface driver Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-18 14:53         ` Ferruh Yigit
  2017-09-09 11:21       ` [PATCH v4 12/41] bus/dpaa: add BMAN driver core Shreyansh Jain
                         ` (32 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    2 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |   88 ++
 drivers/bus/dpaa/base/qbman/qman.c        | 2402 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h        |  888 +++++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c |   12 +
 drivers/bus/dpaa/include/fsl_qman.h       |  755 +++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |    1 +
 7 files changed, 4148 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 29f01df..ba87386 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -70,7 +70,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
+	base/qbman/dpaa_alloc.c \
 	base/qbman/dpaa_sys.c
 
 # Link Pthread
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
new file mode 100644
index 0000000..690576a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2009-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "dpaa_sys.h"
+#include <process.h>
+#include <fsl_qman.h>
+
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_fqid, result, count, align, partial);
+}
+
+void qman_release_fqid_range(u32 fqid, u32 count)
+{
+	process_release(dpaa_id_fqid, fqid, count);
+}
+
+int qman_reserve_fqid_range(u32 fqid, unsigned int count)
+{
+	return process_reserve(dpaa_id_fqid, fqid, count);
+}
+
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_qpool, result, count, align, partial);
+}
+
+void qman_release_pool_range(u32 pool, u32 count)
+{
+	process_release(dpaa_id_qpool, pool, count);
+}
+
+int qman_reserve_pool_range(u32 pool, u32 count)
+{
+	return process_reserve(dpaa_id_qpool, pool, count);
+}
+
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_cgrid, result, count, align, partial);
+}
+
+void qman_release_cgrid_range(u32 cgrid, u32 count)
+{
+	process_release(dpaa_id_cgrid, cgrid, count);
+}
+
+int qman_reserve_cgrid_range(u32 cgrid, u32 count)
+{
+	return process_reserve(dpaa_id_cgrid, cgrid, count);
+}
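+
+/* Illustrative usage (a hypothetical sketch; error handling elided):
+ *   u32 fqid;
+ *   if (qman_alloc_fqid_range(&fqid, 1, 0, 0) < 0)
+ *	return -1;
+ *   ...use 'fqid'...
+ *   qman_release_fqid_range(fqid, 1);
+ */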
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
new file mode 100644
index 0000000..494d54c
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -0,0 +1,2402 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define DQRR_MAXFILL	15
+#define EQCR_ITHRESH	4	/* if EQCR congests, interrupt threshold */
+#define IRQNAME		"QMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "QMan portal %d" */
+/* maximum number of DQRR entries to process in qman_poll() */
+#define FSL_QMAN_POLL_LIMIT 8
+
+/* Lock/unlock frame queues, subject to the "LOCKED" flag. This is about
+ * inter-processor locking only. Note, FQLOCK() is always called either under a
+ * local_irq_save() or from interrupt context - hence there's no need for irq
+ * protection (and indeed, attempting to nest irq-protection doesn't work, as
+ * the "irq en/disable" machinery isn't recursive...).
+ */
+#define FQLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_lock(&__fq478->fqlock); \
+	} while (0)
+#define FQUNLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_unlock(&__fq478->fqlock); \
+	} while (0)
+
+static inline void fq_set(struct qman_fq *fq, u32 mask)
+{
+	dpaa_set_bits(mask, &fq->flags);
+}
+
+static inline void fq_clear(struct qman_fq *fq, u32 mask)
+{
+	dpaa_clear_bits(mask, &fq->flags);
+}
+
+static inline int fq_isset(struct qman_fq *fq, u32 mask)
+{
+	return fq->flags & mask;
+}
+
+static inline int fq_isclear(struct qman_fq *fq, u32 mask)
+{
+	return !(fq->flags & mask);
+}
+
+struct qman_portal {
+	struct qm_portal p;
+	/* PORTAL_BITS_*** - dynamic, strictly internal */
+	unsigned long bits;
+	/* interrupt sources processed by portal_isr(), configurable */
+	unsigned long irq_sources;
+	u32 use_eqcr_ci_stashing;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* only 1 volatile dequeue at a time */
+	struct qman_fq *vdqcr_owned;
+	u32 sdqcr;
+	int dqrr_disable_ref;
+	/* A portal-specific handler for DCP ERNs. If this is NULL, the global
+	 * handler is called instead.
+	 */
+	qman_cb_dc_ern cb_dc_ern;
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct qm_portal_config *config;
+	struct dpa_rbtree retire_table;
+	char irqname[MAX_IRQNAME];
+	/* 2-element array. cgrs[0] is mask, cgrs[1] is snapshot. */
+	struct qman_cgrs *cgrs;
+	/* linked-list of CSCN handlers. */
+	struct list_head cgr_cbs;
+	/* list lock */
+	spinlock_t cgr_lock;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	/* Keep a shadow copy of the DQRR on LE systems as the SW needs to
+	 * do byte swaps of DQRR read-only memory. The first entry must be
+	 * aligned to 2 ** 10 so that DQRR index calculations based on the
+	 * shadow copy address work (6 bits for address shift + 4 bits for
+	 * the DQRR size).
+	 */
+	struct qm_dqrr_entry shadow_dqrr[QM_DQRR_SIZE]
+		    __attribute__((aligned(1024)));
+#endif
+};
+
+/* Global handler for DCP ERNs. Used when the portal receiving the message does
+ * not have a portal-specific handler.
+ */
+static qman_cb_dc_ern cb_dc_ern;
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static u16 affine_channels[NR_CPUS];
+static RTE_DEFINE_PER_LCORE(struct qman_portal, qman_affine_portal);
+
+static inline struct qman_portal *get_affine_portal(void)
+{
+	return &RTE_PER_LCORE(qman_affine_portal);
+}
+
+/* This gives a FQID->FQ lookup to cover the fact that we can't directly demux
+ * retirement notifications (the fact they are sometimes h/w-consumed means that
+ * contextB isn't always a s/w demux - and as we can't know which case it is
+ * when looking at the notification, we have to use the slow lookup for all of
+ * them). NB, it's possible to have multiple FQ objects refer to the same FQID
+ * (though at most one of them should be the consumer), so this table isn't for
+ * all FQs - FQs are added when retirement commands are issued, and removed when
+ * they complete, which also massively reduces the size of this table.
+ */
+IMPLEMENT_DPAA_RBTREE(fqtree, struct qman_fq, node, fqid);
+/*
+ * This is what everything can wait on, even if it migrates to a different cpu
+ * to the one whose affine portal it is waiting on.
+ */
+static DECLARE_WAIT_QUEUE_HEAD(affine_queue);
+
+static inline int table_push_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	int ret = fqtree_push(&p->retire_table, fq);
+
+	if (ret)
+		pr_err("ERROR: double FQ-retirement %d\n", fq->fqid);
+	return ret;
+}
+
+static inline void table_del_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	fqtree_del(&p->retire_table, fq);
+}
+
+static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
+{
+	return fqtree_find(&p->retire_table, fqid);
+}
+
+static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to HW format */
+	fqd->fq_ctrl = cpu_to_be16(fqd->fq_ctrl);
+	fqd->dest_wq = cpu_to_be16(fqd->dest_wq);
+	fqd->ics_cred = cpu_to_be16(fqd->ics_cred);
+	fqd->context_b = cpu_to_be32(fqd->context_b);
+	fqd->context_a.opaque = cpu_to_be64(fqd->context_a.opaque);
+	fqd->opaque_td = cpu_to_be16(fqd->opaque_td);
+}
+
+static inline void hw_fqd_to_cpu(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to CPU format */
+	fqd->fq_ctrl = be16_to_cpu(fqd->fq_ctrl);
+	fqd->dest_wq = be16_to_cpu(fqd->dest_wq);
+	fqd->ics_cred = be16_to_cpu(fqd->ics_cred);
+	fqd->context_b = be32_to_cpu(fqd->context_b);
+	fqd->context_a.opaque = be64_to_cpu(fqd->context_a.opaque);
+}
+
+static inline void cpu_to_hw_fd(struct qm_fd *fd)
+{
+	fd->addr = cpu_to_be40(fd->addr);
+	fd->status = cpu_to_be32(fd->status);
+	fd->opaque = cpu_to_be32(fd->opaque);
+}
+
+static inline void hw_fd_to_cpu(struct qm_fd *fd)
+{
+	fd->addr = be40_to_cpu(fd->addr);
+	fd->status = be32_to_cpu(fd->status);
+	fd->opaque = be32_to_cpu(fd->opaque);
+}
+
+/* In the case that slow- and fast-path handling are both done by qman_poll()
+ * (ie. because there is no interrupt handling), we ought to balance how often
+ * we do the fast-path poll versus the slow-path poll. We'll use two decrementer
+ * sources, so we call the fast poll 'n' times before calling the slow poll
+ * once. The idle decrementer constant is used when the last slow-poll detected
+ * no work to do, and the busy decrementer constant when the last slow-poll had
+ * work to do.
+ */
+#define SLOW_POLL_IDLE   1000
+#define SLOW_POLL_BUSY   10
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is);
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit);
+
+/* Portal interrupt handler */
+static irqreturn_t portal_isr(__always_unused int irq, void *ptr)
+{
+	struct qman_portal *p = ptr;
+	/*
+	 * The CSCI/CCSCI source is cleared inside __poll_portal_slow(), because
+	 * it could race against a Query Congestion State command also given
+	 * as part of the handling of this interrupt source. We mustn't
+	 * clear it a second time in this top-level function.
+	 */
+	u32 clear = QM_DQAVAIL_MASK | (p->irq_sources &
+		~(QM_PIRQ_CSCI | QM_PIRQ_CCSCI));
+	u32 is = qm_isr_status_read(&p->p) & p->irq_sources;
+	/* DQRR-handling if it's interrupt-driven */
+	if (is & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+	/* Handling of anything else that's interrupt-driven */
+	clear |= __poll_portal_slow(p, is);
+	qm_isr_status_clear(&p->p, clear);
+	return IRQ_HANDLED;
+}
+
+/* This inner version is used privately by qman_create_affine_portal(), as well
+ * as by the exported qman_stop_dequeues().
+ */
+static inline void qman_stop_dequeues_ex(struct qman_portal *p)
+{
+	if (!(p->dqrr_disable_ref++))
+		qm_dqrr_set_maxfill(&p->p, 0);
+}
+
+static int drain_mr_fqrni(struct qm_portal *p)
+{
+	const struct qm_mr_entry *msg;
+loop:
+	msg = qm_mr_current(p);
+	if (!msg) {
+		/*
+		 * if MR was full and h/w had other FQRNI entries to produce, we
+		 * need to allow it time to produce those entries once the
+		 * existing entries are consumed. A worst-case situation
+		 * (fully-loaded system) means h/w sequencers may have to do 3-4
+		 * other things before servicing the portal's MR pump, each of
+		 * which (if slow) may take ~50 qman cycles (which is ~200
+		 * processor cycles). So rounding up and then multiplying this
+		 * worst-case estimate by a factor of 10, just to be
+		 * ultra-paranoid, goes as high as 10,000 cycles. NB, we consume
+		 * one entry at a time, so h/w has an opportunity to produce new
+		 * entries well before the ring has been fully consumed, so
+		 * we're being *really* paranoid here.
+		 */
+		u64 now, then = mfatb();
+
+		do {
+			now = mfatb();
+		} while ((then + 10000) > now);
+		msg = qm_mr_current(p);
+		if (!msg)
+			return 0;
+	}
+	if ((msg->verb & QM_MR_VERB_TYPE_MASK) != QM_MR_VERB_FQRNI) {
+		/* We aren't draining anything but FQRNIs */
+		pr_err("Found verb 0x%x in MR\n", msg->verb);
+		return -1;
+	}
+	qm_mr_next(p);
+	qm_mr_cci_consume(p, 1);
+	goto loop;
+}
+
+static inline int qm_eqcr_init(struct qm_portal *portal,
+			       enum qm_eqcr_pmode pmode,
+			       unsigned int eq_stash_thresh,
+			       int eq_stash_prio)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u32 cfg;
+	u8 pi;
+
+	eqcr->ring = portal->addr.ce + QM_CL_EQCR;
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	eqcr->cursor = eqcr->ring + pi;
+	eqcr->vbit = (qm_in(EQCR_PI_CINH) & QM_EQCR_SIZE) ?
+			QM_EQCR_VERB_VBIT : 0;
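+	/*
+	 * One ring slot is deliberately left unused (hence the "- 1") so a
+	 * full EQCR can be told apart from an empty one; qm_cyc_diff() gives
+	 * the modulo-ring distance from CI to PI, i.e. in-flight entries.
+	 */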
+	eqcr->available = QM_EQCR_SIZE - 1 -
+			qm_cyc_diff(QM_EQCR_SIZE, eqcr->ci, pi);
+	eqcr->ithresh = qm_in(EQCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+	eqcr->pmode = pmode;
+#endif
+	cfg = (qm_in(CFG) & 0x00ffffff) |
+		(eq_stash_thresh << 28) | /* QCSP_CFG: EST */
+		(eq_stash_prio << 26)	| /* QCSP_CFG: EP */
+		((pmode & 0x3) << 24);	/* QCSP_CFG::EPM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_eqcr_finish(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 pi, ci;
+	u32 cfg;
+
+	/*
+	 * Disable EQCI stashing because the QMan only
+	 * presents the value it previously stashed to
+	 * maintain coherency.  Setting the stash threshold
+	 * to 1 then 0 ensures that QMan has resynchronized
+	 * its internal copy so that the portal is clean
+	 * when it is reinitialized in the future
+	 */
+	cfg = (qm_in(CFG) & 0x0fffffff) |
+		(1 << 28); /* QCSP_CFG: EST */
+	qm_out(CFG, cfg);
+	cfg &= 0x0fffffff; /* stash threshold = 0 */
+	qm_out(CFG, cfg);
+
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+
+	/* Refresh EQCR CI cache value */
+	qm_cl_invalidate(EQCR_CI);
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (pi != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("losing uncommitted EQCR entries\n");
+	if (ci != eqcr->ci)
+		pr_crit("missing existing EQCR completions\n");
+	if (eqcr->ci != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("EQCR destroyed unquiesced\n");
+}
+
+static inline int qm_dqrr_init(struct qm_portal *portal,
+			__maybe_unused const struct qm_portal_config *config,
+			enum qm_dqrr_dmode dmode,
+			__maybe_unused enum qm_dqrr_pmode pmode,
+			enum qm_dqrr_cmode cmode, u8 max_fill)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u32 cfg;
+
+	/* Make sure the DQRR will be idle when we enable */
+	qm_out(DQRR_SDQCR, 0);
+	qm_out(DQRR_VDQCR, 0);
+	qm_out(DQRR_PDQCR, 0);
+	dqrr->ring = portal->addr.ce + QM_CL_DQRR;
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->cursor = dqrr->ring + dqrr->ci;
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+	dqrr->vbit = (qm_in(DQRR_PI_CINH) & QM_DQRR_SIZE) ?
+			QM_DQRR_VERB_VBIT : 0;
+	dqrr->ithresh = qm_in(DQRR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	dqrr->dmode = dmode;
+	dqrr->pmode = pmode;
+	dqrr->cmode = cmode;
+#endif
+	/* Invalidate every ring entry before beginning */
+	for (cfg = 0; cfg < QM_DQRR_SIZE; cfg++)
+		dccivac(qm_cl(dqrr->ring, cfg));
+	cfg = (qm_in(CFG) & 0xff000f00) |
+		((max_fill & (QM_DQRR_SIZE - 1)) << 20) | /* DQRR_MF */
+		((dmode & 1) << 18) |			/* DP */
+		((cmode & 3) << 16) |			/* DCM */
+		0xa0 |					/* RE+SE */
+		(0 ? 0x40 : 0) |			/* Ignore RP */
+		(0 ? 0x10 : 0);				/* Ignore SP */
+	qm_out(CFG, cfg);
+	qm_dqrr_set_maxfill(portal, max_fill);
+	return 0;
+}
+
+static inline void qm_dqrr_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if ((dqrr->cmode != qm_dqrr_cdc) &&
+	    (dqrr->ci != DQRR_PTR2IDX(dqrr->cursor)))
+		pr_crit("Ignoring completed DQRR entries\n");
+#endif
+}
+
+static inline int qm_mr_init(struct qm_portal *portal,
+			     __maybe_unused enum qm_mr_pmode pmode,
+			     enum qm_mr_cmode cmode)
+{
+	register struct qm_mr *mr = &portal->mr;
+	u32 cfg;
+
+	mr->ring = portal->addr.ce + QM_CL_MR;
+	mr->pi = qm_in(MR_PI_CINH) & (QM_MR_SIZE - 1);
+	mr->ci = qm_in(MR_CI_CINH) & (QM_MR_SIZE - 1);
+	mr->cursor = mr->ring + mr->ci;
+	mr->fill = qm_cyc_diff(QM_MR_SIZE, mr->ci, mr->pi);
+	mr->vbit = (qm_in(MR_PI_CINH) & QM_MR_SIZE) ? QM_MR_VERB_VBIT : 0;
+	mr->ithresh = qm_in(MR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mr->pmode = pmode;
+	mr->cmode = cmode;
+#endif
+	cfg = (qm_in(CFG) & 0xfffff0ff) |
+		((cmode & 1) << 8);		/* QCSP_CFG:MM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+	const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+	DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+		mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+		if (!mr->pi)
+			mr->vbit ^= QM_MR_VERB_VBIT;
+		mr->fill++;
+		res = MR_INC(res);
+	}
+	dcbit_ro(res);
+}
+
+static inline
+struct qman_portal *qman_create_portal(
+			struct qman_portal *portal,
+			      const struct qm_portal_config *c,
+			      const struct qman_cgrs *cgrs)
+{
+	struct qm_portal *p;
+	char buf[16];
+	int ret;
+	u32 isdr;
+
+	p = &portal->p;
+
+	portal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+	/*
+	 * prep the low-level portal struct with the mapped addresses from the
+	 * config, everything that follows depends on it and "config" is more
+	 * for (de)reference
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	/*
+	 * If CI-stashing is used, the current defaults use a threshold of 3,
+	 * and stash with higher-than-DQRR priority.
+	 */
+	if (qm_eqcr_init(p, qm_eqcr_pvb,
+			 portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {
+		pr_err("Qman EQCR initialisation failed\n");
+		goto fail_eqcr;
+	}
+	if (qm_dqrr_init(p, c, qm_dqrr_dpush, qm_dqrr_pvb,
+			 qm_dqrr_cdc, DQRR_MAXFILL)) {
+		pr_err("Qman DQRR initialisation failed\n");
+		goto fail_dqrr;
+	}
+	if (qm_mr_init(p, qm_mr_pvb, qm_mr_cci)) {
+		pr_err("Qman MR initialisation failed\n");
+		goto fail_mr;
+	}
+	if (qm_mc_init(p)) {
+		pr_err("Qman MC initialisation failed\n");
+		goto fail_mc;
+	}
+
+	/* static interrupt-gating controls */
+	qm_dqrr_set_ithresh(p, 0);
+	qm_mr_set_ithresh(p, 0);
+	qm_isr_set_iperiod(p, 0);
+	portal->cgrs = kmalloc(2 * sizeof(*cgrs), GFP_KERNEL);
+	if (!portal->cgrs)
+		goto fail_cgrs;
+	/* initial snapshot is no-depletion */
+	qman_cgrs_init(&portal->cgrs[1]);
+	if (cgrs)
+		portal->cgrs[0] = *cgrs;
+	else
+		/* if the given mask is NULL, assume all CGRs can be seen */
+		qman_cgrs_fill(&portal->cgrs[0]);
+	INIT_LIST_HEAD(&portal->cgr_cbs);
+	spin_lock_init(&portal->cgr_lock);
+	portal->bits = 0;
+	portal->slowpoll = 0;
+	portal->sdqcr = QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |
+			QM_SDQCR_DEDICATED_PRECEDENCE | QM_SDQCR_TYPE_PRIO_QOS |
+			QM_SDQCR_TOKEN_SET(0xab) | QM_SDQCR_CHANNELS_DEDICATED;
+	portal->dqrr_disable_ref = 0;
+	portal->cb_dc_ern = NULL;
+	sprintf(buf, "qportal-%d", c->channel);
+	dpa_rbtree_init(&portal->retire_table);
+	isdr = 0xffffffff;
+	qm_isr_disable_write(p, isdr);
+	portal->irq_sources = 0;
+	qm_isr_enable_write(p, portal->irq_sources);
+	qm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, portal_isr, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need EQCR to be empty before continuing */
+	isdr &= ~QM_PIRQ_EQCI;
+	qm_isr_disable_write(p, isdr);
+	ret = qm_eqcr_get_fill(p);
+	if (ret) {
+		pr_err("Qman EQCR unclean\n");
+		goto fail_eqcr_empty;
+	}
+	isdr &= ~(QM_PIRQ_DQRI | QM_PIRQ_MRI);
+	qm_isr_disable_write(p, isdr);
+	if (qm_dqrr_current(p)) {
+		pr_err("Qman DQRR unclean\n");
+		qm_dqrr_cdc_consume_n(p, 0xffff);
+	}
+	if (qm_mr_current(p)) {
+		/* special handling, drain just in case it's a few FQRNIs */
+		if (drain_mr_fqrni(p))
+			goto fail_dqrr_mr_empty;
+	}
+	/* Success */
+	portal->config = c;
+	qm_isr_disable_write(p, 0);
+	qm_isr_uninhibit(p);
+	/* Write a sane SDQCR */
+	qm_dqrr_sdqcr_set(p, portal->sdqcr);
+	return portal;
+fail_dqrr_mr_empty:
+fail_eqcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->cgrs);
+	spin_lock_destroy(&portal->cgr_lock);
+fail_cgrs:
+	qm_mc_finish(p);
+fail_mc:
+	qm_mr_finish(p);
+fail_mr:
+	qm_dqrr_finish(p);
+fail_dqrr:
+	qm_eqcr_finish(p);
+fail_eqcr:
+	return NULL;
+}
+
+struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
+					      const struct qman_cgrs *cgrs)
+{
+	struct qman_portal *res;
+	struct qman_portal *portal = get_affine_portal();
+	/* A criterion for calling this function (from qman_driver.c) is that
+	 * we're already affine to the cpu and won't schedule onto another cpu.
+	 */
+
+	res = qman_create_portal(portal, c, cgrs);
+	if (res) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		affine_channels[c->cpu] =
+			c->channel;
+		spin_unlock(&affine_mask_lock);
+	}
+	return res;
+}
+
+static inline
+void qman_destroy_portal(struct qman_portal *qm)
+{
+	const struct qm_portal_config *pcfg;
+
+	/* Stop dequeues on the portal */
+	qm_dqrr_sdqcr_set(&qm->p, 0);
+
+	/*
+	 * NB we do this to "quiesce" EQCR. If we add enqueue-completions or
+	 * something related to QM_PIRQ_EQCI, this may need fixing.
+	 * Also, due to the prefetching model used for CI updates in the enqueue
+	 * path, this update will only invalidate the CI cacheline *after*
+	 * working on it, so we need to call this twice to ensure a full update
+	 * irrespective of where the enqueue processing was at when the teardown
+	 * began.
+	 */
+	qm_eqcr_cce_update(&qm->p);
+	qm_eqcr_cce_update(&qm->p);
+	pcfg = qm->config;
+
+	free_irq(pcfg->irq, qm);
+
+	kfree(qm->cgrs);
+	qm_mc_finish(&qm->p);
+	qm_mr_finish(&qm->p);
+	qm_dqrr_finish(&qm->p);
+	qm_eqcr_finish(&qm->p);
+
+	qm->config = NULL;
+
+	spin_lock_destroy(&qm->cgr_lock);
+}
+
+const struct qm_portal_config *qman_destroy_affine_portal(void)
+{
+	/* We don't want to redirect if we're a slave, use "raw" */
+	struct qman_portal *qm = get_affine_portal();
+	const struct qm_portal_config *pcfg;
+	int cpu;
+
+	pcfg = qm->config;
+	cpu = pcfg->cpu;
+
+	qman_destroy_portal(qm);
+
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int qman_get_portal_index(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+/* Inline helper to reduce nesting in __poll_portal_slow() */
+static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
+				   const struct qm_mr_entry *msg, u8 verb)
+{
+	FQLOCK(fq);
+	switch (verb) {
+	case QM_MR_VERB_FQRL:
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_ORL));
+		fq_clear(fq, QMAN_FQ_STATE_ORL);
+		table_del_fq(p, fq);
+		break;
+	case QM_MR_VERB_FQRN:
+		DPAA_ASSERT((fq->state == qman_fq_state_parked) ||
+			    (fq->state == qman_fq_state_sched));
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_CHANGING));
+		fq_clear(fq, QMAN_FQ_STATE_CHANGING);
+		if (msg->fq.fqs & QM_MR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (msg->fq.fqs & QM_MR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		fq->state = qman_fq_state_retired;
+		break;
+	case QM_MR_VERB_FQPN:
+		DPAA_ASSERT(fq->state == qman_fq_state_sched);
+		DPAA_ASSERT(fq_isclear(fq, QMAN_FQ_STATE_CHANGING));
+		fq->state = qman_fq_state_parked;
+	}
+	FQUNLOCK(fq);
+}
+
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
+{
+	const struct qm_mr_entry *msg;
+	struct qm_mr_entry swapped_msg;
+
+	if (is & QM_PIRQ_CSCI) {
+		struct qman_cgrs rr, c;
+		struct qm_mc_result *mcr;
+		struct qman_cgr *cgr;
+
+		spin_lock(&p->cgr_lock);
+		/*
+		 * The CSCI bit must be cleared _before_ issuing the
+		 * Query Congestion State command, to ensure that a long
+		 * CGR State Change callback cannot miss an intervening
+		 * state change.
+		 */
+		qm_isr_status_clear(&p->p, QM_PIRQ_CSCI);
+		qm_mc_start(&p->p);
+		qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+		while (!(mcr = qm_mc_result(&p->p)))
+			cpu_relax();
+		/* mask out the ones I'm not interested in */
+		qman_cgrs_and(&rr, (const struct qman_cgrs *)
+			&mcr->querycongestion.state, &p->cgrs[0]);
+		/* check previous snapshot for delta, enter/exit congestion */
+		qman_cgrs_xor(&c, &rr, &p->cgrs[1]);
+		/* update snapshot */
+		qman_cgrs_cp(&p->cgrs[1], &rr);
+		/* Invoke callback */
+		list_for_each_entry(cgr, &p->cgr_cbs, node)
+			if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid))
+				cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));
+		spin_unlock(&p->cgr_lock);
+	}
+
+	if (is & QM_PIRQ_EQRI) {
+		qm_eqcr_cce_update(&p->p);
+		qm_eqcr_set_ithresh(&p->p, 0);
+		wake_up(&affine_queue);
+	}
+
+	if (is & QM_PIRQ_MRI) {
+		struct qman_fq *fq;
+		u8 verb, num = 0;
+mr_loop:
+		qm_mr_pvb_update(&p->p);
+		msg = qm_mr_current(&p->p);
+		if (!msg)
+			goto mr_done;
+		swapped_msg = *msg;
+		hw_fd_to_cpu(&swapped_msg.ern.fd);
+		verb = msg->verb & QM_MR_VERB_TYPE_MASK;
+		/* The message is a software ERN iff the 0x20 bit is clear */
+		if (verb & 0x20) {
+			switch (verb) {
+			case QM_MR_VERB_FQRNI:
+				/* nada, we drop FQRNIs on the floor */
+				break;
+			case QM_MR_VERB_FQRN:
+			case QM_MR_VERB_FQRL:
+				/* Lookup in the retirement table */
+				fq = table_find_fq(p,
+						   be32_to_cpu(msg->fq.fqid));
+				DPAA_BUG_ON(!fq);
+				fq_state_change(p, fq, &swapped_msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_FQPN:
+				/* Parked */
+				fq = (void *)(uintptr_t)
+					be32_to_cpu(msg->fq.contextB);
+				fq_state_change(p, fq, msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_DC_ERN:
+				/* DCP ERN */
+				if (p->cb_dc_ern)
+					p->cb_dc_ern(p, msg);
+				else if (cb_dc_ern)
+					cb_dc_ern(p, msg);
+				else {
+					static int warn_once;
+
+					if (!warn_once) {
+						pr_crit("Leaking DCP ERNs!\n");
+						warn_once = 1;
+					}
+				}
+				break;
+			default:
+				pr_crit("Invalid MR verb 0x%02x\n", verb);
+			}
+		} else {
+			/* It's a software ERN */
+			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+			fq->cb.ern(p, fq, &swapped_msg);
+		}
+		num++;
+		qm_mr_next(&p->p);
+		goto mr_loop;
+mr_done:
+		qm_mr_cci_consume(&p->p, num);
+	}
+	/*
+	 * QM_PIRQ_CSCI/CCSCI has already been cleared, as part of its specific
+	 * processing. If that interrupt source has meanwhile been re-asserted,
+	 * we mustn't clear it here (or in the top-level interrupt handler).
+	 */
+	return is & (QM_PIRQ_EQCI | QM_PIRQ_EQRI | QM_PIRQ_MRI);
+}
+
+/*
+ * remove some slowish-path stuff from the "fast path" and make sure it isn't
+ * inlined.
+ */
+static noinline void clear_vdqcr(struct qman_portal *p, struct qman_fq *fq)
+{
+	p->vdqcr_owned = NULL;
+	FQLOCK(fq);
+	fq_clear(fq, QMAN_FQ_STATE_VDQCR);
+	FQUNLOCK(fq);
+	wake_up(&affine_queue);
+}
+
+/*
+ * The only states that would conflict with other things if they ran at the
+ * same time on the same cpu are:
+ *
+ *   (i) setting/clearing vdqcr_owned, and
+ *  (ii) clearing the NE (Not Empty) flag.
+ *
+ * Both are safe, because:
+ *
+ *   (i) this clearing can only occur after qman_set_vdq() has set the
+ *	 vdqcr_owned field (which it does before setting VDQCR), and
+ *	 qman_volatile_dequeue() blocks interrupts and preemption while this is
+ *	 done so that we can't interfere.
+ *  (ii) the NE flag is only cleared after qman_retire_fq() has set it, and as
+ *	 with (i) that API prevents us from interfering until it's safe.
+ *
+ * The good thing is that qman_set_vdq() and qman_retire_fq() run far
+ * less frequently (ie. per-FQ) than __poll_portal_fast() does, so the net
+ * advantage comes from this function not having to "lock" anything at all.
+ *
+ * Note also that the callbacks are invoked at points which are safe against the
+ * above potential conflicts, but that this function itself is not re-entrant
+ * (this is because the function tracks one end of each FIFO in the portal and
+ * we do *not* want to lock that). So the consequence is that it is safe for
+ * user callbacks to call into any QMan API.
+ */
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit)
+{
+	const struct qm_dqrr_entry *dq;
+	struct qman_fq *fq;
+	enum qman_cb_dqrr_result res;
+	unsigned int limit = 0;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+	do {
+		qm_dqrr_pvb_update(&p->p);
+		dq = qm_dqrr_current(&p->p);
+		if (!dq)
+			break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+		/* If running on a little-endian system, the fields of the
+		 * dequeue entry must be byteswapped. Because the QMan HW
+		 * will ignore writes, the DQRR entry is copied and the
+		 * index stored within the copy.
+		 */
+		shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+		*shadow = *dq;
+		dq = shadow;
+		shadow->fqid = be32_to_cpu(shadow->fqid);
+		shadow->contextB = be32_to_cpu(shadow->contextB);
+		shadow->seqnum = be16_to_cpu(shadow->seqnum);
+		hw_fd_to_cpu(&shadow->fd);
+#endif
+
+		if (dq->stat & QM_DQRR_STAT_UNSCHEDULED) {
+			/*
+			 * VDQCR: don't trust context_b as the FQ may have
+			 * been configured for h/w consumption and we're
+			 * draining it post-retirement.
+			 */
+			fq = p->vdqcr_owned;
+			/*
+			 * We only set QMAN_FQ_STATE_NE when retiring, so we
+			 * only need to check for clearing it when doing
+			 * volatile dequeues.  It's one less thing to check
+			 * in the critical path (SDQCR).
+			 */
+			if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+				fq_clear(fq, QMAN_FQ_STATE_NE);
+			/*
+			 * This is duplicated from the SDQCR code, but we
+			 * have stuff to do before *and* after this callback,
+			 * and we don't want multiple if()s in the critical
+			 * path (SDQCR).
+			 */
+			res = fq->cb.dqrr(p, fq, dq);
+			if (res == qman_cb_dqrr_stop)
+				break;
+			/* Check for VDQCR completion */
+			if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+				clear_vdqcr(p, fq);
+		} else {
+			/* SDQCR: context_b points to the FQ */
+			fq = (void *)(uintptr_t)dq->contextB;
+			/* Now let the callback do its stuff */
+			res = fq->cb.dqrr(p, fq, dq);
+			/*
+			 * The callback can request that we exit without
+			 * consuming this entry or advancing.
+			 */
+			if (res == qman_cb_dqrr_stop)
+				break;
+		}
+		/* Interpret 'dq' from a driver perspective. */
+		/*
+		 * Parking isn't possible unless HELDACTIVE was set. NB,
+		 * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+		 * check for HELDACTIVE to cover both.
+		 */
+		DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+			    (res != qman_cb_dqrr_park));
+		/* just means "skip it, I'll consume it myself later on" */
+		if (res != qman_cb_dqrr_defer)
+			qm_dqrr_cdc_consume_1ptr(&p->p, dq,
+						 res == qman_cb_dqrr_park);
+		/* Move forward */
+		qm_dqrr_next(&p->p);
+		/*
+		 * Entry processed and consumed, increment our counter.  The
+		 * callback can request that we exit after consuming the
+		 * entry, and we also exit if we reach our processing limit,
+		 * so loop back only if neither of these conditions is met.
+		 */
+	} while (++limit < poll_limit && res != qman_cb_dqrr_consume_stop);
+
+	return limit;
+}
+
+u16 qman_affine_channel(int cpu)
+{
+	if (cpu < 0) {
+		struct qman_portal *portal = get_affine_portal();
+
+		cpu = portal->config->cpu;
+	}
+	DPAA_BUG_ON(!CPU_ISSET(cpu, &affine_mask));
+	return affine_channels[cpu];
+}
+
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
+{
+	struct qman_portal *p = get_affine_portal();
+	const struct qm_dqrr_entry *dq;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+
+	qm_dqrr_pvb_update(&p->p);
+	dq = qm_dqrr_current(&p->p);
+	if (!dq)
+		return NULL;
+
+	if (!(dq->stat & QM_DQRR_STAT_FD_VALID)) {
+		/* Invalid DQRR entry - consume it and return NULL to the
+		 * caller, as no valid packet is seen.
+		 */
+		qman_dqrr_consume(fq, (struct qm_dqrr_entry *)dq);
+		return NULL;
+	}
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+	*shadow = *dq;
+	dq = shadow;
+	shadow->fqid = be32_to_cpu(shadow->fqid);
+	shadow->contextB = be32_to_cpu(shadow->contextB);
+	shadow->seqnum = be16_to_cpu(shadow->seqnum);
+	hw_fd_to_cpu(&shadow->fd);
+#endif
+
+	if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+		fq_clear(fq, QMAN_FQ_STATE_NE);
+
+	return (struct qm_dqrr_entry *)dq;
+}
+
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+		clear_vdqcr(p, fq);
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, 0);
+	qm_dqrr_next(&p->p);
+}
+
+int qman_poll_dqrr(unsigned int limit)
+{
+	struct qman_portal *p = get_affine_portal();
+	int ret;
+
+	ret = __poll_portal_fast(p, limit);
+	return ret;
+}
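+
+/*
+ * Illustrative usage (an assumption, not part of this patch): a
+ * run-to-completion worker on an affine core might drive Rx with
+ *
+ *	while (keep_running)
+ *		qman_poll_dqrr(16);
+ *
+ * with frames delivered through each FQ's cb.dqrr callback.
+ */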
+
+void qman_poll(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if ((~p->irq_sources) & QM_PIRQ_SLOW) {
+		if (!(p->slowpoll--)) {
+			u32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;
+			u32 active = __poll_portal_slow(p, is);
+
+			if (active) {
+				qm_isr_status_clear(&p->p, active);
+				p->slowpoll = SLOW_POLL_BUSY;
+			} else
+				p->slowpoll = SLOW_POLL_IDLE;
+		}
+	}
+	if ((~p->irq_sources) & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+}
+
+void qman_stop_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qman_stop_dequeues_ex(p);
+}
+
+void qman_start_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	DPAA_ASSERT(p->dqrr_disable_ref > 0);
+	if (!(--p->dqrr_disable_ref))
+		qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
+}
+
+void qman_static_dequeue_add(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr |= pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+void qman_static_dequeue_del(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr &= ~pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+u32 qman_static_dequeue_get(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->sdqcr;
+}
+
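+/*
+ * Discrete consumption acknowledgment: pairs with the qman_cb_dqrr_defer
+ * result above - a callback that deferred a DQRR entry ("I'll consume it
+ * myself later on") consumes it later via this call.
+ */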
+void qman_dca(struct qm_dqrr_entry *dq, int park_request)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);
+}
+
+/* Frame queue API */
+static const char *mcr_result_str(u8 result)
+{
+	switch (result) {
+	case QM_MCR_RESULT_NULL:
+		return "QM_MCR_RESULT_NULL";
+	case QM_MCR_RESULT_OK:
+		return "QM_MCR_RESULT_OK";
+	case QM_MCR_RESULT_ERR_FQID:
+		return "QM_MCR_RESULT_ERR_FQID";
+	case QM_MCR_RESULT_ERR_FQSTATE:
+		return "QM_MCR_RESULT_ERR_FQSTATE";
+	case QM_MCR_RESULT_ERR_NOTEMPTY:
+		return "QM_MCR_RESULT_ERR_NOTEMPTY";
+	case QM_MCR_RESULT_PENDING:
+		return "QM_MCR_RESULT_PENDING";
+	case QM_MCR_RESULT_ERR_BADCOMMAND:
+		return "QM_MCR_RESULT_ERR_BADCOMMAND";
+	}
+	return "<unknown MCR result>";
+}
+
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
+{
+	struct qm_fqd fqd;
+	struct qm_mcr_queryfq_np np;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID) {
+		int ret = qman_alloc_fqid(&fqid);
+
+		if (ret)
+			return ret;
+	}
+	spin_lock_init(&fq->fqlock);
+	fq->fqid = fqid;
+	fq->flags = flags;
+	fq->state = qman_fq_state_oos;
+	fq->cgr_groupid = 0;
+
+	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
+		return 0;
+	/* Everything else is AS_IS support */
+	p = get_affine_portal();
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(&fqd);
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ_NP);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ_NP failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	np = mcr->queryfq_np;
+	/* Phew, have queryfq and queryfq_np results, stitch together
+	 * the FQ object from those.
+	 */
+	fq->cgr_groupid = fqd.cgid;
+	switch (np.state & QM_MCR_NP_STATE_MASK) {
+	case QM_MCR_NP_STATE_OOS:
+		break;
+	case QM_MCR_NP_STATE_RETIRED:
+		fq->state = qman_fq_state_retired;
+		if (np.frm_cnt)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		break;
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+		fq->state = qman_fq_state_sched;
+		if (np.state & QM_MCR_NP_STATE_R)
+			fq_set(fq, QMAN_FQ_STATE_CHANGING);
+		break;
+	case QM_MCR_NP_STATE_PARKED:
+		fq->state = qman_fq_state_parked;
+		break;
+	default:
+		DPAA_ASSERT(NULL == "invalid FQ state");
+	}
+	if (fqd.fq_ctrl & QM_FQCTRL_CGE)
+		fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+	return 0;
+err:
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID)
+		qman_release_fqid(fqid);
+	return -EIO;
+}
+
+void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
+{
+	/*
+	 * We don't need to lock the FQ as it is a pre-condition that the FQ be
+	 * quiesced. Instead, run some checks.
+	 */
+	switch (fq->state) {
+	case qman_fq_state_parked:
+		DPAA_ASSERT(flags & QMAN_FQ_DESTROY_PARKED);
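+		/* Fall through */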
+	case qman_fq_state_oos:
+		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
+			qman_release_fqid(fq->fqid);
+
+		return;
+	default:
+		break;
+	}
+	DPAA_ASSERT(NULL == "qman_destroy_fq() on unquiesced FQ!");
+}
+
+u32 qman_fq_fqid(struct qman_fq *fq)
+{
+	return fq->fqid;
+}
+
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags)
+{
+	if (state)
+		*state = fq->state;
+	if (flags)
+		*flags = fq->flags;
+}
+
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	u8 res, myverb = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		QM_MCC_VERB_INITFQ_SCHED : QM_MCC_VERB_INITFQ_PARKED;
+
+	if ((fq->state != qman_fq_state_oos) &&
+	    (fq->state != qman_fq_state_parked))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	if (opts && (opts->we_mask & QM_INITFQ_WE_OAC)) {
+		/* And can't be set at the same time as TDTHRESH */
+		if (opts->we_mask & QM_INITFQ_WE_TDTHRESH)
+			return -EINVAL;
+	}
+	/* Issue an INITFQ_[PARKED|SCHED] management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     ((fq->state != qman_fq_state_oos) &&
+				(fq->state != qman_fq_state_parked)))) {
+		FQUNLOCK(fq);
+		return -EBUSY;
+	}
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initfq = *opts;
+	mcc->initfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->initfq.count = 0;
+	/*
+	 * If the FQ does *not* have the TO_DCPORTAL flag, context_b is set as a
+	 * demux pointer. Otherwise, the caller-provided value is allowed to
+	 * stand, don't overwrite it.
+	 */
+	if (fq_isclear(fq, QMAN_FQ_FLAG_TO_DCPORTAL)) {
+		dma_addr_t phys_fq;
+
+		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+		/*
+		 *  and the physical address - NB, if the user wasn't trying to
+		 * set CONTEXTA, clear the stashing settings.
+		 */
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_CONTEXTA)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			memset(&mcc->initfq.fqd.context_a, 0,
+			       sizeof(mcc->initfq.fqd.context_a));
+		} else {
+			phys_fq = rte_mem_virt2phy(fq);
+			qm_fqd_stashing_set64(&mcc->initfq.fqd, phys_fq);
+		}
+	}
+	if (flags & QMAN_INITFQ_FLAG_LOCAL) {
+		mcc->initfq.fqd.dest.channel = p->config->channel;
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_DESTWQ)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_DESTWQ;
+			mcc->initfq.fqd.dest.wq = 4;
+		}
+	}
+	mcc->initfq.we_mask = cpu_to_be16(mcc->initfq.we_mask);
+	cpu_to_hw_fqd(&mcc->initfq.fqd);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		FQUNLOCK(fq);
+		return -EIO;
+	}
+	if (opts) {
+		if (opts->we_mask & QM_INITFQ_WE_FQCTRL) {
+			if (opts->fqd.fq_ctrl & QM_FQCTRL_CGE)
+				fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+			else
+				fq_clear(fq, QMAN_FQ_STATE_CGR_EN);
+		}
+		if (opts->we_mask & QM_INITFQ_WE_CGID)
+			fq->cgr_groupid = opts->fqd.cgid;
+	}
+	fq->state = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		qman_fq_state_sched : qman_fq_state_parked;
+	FQUNLOCK(fq);
+	return 0;
+}
+
+int qman_schedule_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_parked)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue a ALTERFQ_SCHED management command */
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state != qman_fq_state_parked))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_sched;
+out:
+	FQUNLOCK(fq);
+
+	return ret;
+}
+
+int qman_retire_fq(struct qman_fq *fq, u32 *flags)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int rval;
+	u8 res;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_sched))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_retired) ||
+				(fq->state == qman_fq_state_oos))) {
+		rval = -EBUSY;
+		goto out;
+	}
+	rval = table_push_fq(p, fq);
+	if (rval)
+		goto out;
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_RETIRE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_RETIRE);
+	res = mcr->result;
+	/*
+	 * "Elegant" would be to treat OK/PENDING the same way; set CHANGING,
+	 * and defer the flags until FQRNI or FQRN (respectively) show up. But
+	 * "Friendly" is to process OK immediately, and not set CHANGING. We do
+	 * friendly, otherwise the caller doesn't necessarily have a fully
+	 * "retired" FQ on return even if the retirement was immediate. However
+	 * this does mean some code duplication between here and
+	 * fq_state_change().
+	 */
+	if (likely(res == QM_MCR_RESULT_OK)) {
+		rval = 0;
+		/* Process 'fq' right away, we'll ignore FQRNI */
+		if (mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		if (flags)
+			*flags = fq->flags;
+		fq->state = qman_fq_state_retired;
+		if (fq->cb.fqs) {
+			/*
+			 * Another issue with supporting "immediate" retirement
+			 * is that we're forced to drop FQRNIs, because by the
+			 * time they're seen it may already be "too late" (the
+			 * fq may have been OOS'd and free()'d already). But if
+			 * the upper layer wants a callback whether it's
+			 * immediate or not, we have to fake a "MR" entry to
+			 * look like an FQRNI...
+			 */
+			struct qm_mr_entry msg;
+
+			msg.verb = QM_MR_VERB_FQRNI;
+			msg.fq.fqs = mcr->alterfq.fqs;
+			msg.fq.fqid = fq->fqid;
+			msg.fq.contextB = (u32)(uintptr_t)fq;
+			fq->cb.fqs(p, fq, &msg);
+		}
+	} else if (res == QM_MCR_RESULT_PENDING) {
+		rval = 1;
+		fq_set(fq, QMAN_FQ_STATE_CHANGING);
+	} else {
+		rval = -EIO;
+		table_del_fq(p, fq);
+	}
+out:
+	FQUNLOCK(fq);
+	return rval;
+}
+
+int qman_oos_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_retired)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_BLOCKOOS)) ||
+		     (fq->state != qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_OOS);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_OOS);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_oos;
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_fq_flow_control(struct qman_fq *fq, int xon)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+	u8 myverb;
+
+	if ((fq->state == qman_fq_state_oos) ||
+	    (fq->state == qman_fq_state_retired) ||
+		(fq->state == qman_fq_state_parked))
+		return -EINVAL;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue a ALTER_FQXON or ALTER_FQXOFF management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_parked) ||
+			(fq->state == qman_fq_state_oos) ||
+			(fq->state == qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->alterfq.count = 0;
+	myverb = xon ? QM_MCC_VERB_ALTER_FQXON : QM_MCC_VERB_ALTER_FQXOFF;
+
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	*fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(fqd);
+	return 0;
+}
+
+int qman_query_fq_has_pkts(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	int ret = 0;
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		ret = !!mcr->queryfq_np.frm_cnt;
+	return ret;
+}
+
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		*np = mcr->queryfq_np;
+		np->fqd_link = be24_to_cpu(np->fqd_link);
+		np->odp_seq = be16_to_cpu(np->odp_seq);
+		np->orp_nesn = be16_to_cpu(np->orp_nesn);
+		np->orp_ea_hseq  = be16_to_cpu(np->orp_ea_hseq);
+		np->orp_ea_tseq  = be16_to_cpu(np->orp_ea_tseq);
+		np->orp_ea_hptr = be24_to_cpu(np->orp_ea_hptr);
+		np->orp_ea_tptr = be24_to_cpu(np->orp_ea_tptr);
+		np->pfdr_hptr = be24_to_cpu(np->pfdr_hptr);
+		np->pfdr_tptr = be24_to_cpu(np->pfdr_tptr);
+		np->ics_surp = be16_to_cpu(np->ics_surp);
+		np->byte_cnt = be32_to_cpu(np->byte_cnt);
+		np->frm_cnt = be24_to_cpu(np->frm_cnt);
+		np->ra1_sfdr = be16_to_cpu(np->ra1_sfdr);
+		np->ra2_sfdr = be16_to_cpu(np->ra2_sfdr);
+		np->od1_sfdr = be16_to_cpu(np->od1_sfdr);
+		np->od2_sfdr = be16_to_cpu(np->od2_sfdr);
+		np->od3_sfdr = be16_to_cpu(np->od3_sfdr);
+	}
+	if (res == QM_MCR_RESULT_ERR_FQID)
+		return -ERANGE;
+	else if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	return 0;
+}
+
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res, myverb;
+
+	myverb = (query_dedicated) ? QM_MCR_VERB_QUERYWQ_DEDICATED :
+				 QM_MCR_VERB_QUERYWQ;
+	mcc = qm_mc_start(&p->p);
+	mcc->querywq.channel.id = cpu_to_be16(wq->channel.id);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		int i, array_len;
+
+		wq->channel.id = be16_to_cpu(mcr->querywq.channel.id);
+		array_len = ARRAY_SIZE(mcr->querywq.wq_len);
+		for (i = 0; i < array_len; i++)
+			wq->wq_len[i] = be32_to_cpu(mcr->querywq.wq_len[i]);
+	}
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERYWQ failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->cgrtestwrite.cgid = cgr->cgrid;
+	mcc->cgrtestwrite.i_bcnt_hi = (u8)(i_bcnt >> 32);
+	mcc->cgrtestwrite.i_bcnt_lo = (u32)i_bcnt;
+	qm_mc_commit(&p->p, QM_MCC_VERB_CGRTESTWRITE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_CGRTESTWRITE);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*result = mcr->cgrtestwrite;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("CGR TEST WRITE failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->querycgr.cgid = cgr->cgrid;
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCGR);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYCGR);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*cgrd = mcr->querycgr;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CGR failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	cgrd->cgr.wr_parm_g.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_g.word);
+	cgrd->cgr.wr_parm_y.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_y.word);
+	cgrd->cgr.wr_parm_r.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_r.word);
+	cgrd->cgr.cscn_targ =  be32_to_cpu(cgrd->cgr.cscn_targ);
+	cgrd->cgr.__cs_thres = be16_to_cpu(cgrd->cgr.__cs_thres);
+	for (i = 0; i < ARRAY_SIZE(cgrd->cscn_targ_swp); i++)
+		cgrd->cscn_targ_swp[i] =
+			be32_to_cpu(cgrd->cscn_targ_swp[i]);
+	return 0;
+}
+
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion)
+{
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	qm_mc_start(&p->p);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			QM_MCC_VERB_QUERYCONGESTION);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*congestion = mcr->querycongestion;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CONGESTION failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	for (i = 0; i < ARRAY_SIZE(congestion->state.state); i++)
+		congestion->state.state[i] =
+			be32_to_cpu(congestion->state.state[i]);
+	return 0;
+}
+
+int qman_set_vdq(struct qman_fq *fq, u16 num)
+{
+	struct qman_portal *p = get_affine_portal();
+	uint32_t vdqcr;
+	int ret = -EBUSY;
+
+	vdqcr = QM_VDQCR_EXACT;
+	vdqcr |= QM_VDQCR_NUMFRAMES_SET(num);
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired)) {
+		ret = -EINVAL;
+		goto out;
+	}
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+		ret = -EBUSY;
+		goto out;
+	}
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			/* don't leak the FQ lock on this early-exit path */
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (!ret)
+		qm_dqrr_vdqcr_set(&p->p, vdqcr);
+
+out:
+	return ret;
+}
+
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
+			  u32 vdqcr)
+{
+	struct qman_portal *p;
+	int ret = -EBUSY;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired))
+		return -EINVAL;
+	if (vdqcr & QM_VDQCR_FQID_MASK)
+		return -EINVAL;
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+		return -EBUSY;
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	p = get_affine_portal();
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			/* don't leak the FQ lock on this early-exit path */
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (ret)
+		return ret;
+
+	/* VDQCR is set */
+	qm_dqrr_vdqcr_set(&p->p, vdqcr);
+	return 0;
+}
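+
+/*
+ * Illustrative usage (an assumption, not part of this patch): draining a
+ * retired FQ with a volatile dequeue might look like
+ *
+ *	if (!qman_volatile_dequeue(fq, 0, QM_VDQCR_NUMFRAMES_SET(3)))
+ *		qman_poll_dqrr(8);
+ *
+ * with the dequeued frames arriving via fq->cb.dqrr.
+ */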
+
+static noinline void update_eqcr_ci(struct qman_portal *p, u8 avail)
+{
+	if (avail)
+		qm_eqcr_cce_prefetch(&p->p);
+	else
+		qm_eqcr_cce_update(&p->p);
+}
+
+int qman_eqcr_is_empty(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	u8 avail;
+
+	update_eqcr_ci(p, 0);
+	avail = qm_eqcr_get_fill(&p->p);
+	return (avail == 0);
+}
+
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine)
+{
+	if (affine) {
+		struct qman_portal *p = get_affine_portal();
+
+		p->cb_dc_ern = handler;
+	} else
+		cb_dc_ern = handler;
+}
+
+static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
+					struct qman_fq *fq,
+					const struct qm_fd *fd,
+					u32 flags)
+{
+	struct qm_eqcr_entry *eq;
+	u8 avail;
+
+	if (p->use_eqcr_ci_stashing) {
+		/*
+		 * The stashing case is easy, only update if we need to in
+		 * order to try and liberate ring entries.
+		 */
+		eq = qm_eqcr_start_stash(&p->p);
+	} else {
+		/*
+		 * The non-stashing case is harder, need to prefetch ahead of
+		 * time.
+		 */
+		avail = qm_eqcr_get_avail(&p->p);
+		if (avail < 2)
+			update_eqcr_ci(p, avail);
+		eq = qm_eqcr_start_no_stash(&p->p);
+	}
+
+	if (unlikely(!eq))
+		return NULL;
+
+	if (flags & QMAN_ENQUEUE_FLAG_DCA)
+		eq->dca = QM_EQCR_DCA_ENABLE |
+			((flags & QMAN_ENQUEUE_FLAG_DCA_PARK) ?
+					QM_EQCR_DCA_PARK : 0) |
+			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
+	eq->fqid = cpu_to_be32(fq->fqid);
+	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+	eq->fd = *fd;
+	cpu_to_hw_fd(&eq->fd);
+	return eq;
+}
+
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_CMD_ENQUEUE |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+	/* Factor the below out, it's used from qman_enqueue_orp() too */
+	return 0;
+}
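+
+/*
+ * A sketch, not part of this patch: qman_enqueue() returns -EBUSY when no
+ * EQCR entry is available, so callers typically retry, e.g.
+ *
+ *	while (qman_enqueue(fq, &fd, 0) == -EBUSY)
+ *		cpu_relax();
+ */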
+
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_portal *portal = &p->p;
+
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+
+	u8 i, diff, old_ci, sent = 0;
+
+	/* Update the available entries if no entry is free */
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return 0;
+	}
+
+	/* try to send as many frames as possible */
+	while (eqcr->available && frames_to_send--) {
+		eq->fqid = cpu_to_be32(fq->fqid);
+		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+		eq->fd.opaque_addr = fd->opaque_addr;
+		eq->fd.addr = cpu_to_be40(fd->addr);
+		eq->fd.status = cpu_to_be32(fd->status);
+		eq->fd.opaque = cpu_to_be32(fd->opaque);
+
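+		/*
+		 * Each EQCR entry is 64 bytes (1 << 6); the ring mapping is
+		 * naturally aligned, so clearing the bit at QM_EQCR_SIZE << 6
+		 * wraps the cursor back to the start of the ring past the
+		 * last entry.
+		 */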
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		eqcr->available--;
+		sent++;
+		fd++;
+	}
+	lwsync();
+
+	/* Write the verb byte of every entry first; the cachelines are then
+	 * flushed in a single pass below so the flushes complete faster.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		eq->__dont_write_directly__verb =
+			QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
+		prev_eq = eq;
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		if (unlikely((prev_eq + 1) != eq))
+			eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+	}
+
+	/* We need to flush all the lines but without load/store operations
+	 * between them.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		dcbf(eq);
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+	}
+	/* Update cursor for the next call */
+	eqcr->cursor = eq;
+	return sent;
+}
+
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum)
+{
+	struct qman_portal *p  = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Process ORP-specifics here */
+	if (flags & QMAN_ENQUEUE_FLAG_NLIS)
+		orp_seqnum |= QM_EQCR_SEQNUM_NLIS;
+	else {
+		orp_seqnum &= ~QM_EQCR_SEQNUM_NLIS;
+		if (flags & QMAN_ENQUEUE_FLAG_NESN)
+			orp_seqnum |= QM_EQCR_SEQNUM_NESN;
+		else
+			/* No need to check for QMAN_ENQUEUE_FLAG_HOLE */
+			orp_seqnum &= ~QM_EQCR_SEQNUM_NESN;
+	}
+	eq->seqnum = cpu_to_be16(orp_seqnum);
+	eq->orp = cpu_to_be32(orp->fqid);
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_ORP |
+		((flags & (QMAN_ENQUEUE_FLAG_HOLE | QMAN_ENQUEUE_FLAG_NESN)) ?
+				0 : QM_EQCR_VERB_CMD_ENQUEUE) |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+
+	return 0;
+}
+
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+	u8 verb = QM_MCC_VERB_MODIFYCGR;
+
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initcgr = *opts;
+	mcc->initcgr.we_mask = cpu_to_be16(mcc->initcgr.we_mask);
+	mcc->initcgr.cgr.wr_parm_g.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_g.word);
+	mcc->initcgr.cgr.wr_parm_y.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_y.word);
+	mcc->initcgr.cgr.wr_parm_r.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_r.word);
+	mcc->initcgr.cgr.cscn_targ =  cpu_to_be32(mcc->initcgr.cgr.cscn_targ);
+	mcc->initcgr.cgr.__cs_thres = cpu_to_be16(mcc->initcgr.cgr.__cs_thres);
+
+	mcc->initcgr.cgid = cgr->cgrid;
+	if (flags & QMAN_CGR_FLAG_USE_INIT)
+		verb = QM_MCC_VERB_INITCGR;
+	qm_mc_commit(&p->p, verb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == verb);
+	res = mcr->result;
+	return (res == QM_MCR_RESULT_OK) ? 0 : -EIO;
+}
+
+#define TARG_MASK(n) (0x80000000 >> (n->config->channel - \
+					QM_CHANNEL_SWPORTAL0))
+#define TARG_DCP_MASK(n) (0x80000000 >> (10 + n))
+#define PORTAL_IDX(n) (n->config->channel - QM_CHANNEL_SWPORTAL0)
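+/*
+ * Layout implied by the masks above (an inference, not spelled out in this
+ * patch): CSCN_TARG is a bitmask of notification targets in which bit 31 is
+ * software portal 0, and the DCP portal bits follow the first 10
+ * software-portal bits.
+ */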
+
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret;
+	struct qman_portal *p;
+
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	p = get_affine_portal();
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	cgr->chan = p->config->channel;
+	spin_lock(&p->cgr_lock);
+
+	/* if no opts specified, just add it to the list */
+	if (!opts)
+		goto add_list;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		goto release_lock;
+	local_opts = *opts;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+			QM_CGR_TARG_UDP_CTRL_WRITE_BIT | PORTAL_IDX(p);
+	else
+		/* Overwrite TARG */
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+							TARG_MASK(p);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT, &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		goto release_lock;
+add_list:
+	list_add(&cgr->node, &p->cgr_cbs);
+
+	/* Determine if newly added object requires its callback to be called */
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret) {
+		/* we can't go back, so proceed and return success, but
+		 * scream and wail to the log file.
+		 */
+		pr_crit("CGR HW state partially modified\n");
+		ret = 0;
+		goto release_lock;
+	}
+	if (cgr->cb && cgr_state.cgr.cscn_en && qman_cgrs_get(&p->cgrs[1],
+							      cgr->cgrid))
+		cgr->cb(p, cgr, 1);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+	return ret;
+}
+
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcc_initcgr local_opts;
+	struct qm_mcr_querycgr cgr_state;
+	int ret;
+
+	if ((qman_ip_rev & 0xFF00) < QMAN_REV30) {
+		pr_warn("QMan version doesn't support CSCN => DCP portal\n");
+		return -EINVAL;
+	}
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		return ret;
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	if (opts)
+		local_opts = *opts;
+
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+				QM_CGR_TARG_UDP_CTRL_WRITE_BIT |
+				QM_CGR_TARG_UDP_CTRL_DCP | dcp_portal;
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+					TARG_DCP_MASK(dcp_portal);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT,
+				      &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+
+	return ret;
+}
+
+int qman_delete_cgr(struct qman_cgr *cgr)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret = 0;
+	struct qman_cgr *i;
+	struct qman_portal *p = get_affine_portal();
+
+	if (cgr->chan != p->config->channel) {
+		pr_crit("Attempting to delete cgr from a different portal than"
+			" it was created on: create 0x%x, delete 0x%x\n",
+			cgr->chan, p->config->channel);
+		ret = -EINVAL;
+		goto put_portal;
+	}
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	spin_lock(&p->cgr_lock);
+	list_del(&cgr->node);
+	/*
+	 * If there are no other CGR objects for this CGRID in the list,
+	 * update CSCN_TARG accordingly
+	 */
+	list_for_each_entry(i, &p->cgr_cbs, node)
+		if ((i->cgrid == cgr->cgrid) && i->cb)
+			goto release_lock;
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)  {
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+		goto release_lock;
+	}
+	/* Overwrite TARG */
+	local_opts.we_mask = QM_CGR_WE_CSCN_TARG;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl = PORTAL_IDX(p);
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ &
+							 ~(TARG_MASK(p));
+	ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+put_portal:
+	return ret;
+}
+
+int qman_shutdown_fq(u32 fqid)
+{
+	struct qman_portal *p;
+	struct qm_portal *low_p;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	u8 state;
+	int orl_empty, fq_empty, drain = 0;
+	u32 result;
+	u32 channel, wq;
+	u16 dest_wq;
+
+	p = get_affine_portal();
+	low_p = &p->p;
+
+	/* Determine the state of the FQID */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	state = mcr->queryfq_np.state & QM_MCR_NP_STATE_MASK;
+	if (state == QM_MCR_NP_STATE_OOS)
+		return 0; /* Already OOS, no need to do any more checks */
+
+	/* Query which channel the FQ is using */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+
+	/* Need to store these since the MCR gets reused */
+	dest_wq = be16_to_cpu(mcr->queryfq.fqd.dest_wq);
+	channel = dest_wq >> 3;	/* channel lives in the upper 13 bits */
+	wq = dest_wq & 0x7;	/* work queue in the lower 3 bits */
+
+	switch (state) {
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+	case QM_MCR_NP_STATE_PARKED:
+		orl_empty = 0;
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_RETIRE);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_RETIRE);
+		result = mcr->result; /* Make a copy as we reuse MCR below */
+
+		if (result == QM_MCR_RESULT_PENDING) {
+			/* Need to wait for the FQRN in the message ring, which
+			 * will only occur once the FQ has been drained.  In
+			 * order for the FQ to drain, the portal needs to be
+			 * set to dequeue from the channel the FQ is scheduled
+			 * on.
+			 */
+			const struct qm_mr_entry *msg;
+			const struct qm_dqrr_entry *dqrr = NULL;
+			int found_fqrn = 0;
+			__maybe_unused u16 dequeue_wq = 0;
+
+			/* Flag that we need to drain FQ */
+			drain = 1;
+
+			if (channel >= qm_channel_pool1 &&
+			    channel < (u16)(qm_channel_pool1 + 15)) {
+				/* Pool channel, enable the bit in the portal */
+				dequeue_wq = (channel -
+					      qm_channel_pool1 + 1) << 4 | wq;
+			} else if (channel < qm_channel_pool1) {
+				/* Dedicated channel */
+				dequeue_wq = wq;
+			} else {
+				pr_info("Cannot recover FQ 0x%x,"
+					" it is scheduled on channel 0x%x",
+					fqid, channel);
+				return -EBUSY;
+			}
+			/* Set the sdqcr to drain this channel */
+			if (channel < qm_channel_pool1)
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+					  QM_SDQCR_CHANNELS_DEDICATED);
+			else
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+						  QM_SDQCR_CHANNELS_POOL_CONV
+						  (channel));
+			while (!found_fqrn) {
+				/* Keep draining DQRR while checking the MR */
+				qm_dqrr_pvb_update(low_p);
+				dqrr = qm_dqrr_current(low_p);
+				while (dqrr) {
+					qm_dqrr_cdc_consume_1ptr(
+						low_p, dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+				/* Process message ring too */
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+				while (msg) {
+					if ((msg->verb &
+					     QM_MR_VERB_TYPE_MASK)
+					    == QM_MR_VERB_FQRN)
+						found_fqrn = 1;
+					qm_mr_next(low_p);
+					qm_mr_cci_consume_to_current(low_p);
+					qm_mr_pvb_update(low_p);
+					msg = qm_mr_current(low_p);
+				}
+				cpu_relax();
+			}
+		}
+		if (result != QM_MCR_RESULT_OK &&
+		    result != QM_MCR_RESULT_PENDING) {
+			/* error */
+			pr_err("qman_retire_fq failed on FQ 0x%x,"
+			       " result=0x%x\n", fqid, result);
+			return -1;
+		}
+		if (!(mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)) {
+			/* ORL had no entries, no need to wait until the
+			 * ERNs come in.
+			 */
+			orl_empty = 1;
+		}
+		/* Retirement succeeded, check to see if FQ needs
+		 * to be drained.
+		 */
+		if (drain || mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY) {
+			/* FQ is Not Empty, drain using volatile DQ commands */
+			fq_empty = 0;
+			do {
+				const struct qm_dqrr_entry *dqrr = NULL;
+				u32 vdqcr = fqid | QM_VDQCR_NUMFRAMES_SET(3);
+
+				qm_dqrr_vdqcr_set(low_p, vdqcr);
+
+				/* Wait for a dequeue to occur */
+				while (dqrr == NULL) {
+					qm_dqrr_pvb_update(low_p);
+					dqrr = qm_dqrr_current(low_p);
+					if (!dqrr)
+						cpu_relax();
+				}
+				/* Process the dequeues, making sure to
+				 * empty the ring completely.
+				 */
+				while (dqrr) {
+					if (dqrr->fqid == fqid &&
+					    dqrr->stat & QM_DQRR_STAT_FQ_EMPTY)
+						fq_empty = 1;
+					qm_dqrr_cdc_consume_1ptr(low_p,
+								 dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+			} while (fq_empty == 0);
+		}
+		qm_dqrr_sdqcr_set(low_p, 0);
+
+		/* Wait for the ORL to have been completely drained */
+		while (orl_empty == 0) {
+			const struct qm_mr_entry *msg;
+
+			qm_mr_pvb_update(low_p);
+			msg = qm_mr_current(low_p);
+			while (msg) {
+				if ((msg->verb & QM_MR_VERB_TYPE_MASK) ==
+				    QM_MR_VERB_FQRL)
+					orl_empty = 1;
+				qm_mr_next(low_p);
+				qm_mr_cci_consume_to_current(low_p);
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+			}
+			cpu_relax();
+		}
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result != QM_MCR_RESULT_OK) {
+			pr_err(
+			"OOS after drain Failed on FQID 0x%x, result 0x%x\n",
+			       fqid, mcr->result);
+			return -1;
+		}
+		return 0;
+
+	case QM_MCR_NP_STATE_RETIRED:
+		/* Send OOS Command */
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result) {
+			pr_err("OOS Failed on FQID 0x%x\n", fqid);
+			return -1;
+		}
+		return 0;
+
+	}
+	return -1;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h
new file mode 100644
index 0000000..ee78d31
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.h
@@ -0,0 +1,888 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman_priv.h"
+
+/***************************/
+/* Portal register assists */
+/***************************/
+#define QM_REG_EQCR_PI_CINH	0x3000
+#define QM_REG_EQCR_CI_CINH	0x3040
+#define QM_REG_EQCR_ITR		0x3080
+#define QM_REG_DQRR_PI_CINH	0x3100
+#define QM_REG_DQRR_CI_CINH	0x3140
+#define QM_REG_DQRR_ITR		0x3180
+#define QM_REG_DQRR_DCAP	0x31C0
+#define QM_REG_DQRR_SDQCR	0x3200
+#define QM_REG_DQRR_VDQCR	0x3240
+#define QM_REG_DQRR_PDQCR	0x3280
+#define QM_REG_MR_PI_CINH	0x3300
+#define QM_REG_MR_CI_CINH	0x3340
+#define QM_REG_MR_ITR		0x3380
+#define QM_REG_CFG		0x3500
+#define QM_REG_ISR		0x3600
+#define QM_REG_IIR              0x36C0
+#define QM_REG_ITPR		0x3740
+
+/* Cache-enabled register offsets */
+#define QM_CL_EQCR		0x0000
+#define QM_CL_DQRR		0x1000
+#define QM_CL_MR		0x2000
+#define QM_CL_EQCR_PI_CENA	0x3000
+#define QM_CL_EQCR_CI_CENA	0x3040
+#define QM_CL_DQRR_PI_CENA	0x3100
+#define QM_CL_DQRR_CI_CENA	0x3140
+#define QM_CL_MR_PI_CENA	0x3300
+#define QM_CL_MR_CI_CENA	0x3340
+#define QM_CL_CR		0x3800
+#define QM_CL_RR0		0x3900
+#define QM_CL_RR1		0x3940
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrades performance. Hence the use of the __raw_*() interfaces,
+ * which
+ * simply ensure that the compiler treats the portal registers as volatile (ie.
+ * non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __qm_in(qm, o)		be32_to_cpu(__raw_readl((qm)->ci  + (o)))
+#define __qm_out(qm, o, val)	__raw_writel((cpu_to_be32(val)), \
+					     (qm)->ci + (o))
+#define qm_in(reg)		__qm_in(&portal->addr, QM_REG_##reg)
+#define qm_out(reg, val)	__qm_out(&portal->addr, QM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __qm_cl_touch_ro(qm, o) dcbt_ro((qm)->ce + (o))
+#define __qm_cl_touch_rw(qm, o) dcbt_rw((qm)->ce + (o))
+#define __qm_cl_in(qm, o)	be32_to_cpu(__raw_readl((qm)->ce + (o)))
+#define __qm_cl_out(qm, o, val) \
+	do { \
+		u32 *__tmpclout = (qm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __qm_cl_invalidate(qm, o) dccivac((qm)->ce + (o))
+#define qm_cl_touch_ro(reg) __qm_cl_touch_ro(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_touch_rw(reg) __qm_cl_touch_rw(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_in(reg)	    __qm_cl_in(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_out(reg, val) __qm_cl_out(&portal->addr, QM_CL_##reg##_CENA, val)
+#define qm_cl_invalidate(reg)\
+	__qm_cl_invalidate(&portal->addr, QM_CL_##reg##_CENA)
+
+/* Cache-enabled ring access */
+#define qm_cl(base, idx)	((void *)base + ((idx) << 6))
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 qm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
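+
+/* A worked example of the helper above (illustrative only): with ringsize 8,
+ * first = 6 and last = 2, the difference wraps the ring:
+ *
+ *	qm_cyc_diff(8, 6, 2) == 8 + 2 - 6 == 4
+ *
+ * i.e. entries 6, 7, 0 and 1 lie between the two indices, with 'first'
+ * counted and 'last' excluded.
+ */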
+
+/* Portal modes.
+ *   Enum types;
+ *     pmode == production mode
+ *     cmode == consumption mode,
+ *     dmode == h/w dequeue mode.
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate;
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ *     dc == DCA (Discrete Consumption Acknowledgment), DQRR-only
+ *   As for "enum qm_dqrr_dmode", it should be self-explanatory.
+ */
+enum qm_eqcr_pmode {		/* matches QCSP_CFG::EPM */
+	qm_eqcr_pci = 0,	/* PI index, cache-inhibited */
+	qm_eqcr_pce = 1,	/* PI index, cache-enabled */
+	qm_eqcr_pvb = 2		/* valid-bit */
+};
+
+enum qm_dqrr_dmode {		/* matches QCSP_CFG::DP */
+	qm_dqrr_dpush = 0,	/* SDQCR  + VDQCR */
+	qm_dqrr_dpull = 1	/* PDQCR */
+};
+
+enum qm_dqrr_pmode {		/* s/w-only */
+	qm_dqrr_pci,		/* reads DQRR_PI_CINH */
+	qm_dqrr_pce,		/* reads DQRR_PI_CENA */
+	qm_dqrr_pvb		/* reads valid-bit */
+};
+
+enum qm_dqrr_cmode {		/* matches QCSP_CFG::DCM */
+	qm_dqrr_cci = 0,	/* CI index, cache-inhibited */
+	qm_dqrr_cce = 1,	/* CI index, cache-enabled */
+	qm_dqrr_cdc = 2		/* Discrete Consumption Acknowledgment */
+};
+
+enum qm_mr_pmode {		/* s/w-only */
+	qm_mr_pci,		/* reads MR_PI_CINH */
+	qm_mr_pce,		/* reads MR_PI_CENA */
+	qm_mr_pvb		/* reads valid-bit */
+};
+
+enum qm_mr_cmode {		/* matches QCSP_CFG::MM */
+	qm_mr_cci = 0,		/* CI index, cache-inhibited */
+	qm_mr_cce = 1		/* CI index, cache-enabled */
+};
+
+/* ------------------------- */
+/* --- Portal structures --- */
+
+#define QM_EQCR_SIZE		8
+#define QM_DQRR_SIZE		16
+#define QM_MR_SIZE		8
+
+struct qm_eqcr {
+	struct qm_eqcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	u32 busy;
+	enum qm_eqcr_pmode pmode;
+#endif
+};
+
+struct qm_dqrr {
+	const struct qm_dqrr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum qm_dqrr_dmode dmode;
+	enum qm_dqrr_pmode pmode;
+	enum qm_dqrr_cmode cmode;
+#endif
+};
+
+struct qm_mr {
+	const struct qm_mr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum qm_mr_pmode pmode;
+	enum qm_mr_cmode cmode;
+#endif
+};
+
+struct qm_mc {
+	struct qm_mc_command *cr;
+	struct qm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum {
+		/* Can be _mc_start()ed */
+		qman_mc_idle,
+		/* Can be _mc_commit()ed or _mc_abort()ed */
+		qman_mc_user,
+		/* Can only be _mc_retry()ed */
+		qman_mc_hw
+	} state;
+#endif
+};
+
+#define QM_PORTAL_ALIGNMENT ____cacheline_aligned
+
+struct qm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct qm_portal {
+	struct qm_addr addr;
+	struct qm_eqcr eqcr;
+	struct qm_dqrr dqrr;
+	struct qm_mr mr;
+	struct qm_mc mc;
+} QM_PORTAL_ALIGNMENT;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define EQCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_EQCR_SIZE << 6)))
+
+extern dma_addr_t rte_mem_virt2phy(const void *addr);
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 EQCR_PTR2IDX(struct qm_eqcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_EQCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void EQCR_INC(struct qm_eqcr *eqcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates fast
+	 * code with essentially no branching overheads. We increment to the
+	 * next EQCR pointer and handle overflow and 'vbit'.
+	 */
+	struct qm_eqcr_entry *partial = eqcr->cursor + 1;
+
+	eqcr->cursor = EQCR_CARRYCLEAR(partial);
+	if (partial != eqcr->cursor)
+		eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_no_stash(struct qm_portal
+								 *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available)
+		return NULL;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 1;
+#endif
+
+	return eqcr->cursor;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_stash(struct qm_portal
+								*portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return NULL;
+	}
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 1;
+#endif
+	return eqcr->cursor;
+}
+
+static inline void qm_eqcr_abort(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_pend_and_next(
+					struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+	DPAA_ASSERT(eqcr->pmode != qm_eqcr_pvb);
+	if (eqcr->available == 1)
+		return NULL;
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcr->cursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	return eqcr->cursor;
+}
+
+#define EQCR_COMMIT_CHECKS(eqcr) \
+do { \
+	DPAA_ASSERT(eqcr->busy); \
+	DPAA_ASSERT(eqcr->cursor->orp == (eqcr->cursor->orp & 0x00ffffff)); \
+	DPAA_ASSERT(eqcr->cursor->fqid == (eqcr->cursor->fqid & 0x00ffffff)); \
+} while (0)
+
+static inline void qm_eqcr_pci_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pci);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	hwsync();
+	qm_out(EQCR_PI_CINH, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	qm_cl_invalidate(EQCR_PI);
+	qm_cl_touch_rw(EQCR_PI);
+}
+
+static inline void qm_eqcr_pce_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	lwsync();
+	qm_cl_out(EQCR_PI, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pvb_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eqcursor;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pvb);
+	lwsync();
+	eqcursor = eqcr->cursor;
+	eqcursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline u8 qm_eqcr_cci_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline void qm_eqcr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	qm_cl_touch_ro(EQCR_CI);
+}
+
+static inline u8 qm_eqcr_cce_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline u8 qm_eqcr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->ithresh;
+}
+
+static inline void qm_eqcr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	eqcr->ithresh = ithresh;
+	qm_out(EQCR_ITR, ithresh);
+}
+
+static inline u8 qm_eqcr_get_avail(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->available;
+}
+
+static inline u8 qm_eqcr_get_fill(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return QM_EQCR_SIZE - 1 - eqcr->available;
+}
+
+#define DQRR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_DQRR_SIZE << 6)))
+
+static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);
+}
+
+static inline const struct qm_dqrr_entry *DQRR_INC(
+						const struct qm_dqrr_entry *e)
+{
+	return DQRR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_dqrr_set_maxfill(struct qm_portal *portal, u8 mf)
+{
+	qm_out(CFG, (qm_in(CFG) & 0xff0fffff) |
+		((mf & (QM_DQRR_SIZE - 1)) << 20));
+}
+
+static inline const struct qm_dqrr_entry *qm_dqrr_current(
+						struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	if (!dqrr->fill)
+		return NULL;
+	return dqrr->cursor;
+}
+
+static inline u8 qm_dqrr_cursor(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return DQRR_PTR2IDX(dqrr->cursor);
+}
+
+static inline u8 qm_dqrr_next(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->fill);
+	dqrr->cursor = DQRR_INC(dqrr->cursor);
+	return --dqrr->fill;
+}
+
+static inline u8 qm_dqrr_pci_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pci);
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	qm_cl_invalidate(DQRR_PI);
+	qm_cl_touch_ro(DQRR_PI);
+}
+
+static inline u8 qm_dqrr_pce_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	dqrr->pi = qm_cl_in(DQRR_PI) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	const struct qm_dqrr_entry *res = qm_cl(dqrr->ring, dqrr->pi);
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
+		dqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1);
+		if (!dqrr->pi)
+			dqrr->vbit ^= QM_DQRR_VERB_VBIT;
+		dqrr->fill++;
+	}
+}
+
+static inline void qm_dqrr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_rw(DQRR_CI);
+}
+
+static inline void qm_dqrr_cce_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cdc_consume_1(struct qm_portal *portal, u8 idx,
+					 int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |	/* S */
+		((park ? 1 : 0) << 6) |	/* PK */
+		idx);			/* DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_1ptr(struct qm_portal *portal,
+					    const struct qm_dqrr_entry *dq,
+					int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 idx = DQRR_PTR2IDX(dq);
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* DQRR_DCAP::S */
+		((park ? 1 : 0) << 6) |		/* DQRR_DCAP::PK */
+		idx);				/* DQRR_DCAP::DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_n(struct qm_portal *portal, u16 bitmask)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (1 << 8) |		/* DQRR_DCAP::S */
+		((u32)bitmask << 16));		/* DQRR_DCAP::DCAP_CI */
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+}
+
+static inline u8 qm_dqrr_cdc_cci(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+}
+
+static inline void qm_dqrr_cdc_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_ro(DQRR_CI);
+}
+
+static inline u8 qm_dqrr_cdc_cce(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_cl_in(DQRR_CI) & (QM_DQRR_SIZE - 1);
+}
+
+static inline u8 qm_dqrr_get_ci(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	return dqrr->ci;
+}
+
+static inline void qm_dqrr_park(struct qm_portal *portal, u8 idx)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		(idx & (QM_DQRR_SIZE - 1)));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_park_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		DQRR_PTR2IDX(dqrr->cursor));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_sdqcr_set(struct qm_portal *portal, u32 sdqcr)
+{
+	qm_out(DQRR_SDQCR, sdqcr);
+}
+
+static inline u32 qm_dqrr_sdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_SDQCR);
+}
+
+static inline void qm_dqrr_vdqcr_set(struct qm_portal *portal, u32 vdqcr)
+{
+	qm_out(DQRR_VDQCR, vdqcr);
+}
+
+static inline u32 qm_dqrr_vdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_VDQCR);
+}
+
+static inline u8 qm_dqrr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return dqrr->ithresh;
+}
+
+static inline void qm_dqrr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(DQRR_ITR, ithresh);
+}
+
+static inline u8 qm_dqrr_get_maxfill(struct qm_portal *portal)
+{
+	return (qm_in(CFG) & 0x00f00000) >> 20;
+}
+
+/* -------------- */
+/* --- MR API --- */
+
+#define MR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_MR_SIZE << 6)))
+
+static inline u8 MR_PTR2IDX(const struct qm_mr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_MR_SIZE - 1);
+}
+
+static inline const struct qm_mr_entry *MR_INC(const struct qm_mr_entry *e)
+{
+	return MR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_mr_finish(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (mr->ci != MR_PTR2IDX(mr->cursor))
+		pr_crit("Ignoring completed MR entries\n");
+}
+
+static inline const struct qm_mr_entry *qm_mr_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (!mr->fill)
+		return NULL;
+	return mr->cursor;
+}
+
+static inline u8 qm_mr_next(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->fill);
+	mr->cursor = MR_INC(mr->cursor);
+	return --mr->fill;
+}
+
+static inline void qm_mr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = (mr->ci + num) & (QM_MR_SIZE - 1);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = MR_PTR2IDX(mr->cursor);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(MR_ITR, ithresh);
+}
+
+/* ------------------------------ */
+/* --- Management command API --- */
+static inline int qm_mc_init(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + QM_CL_CR;
+	mc->rr = portal->addr.ce + QM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			QM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? QM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_idle;
+#endif
+	return 0;
+}
+
+static inline void qm_mc_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (mc->state != qman_mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct qm_mc_command *qm_mc_start(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void qm_mc_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_hw;
+#endif
+}
+
+static inline struct qm_mc_result *qm_mc_result(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= QM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_idle;
+#endif
+	return rr;
+}
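+
+/* The three helpers above implement the management-command (MC) handshake.
+ * A minimal sketch of the canonical caller sequence, mirroring its use in
+ * qman_shutdown_fq() ('portal' and 'fqid' assumed to be in scope):
+ *
+ *	struct qm_mc_command *mcc = qm_mc_start(portal);
+ *	struct qm_mc_result *mcr;
+ *
+ *	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+ *	qm_mc_commit(portal, QM_MCC_VERB_QUERYFQ_NP);
+ *	while (!(mcr = qm_mc_result(portal)))
+ *		cpu_relax();
+ *
+ * Note that the MCR slot is reused by the next command, so any response
+ * fields needed later must be copied out before issuing another command.
+ */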
+
+/* Portal interrupt register API */
+static inline void qm_isr_set_iperiod(struct qm_portal *portal, u16 iperiod)
+{
+	qm_out(ITPR, iperiod);
+}
+
+static inline u32 __qm_isr_read(struct qm_portal *portal, enum qm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 6));
+#else
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __qm_isr_write(struct qm_portal *portal, enum qm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 6), val);
+#else
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 2), val);
+#endif
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 80dde20..90fb130 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -66,6 +66,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_qman_portal_init(uint32_t index, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct qman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -116,6 +117,14 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 	pcfg.node = NULL;
 	pcfg.irq = fd;
 
+	portal = qman_create_affine_portal(&pcfg, NULL);
+	if (!portal) {
+		pr_err("Qman portal initialisation failed (%d)\n",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	irq_map.type = dpaa_portal_qman;
 	irq_map.portal_cinh = map.addr.cinh;
 	process_portal_irq_map(fd, &irq_map);
@@ -124,10 +133,13 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 
 static int fsl_qman_portal_finish(void)
 {
+	__maybe_unused const struct qm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = qman_destroy_affine_portal();
+	DPAA_BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 784fe60..85ae13b 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1246,6 +1246,761 @@ struct qman_cgr {
 	struct list_head node;
 };
 
+/* Flags to qman_create_fq() */
+#define QMAN_FQ_FLAG_NO_ENQUEUE      0x00000001 /* can't enqueue */
+#define QMAN_FQ_FLAG_NO_MODIFY       0x00000002 /* can only enqueue */
+#define QMAN_FQ_FLAG_TO_DCPORTAL     0x00000004 /* consumed by CAAM/PME/Fman */
+#define QMAN_FQ_FLAG_LOCKED          0x00000008 /* multi-core locking */
+#define QMAN_FQ_FLAG_AS_IS           0x00000010 /* query h/w state */
+#define QMAN_FQ_FLAG_DYNAMIC_FQID    0x00000020 /* (de)allocate fqid */
+
+/* Flags to qman_destroy_fq() */
+#define QMAN_FQ_DESTROY_PARKED       0x00000001 /* FQ can be parked or OOS */
+
+/* Flags from qman_fq_state() */
+#define QMAN_FQ_STATE_CHANGING       0x80000000 /* 'state' is changing */
+#define QMAN_FQ_STATE_NE             0x40000000 /* retired FQ isn't empty */
+#define QMAN_FQ_STATE_ORL            0x20000000 /* retired FQ has ORL */
+#define QMAN_FQ_STATE_BLOCKOOS       0xe0000000 /* if any are set, no OOS */
+#define QMAN_FQ_STATE_CGR_EN         0x10000000 /* CGR enabled */
+#define QMAN_FQ_STATE_VDQCR          0x08000000 /* being volatile dequeued */
+
+/* Flags to qman_init_fq() */
+#define QMAN_INITFQ_FLAG_SCHED       0x00000001 /* schedule rather than park */
+#define QMAN_INITFQ_FLAG_LOCAL       0x00000004 /* set dest portal */
+
+/* Flags to qman_enqueue(). NB, the strange numbering is to align with hardware,
+ * bit-wise. (NB: the PME API is sensitive to these precise numberings too, so
+ * any change here should be audited in PME.)
+ */
+#define QMAN_ENQUEUE_FLAG_WATCH_CGR  0x00080000 /* watch congestion state */
+#define QMAN_ENQUEUE_FLAG_DCA        0x00008000 /* perform enqueue-DCA */
+#define QMAN_ENQUEUE_FLAG_DCA_PARK   0x00004000 /* If DCA, requests park */
+#define QMAN_ENQUEUE_FLAG_DCA_PTR(p)		/* If DCA, p is DQRR entry */ \
+		(((u32)(p) << 2) & 0x00000f00)
+#define QMAN_ENQUEUE_FLAG_C_GREEN    0x00000000 /* choose one C_*** flag */
+#define QMAN_ENQUEUE_FLAG_C_YELLOW   0x00000008
+#define QMAN_ENQUEUE_FLAG_C_RED      0x00000010
+#define QMAN_ENQUEUE_FLAG_C_OVERRIDE 0x00000018
+/* For the ORP-specific qman_enqueue_orp() variant;
+ * - this flag indicates "Not Last In Sequence", ie. all but the final fragment
+ *   of a frame.
+ */
+#define QMAN_ENQUEUE_FLAG_NLIS       0x01000000
+/* - this flag performs no enqueue but fills in an ORP sequence number that
+ *   would otherwise block it (eg. if a frame has been dropped).
+ */
+#define QMAN_ENQUEUE_FLAG_HOLE       0x02000000
+/* - this flag performs no enqueue but advances NESN to the given sequence
+ *   number.
+ */
+#define QMAN_ENQUEUE_FLAG_NESN       0x04000000
+
+/* Flags to qman_modify_cgr() */
+#define QMAN_CGR_FLAG_USE_INIT       0x00000001
+#define QMAN_CGR_MODE_FRAME          0x00000001
+
+/**
+ * qman_get_portal_index - get portal configuration index
+ */
+int qman_get_portal_index(void);
+
+/**
+ * qman_affine_channel - return the channel ID of a portal
+ * @cpu: the cpu whose affine portal is the subject of the query
+ *
+ * If @cpu is -1, the affine portal for the current CPU will be used. It is a
+ * bug to call this function for any value of @cpu (other than -1) that is not a
+ * member of the cpu mask.
+ */
+u16 qman_affine_channel(int cpu);
+
+/**
+ * qman_set_vdq - Issue a volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @num: Number of Frames requested for volatile dequeue
+ *
+ * This function will issue a volatile dequeue command to the QMAN.
+ */
+int qman_set_vdq(struct qman_fq *fq, u16 num);
+
+/**
+ * qman_dequeue - Get the DQRR entry after volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ *
+ * This function will return the DQRR entry after a volatile dequeue command
+ * is issued. It returns NULL once there are no more packets available on
+ * the DQRR.
+ */
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
+
+/**
+ * qman_dqrr_consume - Consume the DQRR entry after volatile dequeue
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @dq: DQRR entry to consume. This is the one which is provided by the
+ *    'qman_dequeue' command.
+ *
+ * This will consume the DQRR entry and make it available for the next
+ * volatile dequeue.
+ */
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq);
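+
+/* A sketch tying the three calls above together (illustrative; assumes
+ * qman_set_vdq() returns zero on success and 'fq' is a previously created
+ * frame queue object):
+ *
+ *	struct qm_dqrr_entry *dq;
+ *
+ *	if (!qman_set_vdq(fq, 4)) {
+ *		while ((dq = qman_dequeue(fq)) != NULL)
+ *			qman_dqrr_consume(fq, dq);
+ *	}
+ */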
+
+/**
+ * qman_poll_dqrr - process DQRR (fast-path) entries
+ * @limit: the maximum number of DQRR entries to process
+ *
+ * Use of this function requires that DQRR processing not be interrupt-driven.
+ * Ie. the value returned by qman_irqsource_get() should not include
+ * QM_PIRQ_DQRI. If the current CPU is sharing a portal hosted on another CPU,
+ * this function will return -EINVAL, otherwise the return value is >=0 and
+ * represents the number of DQRR entries processed.
+ */
+int qman_poll_dqrr(unsigned int limit);
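+
+/* Sketch of a poll-mode receive loop built on qman_poll_dqrr() (illustrative
+ * only; the per-frame work happens in the caller's DQRR callbacks):
+ *
+ *	for (;;) {
+ *		int n = qman_poll_dqrr(16);
+ *
+ *		if (n < 0)
+ *			break;
+ *	}
+ *
+ * A negative return means the current CPU is sharing a portal hosted on
+ * another CPU, per the description above.
+ */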
+
+/**
+ * qman_poll
+ *
+ * Dispatcher logic on a cpu can use this to trigger any maintenance of the
+ * affine portal. There are two classes of portal processing in question;
+ * fast-path (which involves demuxing dequeue ring (DQRR) entries and tracking
+ * enqueue ring (EQCR) consumption), and slow-path (which involves EQCR
+ * thresholds, congestion state changes, etc). This function does whatever
+ * processing is not triggered by interrupts.
+ *
+ * Note, if DQRR and some slow-path processing are poll-driven (rather than
+ * interrupt-driven) then this function uses a heuristic to determine how often
+ * to run slow-path processing - as slow-path processing introduces at least a
+ * minimum latency each time it is run, whereas fast-path (DQRR) processing is
+ * close to zero-cost if there is no work to be done.
+ */
+void qman_poll(void);
+
+/**
+ * qman_stop_dequeues - Stop h/w dequeuing to the s/w portal
+ *
+ * Disables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_stop_dequeues(void);
+
+/**
+ * qman_start_dequeues - (Re)start h/w dequeuing to the s/w portal
+ *
+ * Enables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_start_dequeues(void);
+
+/**
+ * qman_static_dequeue_add - Add pool channels to the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Adds a set of pool channels to the portal's static dequeue command register
+ * (SDQCR). The requested pools are limited to those the portal has dequeue
+ * access to.
+ */
+void qman_static_dequeue_add(u32 pools);
+
+/**
+ * qman_static_dequeue_del - Remove pool channels from the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Removes a set of pool channels from the portal's static dequeue command
+ * register (SDQCR). The requested pools are limited to those the portal has
+ * dequeue access to.
+ */
+void qman_static_dequeue_del(u32 pools);
+
+/**
+ * qman_static_dequeue_get - return the portal's current SDQCR
+ *
+ * Returns the portal's current static dequeue command register (SDQCR). The
+ * entire register is returned, so if only the currently-enabled pool channels
+ * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
+ */
+u32 qman_static_dequeue_get(void);
+
+/**
+ * qman_dca - Perform a Discrete Consumption Acknowledgment
+ * @dq: the DQRR entry to be consumed
+ * @park_request: indicates whether the held-active @fq should be parked
+ *
+ * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had
+ * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this
+ * does not take a 'portal' argument but implies the core affine portal from the
+ * cpu that is currently executing the function. For reasons of locking, this
+ * function must be called from the same CPU as that which processed the DQRR
+ * entry in the first place.
+ */
+void qman_dca(struct qm_dqrr_entry *dq, int park_request);
+
+/**
+ * qman_eqcr_is_empty - Determine if portal's EQCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * enqueues for the local portal have been processed by Qman but can't use the
+ * QMAN_ENQUEUE_FLAG_WAIT_SYNC flag to do this from the final qman_enqueue().
+ * The function forces tracking of EQCR consumption (which normally doesn't
+ * happen until enqueue processing needs to find space to put new enqueue
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int qman_eqcr_is_empty(void);
+
+/**
+ * qman_set_dc_ern - Set the handler for DCP enqueue rejection notifications
+ * @handler: callback for processing DCP ERNs
+ * @affine: whether this handler is specific to the locally affine portal
+ *
+ * If a hardware block's interface to Qman (ie. its direct-connect portal, or
+ * DCP) is configured not to receive enqueue rejections, then any enqueues
+ * through that DCP that are rejected will be sent to a given software portal.
+ * If @affine is non-zero, then this handler will only be used for DCP ERNs
+ * received on the portal affine to the current CPU. If multiple CPUs share a
+ * portal and they all call this function, they will be setting the handler for
+ * the same portal! If @affine is zero, then this handler will be global to all
+ * portals handled by this instance of the driver. Only those portals that do
+ * not have their own affine handler will use the global handler.
+ */
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
+
+	/* FQ management */
+	/* ------------- */
+/**
+ * qman_create_fq - Allocates a FQ
+ * @fqid: the index of the FQD to encapsulate, must be "Out of Service"
+ * @flags: bit-mask of QMAN_FQ_FLAG_*** options
+ * @fq: memory for storing the 'fq', with callbacks filled in
+ *
+ * Creates a frame queue object for the given @fqid, unless the
+ * QMAN_FQ_FLAG_DYNAMIC_FQID flag is set in @flags, in which case a FQID is
+ * dynamically allocated (or the function fails if none are available). Once
+ * created, the caller should not touch the memory at 'fq' except as extended to
+ * adjacent memory for user-defined fields (see the definition of "struct
+ * qman_fq" for more info). NO_MODIFY is only intended for enqueuing to
+ * pre-existing frame-queues that aren't to be otherwise interfered with; it
+ * prevents all other modifications to the frame queue. The TO_DCPORTAL flag
+ * causes the driver to honour any contextB modifications requested in the
+ * qm_init_fq() API, as this indicates the frame queue will be consumed by a
+ * direct-connect portal (PME, CAAM, or Fman). When frame queues are consumed by
+ * software portals, the contextB field is controlled by the driver and can't be
+ * modified by the caller. If the AS_IS flag is specified, management commands
+ * will be used on portal @p to query state for frame queue @fqid and construct
+ * a frame queue object based on that, rather than assuming/requiring that it be
+ * Out of Service.
+ */
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
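+
+/* Minimal creation sketch (illustrative; the callback members of 'struct
+ * qman_fq' must be filled in per its definition, which is elided here):
+ *
+ *	static struct qman_fq my_fq;
+ *
+ *	int err = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, &my_fq);
+ *
+ * With QMAN_FQ_FLAG_DYNAMIC_FQID set, an FQID is allocated rather than taken
+ * from @fqid, and can be read back with qman_fq_fqid(&my_fq).
+ */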
+
+/**
+ * qman_destroy_fq - Deallocates a FQ
+ * @fq: the frame queue object to release
+ * @flags: bit-mask of QMAN_FQ_FREE_*** options
+ *
+ * The memory for this frame queue object ('fq' provided in qman_create_fq()) is
+ * not deallocated but the caller regains ownership, to do with as desired. The
+ * FQ must be in the 'out-of-service' state unless the QMAN_FQ_FREE_PARKED flag
+ * is specified, in which case it may also be in the 'parked' state.
+ */
+void qman_destroy_fq(struct qman_fq *fq, u32 flags);
+
+/**
+ * qman_fq_fqid - Queries the frame queue ID of a FQ object
+ * @fq: the frame queue object to query
+ */
+u32 qman_fq_fqid(struct qman_fq *fq);
+
+/**
+ * qman_fq_state - Queries the state of a FQ object
+ * @fq: the frame queue object to query
+ * @state: pointer to state enum to return the FQ scheduling state
+ * @flags: pointer to state flags to receive QMAN_FQ_STATE_*** bitmask
+ *
+ * Queries the state of the FQ object, without performing any h/w commands.
+ * This captures the state, as seen by the driver, at the time the function
+ * executes.
+ */
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
+
+/**
+ * qman_init_fq - Initialises FQ fields, leaves the FQ "parked" or "scheduled"
+ * @fq: the frame queue object to modify, must be 'parked' or new.
+ * @flags: bit-mask of QMAN_INITFQ_FLAG_*** options
+ * @opts: the FQ-modification settings, as defined in the low-level API
+ *
+ * The @opts parameter comes from the low-level portal API. Select
+ * QMAN_INITFQ_FLAG_SCHED in @flags to cause the frame queue to be scheduled
+ * rather than parked. NB, @opts can be NULL.
+ *
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver;
+ * 1. the 'count' and 'fqid' fields are always ignored (this operation only
+ * affects one frame queue: @fq).
+ * 2. the QM_INITFQ_WE_CONTEXTB option of the 'we_mask' field and the associated
+ * 'fqd' structure's 'context_b' field are sometimes overwritten;
+ *   - if @fq was not created with QMAN_FQ_FLAG_TO_DCPORTAL, then context_b is
+ *     initialised to a value used by the driver for demux.
+ *   - if context_b is initialised for demux, so is context_a in case stashing
+ *     is requested (see item 4).
+ * (So caller control of context_b is only possible for TO_DCPORTAL frame queue
+ * objects.)
+ * 3. if @flags contains QMAN_INITFQ_FLAG_LOCAL, the 'fqd' structure's
+ * 'dest::channel' field will be overwritten to match the portal used to issue
+ * the command. If the WE_DESTWQ write-enable bit had already been set by the
+ * caller, the channel workqueue will be left as-is, otherwise the write-enable
+ * bit is set and the workqueue is set to a default of 4. If the "LOCAL" flag
+ * isn't set, the destination channel/workqueue fields and the write-enable bit
+ * are left as-is.
+ * 4. if the driver overwrites context_a/b for demux, then if
+ * QM_INITFQ_WE_CONTEXTA is set, the driver will only overwrite
+ * context_a.address fields and will leave the stashing fields provided by the
+ * user alone, otherwise it will zero out the context_a.stashing fields.
+ */
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
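+
+/* Scheduling sketch (illustrative): with @opts NULL and the LOCAL flag set,
+ * the driver applies the defaults described above and the FQ is scheduled
+ * rather than parked:
+ *
+ *	int err = qman_init_fq(&my_fq,
+ *			       QMAN_INITFQ_FLAG_SCHED | QMAN_INITFQ_FLAG_LOCAL,
+ *			       NULL);
+ */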
+
+/**
+ * qman_schedule_fq - Schedules a FQ
+ * @fq: the frame queue object to schedule, must be 'parked'
+ *
+ * Schedules the frame queue, which must be Parked, which takes it to
+ * Tentatively-Scheduled or Truly-Scheduled depending on its fill-level.
+ */
+int qman_schedule_fq(struct qman_fq *fq);
+
+/**
+ * qman_retire_fq - Retires a FQ
+ * @fq: the frame queue object to retire
+ * @flags: FQ flags (as per qman_fq_state) if retirement completes immediately
+ *
+ * Retires the frame queue. This returns zero if it succeeds immediately, +1 if
+ * the retirement was started asynchronously, otherwise it returns negative for
+ * failure. When this function returns zero, @flags is set to indicate whether
+ * the retired FQ is empty and/or whether it has any ORL fragments (to show up
+ * as ERNs). Otherwise the corresponding flags will be known when a subsequent
+ * FQRN message shows up on the portal's message ring.
+ *
+ * NB, if the retirement is asynchronous (the FQ was in the Truly Scheduled or
+ * Active state), the completion will be via the message ring as a FQRN - but
+ * the corresponding callback may occur before this function returns!! Ie. the
+ * caller should be prepared to accept the callback as the function is called,
+ * not only once it has returned.
+ */
+int qman_retire_fq(struct qman_fq *fq, u32 *flags);
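+
+/* Teardown sketch combining qman_retire_fq(), qman_oos_fq() (below) and
+ * qman_destroy_fq() (illustrative; assumes the FQ is empty and has no ORL,
+ * so retirement completes immediately):
+ *
+ *	u32 flags;
+ *
+ *	if (qman_retire_fq(&my_fq, &flags) == 0 &&
+ *	    !(flags & (QMAN_FQ_STATE_NE | QMAN_FQ_STATE_ORL))) {
+ *		qman_oos_fq(&my_fq);
+ *		qman_destroy_fq(&my_fq, 0);
+ *	}
+ */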
+
+/**
+ * qman_oos_fq - Puts a FQ "out of service"
+ * @fq: the frame queue object to be put out-of-service, must be 'retired'
+ *
+ * The frame queue must be retired and empty, and if any order restoration list
+ * was released as ERNs at the time of retirement, they must all be consumed.
+ */
+int qman_oos_fq(struct qman_fq *fq);
+
+/**
+ * qman_fq_flow_control - Set the XON/XOFF state of a FQ
+ * @fq: the frame queue object to be set to XON/XOFF state; must not be in the
+ * 'oos', 'retired' or 'parked' state
+ * @xon: boolean to set fq in XON or XOFF state
+ *
+ * The frame queue should be in the Tentatively Scheduled or Truly Scheduled
+ * state, otherwise the IFSI interrupt will be asserted.
+ */
+int qman_fq_flow_control(struct qman_fq *fq, int xon);
+
+/**
+ * qman_query_fq - Queries FQD fields (via h/w query command)
+ * @fq: the frame queue object to be queried
+ * @fqd: storage for the queried FQD fields
+ */
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd);
+
+/**
+ * qman_query_fq_has_pkts - Queries non-programmable FQD fields and returns 1
+ * if packets are present in the frame queue, or 0 if the frame queue is
+ * empty.
+ * @fq: the frame queue object to be queried
+ */
+int qman_query_fq_has_pkts(struct qman_fq *fq);
+
+/**
+ * qman_query_fq_np - Queries non-programmable FQD fields
+ * @fq: the frame queue object to be queried
+ * @np: storage for the queried FQD fields
+ */
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
+
+/**
+ * qman_query_wq - Queries work queue lengths
+ * @query_dedicated: If non-zero, query length of WQs in the channel dedicated
+ *		to this software portal. Otherwise, query length of WQs in the
+ *		channel specified in @wq.
+ * @wq: storage for the queried WQ lengths. Also specifies the channel to
+ *	query if @query_dedicated is zero.
+ */
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
+
+/**
+ * qman_volatile_dequeue - Issue a volatile dequeue command
+ * @fq: the frame queue object to dequeue from
+ * @flags: a bit-mask of QMAN_VOLATILE_FLAG_*** options
+ * @vdqcr: bit mask of QM_VDQCR_*** options, as per qm_dqrr_vdqcr_set()
+ *
+ * Attempts to lock access to the portal's VDQCR volatile dequeue functionality.
+ * The function will block and sleep if QMAN_VOLATILE_FLAG_WAIT is specified and
+ * the VDQCR is already in use, otherwise returns non-zero for failure. If
+ * QMAN_VOLATILE_FLAG_FINISH is specified, the function will only return once
+ * the VDQCR command has finished executing (ie. once the callback for the last
+ * DQRR entry resulting from the VDQCR command has been called). If not using
+ * the FINISH flag, completion can be determined either by detecting the
+ * presence of the QM_DQRR_STAT_UNSCHEDULED and QM_DQRR_STAT_DQCR_EXPIRED bits
+ * in the "stat" field of the "struct qm_dqrr_entry" passed to the FQ's dequeue
+ * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
+ * "flags" retrieved from qman_fq_state().
+ */
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
+
+/**
+ * qman_enqueue - Enqueue a frame to a frame queue
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ *
+ * Fills an entry in the EQCR of portal @qm to enqueue the frame described by
+ * @fd. The descriptor details are copied from @fd to the EQCR entry, the 'pid'
+ * field is ignored. The return value is non-zero on error, such as ring full
+ * (and FLAG_WAIT not specified), congestion avoidance (FLAG_WATCH_CGR
+ * specified), etc. If the ring is full and FLAG_WAIT is specified, this
+ * function will block. If FLAG_INTERRUPT is set, the EQCI bit of the portal
+ * interrupt will assert when Qman consumes the EQCR entry (subject to "status
+ * disable", "enable", and "inhibit" registers). If FLAG_DCA is set, Qman will
+ * perform an implied "discrete consumption acknowledgment" on the dequeue
+ * ring's (DQRR) entry, at the ring index specified by the FLAG_DCA_IDX(x)
+ * macro. (As an alternative to issuing explicit DCA actions on DQRR entries,
+ * this implicit DCA can delay the release of a "held active" frame queue
+ * corresponding to a DQRR entry until Qman consumes the EQCR entry - providing
+ * order-preservation semantics in packet-forwarding scenarios.) If FLAG_DCA is
+ * set, then FLAG_DCA_PARK can also be set to imply that the DQRR consumption
+ * acknowledgment should "park request" the "held active" frame queue. Ie.
+ * when the portal eventually releases that frame queue, it will be left in the
+ * Parked state rather than Tentatively Scheduled or Truly Scheduled. If the
+ * portal is watching congestion groups, the QMAN_ENQUEUE_FLAG_WATCH_CGR flag
+ * is requested, and the FQ is a member of a congestion group, then this
+ * function returns -EAGAIN if the congestion group is currently congested.
+ * Note, this does not eliminate ERNs, as the async interface means we can be
+ * sending enqueue commands to an un-congested FQ that becomes congested before
+ * the enqueue commands are processed, but it does minimise needless thrashing
+ * of an already busy hardware resource by throttling many of the to-be-dropped
+ * enqueues "at the source".
+ */
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
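+
+/* Enqueue sketch (illustrative; 'fd' is a prepared frame descriptor and a
+ * transiently full EQCR is handled by simply retrying):
+ *
+ *	while (qman_enqueue(&my_fq, &fd, 0) != 0)
+ *		cpu_relax();
+ */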
+
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send);
+
+typedef int (*qman_cb_precommit) (void *arg);
+
+/**
+ * qman_enqueue_orp - Enqueue a frame to a frame queue using an ORP
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ * @orp: the frame queue object used as an order restoration point.
+ * @orp_seqnum: the sequence number of this frame in the order restoration path
+ *
+ * Similar to qman_enqueue(), but with the addition of an Order Restoration
+ * Point (@orp) and corresponding sequence number (@orp_seqnum) for this
+ * enqueue operation to employ order restoration. Each frame queue object acts
+ * as an Order Definition Point (ODP) by providing each frame dequeued from it
+ * with an incrementing sequence number; this value is generally ignored unless
+ * that sequence of dequeued frames will need order restoration later. Each
+ * frame queue object also encapsulates an Order Restoration Point (ORP), which
+ * is a re-assembly context for re-ordering frames relative to their sequence
+ * numbers as they are enqueued. The ORP does not have to be within the frame
+ * queue that receives the enqueued frame, in fact it is usually the frame
+ * queue from which the frames were originally dequeued. For the purposes of
+ * order restoration, multiple frames (or "fragments") can be enqueued for a
+ * single sequence number by setting the QMAN_ENQUEUE_FLAG_NLIS flag for all
+ * enqueues except the final fragment of a given sequence number. Ordering
+ * between sequence numbers is guaranteed, even if fragments of different
+ * sequence numbers are interlaced with one another. Fragments of the same
+ * sequence number will retain the order in which they are enqueued. If no
+ * enqueue is to be performed, QMAN_ENQUEUE_FLAG_HOLE indicates that the given
+ * sequence number is to be "skipped" by the ORP logic (eg. if a frame has been
+ * dropped from a sequence), or QMAN_ENQUEUE_FLAG_NESN indicates that the given
+ * sequence number should become the ORP's "Next Expected Sequence Number".
+ *
+ * Side note: a frame queue object can be used purely as an ORP, without
+ * carrying any frames at all. Care should be taken not to deallocate a frame
+ * queue object that is being actively used as an ORP, as a future allocation
+ * of the frame queue object may start using the internal ORP before the
+ * previous use has finished.
+ */
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum);
+
+/**
+ * qman_alloc_fqid_range - Allocate a contiguous range of FQIDs
+ * @result: is set by the API to the base FQID of the allocated range
+ * @count: the number of FQIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count FQIDs
+ *
+ * Returns the number of frame queues allocated, or a negative error code. If
+ * @partial is non zero, the allocation request may return a smaller range of
+ * FQs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_fqid(u32 *result)
+{
+	int ret = qman_alloc_fqid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
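+
+/* Allocator sketch (illustrative; 'use_fqid' stands for any hypothetical
+ * consumer of the ID, and qman_release_fqid() is declared below):
+ *
+ *	u32 fqid;
+ *
+ *	if (qman_alloc_fqid(&fqid) == 0) {
+ *		use_fqid(fqid);
+ *		qman_release_fqid(fqid);
+ *	}
+ */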
+
+/**
+ * qman_release_fqid_range - Release the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to deallocate
+ * @count: the number of FQIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of FQIDs
+ * that it can subsequently allocate from.
+ */
+void qman_release_fqid_range(u32 fqid, unsigned int count);
+static inline void qman_release_fqid(u32 fqid)
+{
+	qman_release_fqid_range(fqid, 1);
+}
+
+void qman_seed_fqid_range(u32 fqid, unsigned int count);
+
+int qman_shutdown_fq(u32 fqid);
+
+/**
+ * qman_reserve_fqid_range - Reserve the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to deallocate
+ * @count: the number of FQIDs in the range
+ */
+int qman_reserve_fqid_range(u32 fqid, unsigned int count);
+static inline int qman_reserve_fqid(u32 fqid)
+{
+	return qman_reserve_fqid_range(fqid, 1);
+}
+
+/* Pool-channel management */
+/**
+ * qman_alloc_pool_range - Allocate a contiguous range of pool-channel IDs
+ * @result: is set by the API to the base pool-channel ID of the allocated range
+ * @count: the number of pool-channel IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of pool-channel IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_pool(u32 *result)
+{
+	int ret = qman_alloc_pool_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_pool_range - Release the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to deallocate
+ * @count: the number of pool-channel IDs in the range
+ */
+void qman_release_pool_range(u32 id, unsigned int count);
+static inline void qman_release_pool(u32 id)
+{
+	qman_release_pool_range(id, 1);
+}
+
+/**
+ * qman_reserve_pool_range - Reserve the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to reserve
+ * @count: the number of pool-channel IDs in the range
+ */
+int qman_reserve_pool_range(u32 id, unsigned int count);
+static inline int qman_reserve_pool(u32 id)
+{
+	return qman_reserve_pool_range(id, 1);
+}
+
+void qman_seed_pool_range(u32 id, unsigned int count);
+
+	/* CGR management */
+	/* -------------- */
+/**
+ * qman_create_cgr - Register a congestion group object
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: optional state of CGR settings
+ *
+ * Registers this object to receive congestion entry/exit callbacks on the
+ * portal affine to the cpu on which this API is executed. If opts is
+ * NULL then only the callback (cgr->cb) function is registered. If @flags
+ * contains QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset
+ * any unspecified parameters) will be used rather than a modify hw command
+ * (which only modifies the specified parameters).
+ */
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
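+
+/* Illustrative sketch only ('my_cgrid' and 'my_cgr_cb' are hypothetical;
+ * the fields are per struct qman_cgr): register a CGR with freshly
+ * initialised hardware state:
+ *
+ *	struct qman_cgr cgr = { .cgrid = my_cgrid, .cb = my_cgr_cb };
+ *	struct qm_mcc_initcgr opts = { 0 };
+ *
+ *	if (qman_create_cgr(&cgr, QMAN_CGR_FLAG_USE_INIT, &opts))
+ *		... handle error ...
+ */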
+
+/**
+ * qman_create_cgr_to_dcp - Register a congestion group object to DCP portal
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @dcp_portal: the DCP portal to which the cgr object is registered.
+ * @opts: optional state of CGR settings
+ *
+ */
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_delete_cgr - Deregisters a congestion group object
+ * @cgr: the 'cgr' object to deregister
+ *
+ * "Unplugs" this CGR object from the portal affine to the cpu on which this API
+ * is executed. This must be executed on the same affine portal on which it was
+ * created.
+ */
+int qman_delete_cgr(struct qman_cgr *cgr);
+
+/**
+ * qman_modify_cgr - Modify CGR fields
+ * @cgr: the 'cgr' object to modify
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: the CGR-modification settings
+ *
+ * The @opts parameter comes from the low-level portal API, and can be NULL.
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver, in particular the 'cgrid' field is ignored (this operation
+ * only affects the given CGR object). If @flags contains
+ * QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset any
+ * unspecified parameters) will be used rather than a modify hw command (which
+ * only modifies the specified parameters).
+ */
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_query_cgr - Queries CGR fields
+ * @cgr: the 'cgr' object to query
+ * @result: storage for the queried congestion group record
+ */
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *result);
+
+/**
+ * qman_query_congestion - Queries the state of all congestion groups
+ * @congestion: storage for the queried state of all congestion groups
+ */
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
+
+/**
+ * qman_alloc_cgrid_range - Allocate a contiguous range of CGR IDs
+ * @result: is set by the API to the base CGR ID of the allocated range
+ * @count: the number of CGR IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of CGR IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_cgrid(u32 *result)
+{
+	int ret = qman_alloc_cgrid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_cgrid_range - Release the specified range of CGR IDs
+ * @id: the base CGR ID of the range to deallocate
+ * @count: the number of CGR IDs in the range
+ */
+void qman_release_cgrid_range(u32 id, unsigned int count);
+static inline void qman_release_cgrid(u32 id)
+{
+	qman_release_cgrid_range(id, 1);
+}
+
+/**
+ * qman_reserve_cgrid_range - Reserve the specified range of CGR IDs
+ * @id: the base CGR ID of the range to reserve
+ * @count: the number of CGR IDs in the range
+ */
+int qman_reserve_cgrid_range(u32 id, unsigned int count);
+static inline int qman_reserve_cgrid(u32 id)
+{
+	return qman_reserve_cgrid_range(id, 1);
+}
+
+void qman_seed_cgrid_range(u32 id, unsigned int count);
+
+	/* Helpers */
+	/* ------- */
+/**
+ * qman_poll_fq_for_init - Check if an FQ has been initialised from OOS
+ * @fq: the frame queue object whose FQID will be initialised by other s/w
+ *
+ * In many situations, an FQID is provided for communication between s/w
+ * entities, and whilst the consumer is responsible for initialising and
+ * scheduling the FQ, the producer(s) generally create a wrapper FQ object
+ * and only call qman_enqueue() (no FQ initialisation, scheduling, etc). Ie;
+ *     qman_create_fq(..., QMAN_FQ_FLAG_NO_MODIFY, ...);
+ * However, data cannot be enqueued to the FQ until it is initialised out of
+ * the OOS state - this function polls for that condition. It is particularly
+ * useful for users of IPC functions - each endpoint's Rx FQ is the other
+ * endpoint's Tx FQ, so each side can initialise and schedule their Rx FQ object
+ * and then use this API on the (NO_MODIFY) Tx FQ object in order to
+ * synchronise. The function returns zero for success, +1 if the FQ is still in
+ * the OOS state, or negative if there was an error.
+ */
+static inline int qman_poll_fq_for_init(struct qman_fq *fq)
+{
+	struct qm_mcr_queryfq_np np;
+	int err;
+
+	err = qman_query_fq_np(fq, &np);
+	if (err)
+		return err;
+	if ((np.state & QM_MCR_NP_STATE_MASK) == QM_MCR_NP_STATE_OOS)
+		return 1;
+	return 0;
+}
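+
+/* Illustrative only: a producer busy-waiting (for brevity) until the peer
+ * has initialised the shared FQ out of OOS ('tx_fq' being the caller's
+ * NO_MODIFY Tx FQ object):
+ *
+ *	int ret;
+ *
+ *	do {
+ *		ret = qman_poll_fq_for_init(&tx_fq);
+ *	} while (ret == 1);
+ *	if (ret < 0)
+ *		... handle error ...
+ */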
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define cpu_to_hw_sg(x) (x)
+#define hw_sg_to_cpu(x) (x)
+#else
+#define cpu_to_hw_sg(x)  __cpu_to_hw_sg(x)
+#define hw_sg_to_cpu(x)  __hw_sg_to_cpu(x)
+
+static inline void __cpu_to_hw_sg(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = cpu_to_be64(sgentry->opaque);
+	sgentry->val = cpu_to_be32(sgentry->val);
+	sgentry->val_off = cpu_to_be16(sgentry->val_off);
+}
+
+static inline void __hw_sg_to_cpu(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = be64_to_cpu(sgentry->opaque);
+	sgentry->val = be32_to_cpu(sgentry->val);
+	sgentry->val_off = be16_to_cpu(sgentry->val_off);
+}
+#endif
 
 #ifdef __cplusplus
 }
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index b0d953f..a4897b0 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -42,6 +42,7 @@
 #define __FSL_USD_H
 
 #include <compat.h>
+#include <fsl_qman.h>
 
 #ifdef __cplusplus
 extern "C" {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 12/41] bus/dpaa: add BMAN driver core
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (10 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 11/41] bus/dpaa: add QMan driver core routines Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 13/41] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
                         ` (31 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Buffer Manager (BMan) is a hardware buffer pool management block that
allows software and accelerators on the datapath to acquire and release
buffers in order to build frames.

This patch adds the core routines.
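
A minimal usage sketch of the API declared in fsl_bman.h below (illustrative
only; error handling is elided and 'buf_phys' is a hypothetical buffer
address):

    struct bman_pool_params params = {
            .flags = BMAN_POOL_FLAG_DYNAMIC_BPID,
    };
    struct bman_pool *pool = bman_new_pool(&params);
    struct bm_buffer buf;

    bm_buffer_set64(&buf, buf_phys);
    bman_release(pool, &buf, 1, 0);    /* hand the buffer to hardware */
    bman_acquire(pool, &buf, 1, 0);    /* take one buffer back */
    bman_free_pool(pool);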

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman_driver.c | 311 +++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_priv.h   | 125 ++++++++++
 drivers/bus/dpaa/include/fsl_bman.h       | 375 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   5 +
 5 files changed, 817 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index ba87386..2d626b2 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -70,6 +70,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
 	base/qbman/dpaa_alloc.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
new file mode 100644
index 0000000..fb3c50e
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -0,0 +1,311 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_branch_prediction.h>
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "bman_priv.h"
+#include <sys/ioctl.h>
+
+/*
+ * Global variables for the max portal/pool numbers this BMan version supports
+ */
+u16 bman_ip_rev;
+u16 bman_pool_max;
+void *bman_ccsr_map;
+
+/*****************/
+/* Portal driver */
+/*****************/
+
+static __thread int fd = -1;
+static __thread struct bm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_bman
+};
+
+static int fsl_bman_portal_init(uint32_t idx, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!");
+		return -EINVAL;
+	}
+	/* Allocate and map a bman portal */
+	map.index = idx;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+	pcfg.is_shared = is_shared;
+	pcfg.index = map.index;
+	bman_depletion_fill(&pcfg.mask);
+
+	fd = open(BMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("BMan irq init failed");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+	/* Use the IRQ FD as a unique IRQ number */
+	pcfg.irq = fd;
+
+	/* Set the IRQ number */
+	irq_map.type = dpaa_portal_bman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_bman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int bman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_bman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int bman_thread_finish(void)
+{
+	return fsl_bman_portal_finish();
+}
+
+void bman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int bman_init_ccsr(const struct device_node *node)
+{
+	static int ccsr_map_fd;
+	uint64_t phys_addr;
+	const uint32_t *bman_addr;
+	uint64_t regs_size;
+
+	bman_addr = of_get_address(node, 0, &regs_size, NULL);
+	if (!bman_addr) {
+		pr_err("of_get_address cannot return BMan address");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(node, bman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open(BMAN_CCSR_MAP, O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Cannot open /dev/mem for BMan CCSR map");
+		return ccsr_map_fd;
+	}
+
+	bman_ccsr_map = mmap(NULL, regs_size, PROT_READ |
+			     PROT_WRITE, MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (bman_ccsr_map == MAP_FAILED) {
+		pr_err("Cannot map BMan CCSR base: "
+		       "0x%x Phys: 0x%lx size 0x%lx",
+		       *bman_addr, phys_addr, regs_size);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int bman_global_init(void)
+{
+	const struct device_node *dt_node;
+	static int done;
+
+	if (done)
+		return -EBUSY;
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,bman-portal");
+	if (!dt_node) {
+		pr_err("No bman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,bman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,bman-portal-1.0.0")) {
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.0.8")) {
+		bman_ip_rev = BMAN_REV20;
+		bman_pool_max = 8;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.3")) {
+		bman_ip_rev = BMAN_REV21;
+		bman_pool_max = 64;
+	} else {
+		pr_warn("unknown BMan version in portal node, defaulting "
+			"to rev1.0");
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	}
+
+	if (!bman_ip_rev) {
+		pr_err("Unknown bman portal version\n");
+		return -ENODEV;
+	}
+	{
+		const struct device_node *dn = of_find_compatible_node(NULL,
+							NULL, "fsl,bman");
+		if (!dn)
+			pr_err("No bman device node available");
+		else if (bman_init_ccsr(dn))
+			pr_err("BMan CCSR map failed.");
+	}
+
+	done = 1;
+	return 0;
+}
+
+#define BMAN_POOL_CONTENT(n) (0x0600 + ((n) * 0x04))
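+/* Illustrative: BMAN_POOL_CONTENT(5) = 0x0614, the per-pool "content"
+ * register offset within CCSR for bpid 5.
+ */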
+u32 bm_pool_free_buffers(u32 bpid)
+{
+	return in_be32(bman_ccsr_map + BMAN_POOL_CONTENT(bpid));
+}
+
+static u32 __generate_thresh(u32 val, int roundup)
+{
+	u32 e = 0;      /* exponent; 'val' becomes the coefficient */
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	DPAA_ASSERT(e < 0x10);
+	return (val | (e << 8));
+}
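+
+/* Worked example (illustrative): __generate_thresh(1000, 0) halves 1000
+ * twice to 250 (so e = 2) and returns 0x2fa, which hardware decodes back
+ * as 0xfa << 2 = 1000.
+ */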
+
+#define POOL_SWDET(n)       (0x0000 + ((n) * 0x04))
+#define POOL_HWDET(n)       (0x0100 + ((n) * 0x04))
+#define POOL_SWDXT(n)       (0x0200 + ((n) * 0x04))
+#define POOL_HWDXT(n)       (0x0300 + ((n) * 0x04))
+int bm_pool_set(u32 bpid, const u32 *thresholds)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	out_be32(bman_ccsr_map + POOL_SWDET(bpid),
+		 __generate_thresh(thresholds[0], 0));
+	out_be32(bman_ccsr_map + POOL_SWDXT(bpid),
+		 __generate_thresh(thresholds[1], 1));
+	out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+		 __generate_thresh(thresholds[2], 0));
+	out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+		 __generate_thresh(thresholds[3], 1));
+	return 0;
+}
+
+#define BMAN_LOW_DEFAULT_THRESH		0x40
+#define BMAN_HIGH_DEFAULT_THRESH		0x80
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	if (low_thresh && high_thresh) {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(low_thresh, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(high_thresh, 1));
+	} else {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(BMAN_LOW_DEFAULT_THRESH, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(BMAN_HIGH_DEFAULT_THRESH, 1));
+	}
+	return 0;
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman_priv.h b/drivers/bus/dpaa/base/qbman/bman_priv.h
new file mode 100644
index 0000000..07d9cec
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_priv.h
@@ -0,0 +1,125 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_PRIV_H
+#define __BMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_bman.h>
+
+/* Revision info (for errata and feature handling) */
+#define BMAN_REV10 0x0100
+#define BMAN_REV20 0x0200
+#define BMAN_REV21 0x0201
+
+#define BMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+#define BMAN_CCSR_MAP "/dev/mem"
+
+/* This mask contains all the "irqsource" bits visible to API users */
+#define BM_PIRQ_VISIBLE	(BM_PIRQ_RCRI | BM_PIRQ_BSCN)
+
+/* These are bm_<reg>_<verb>(). So for example, bm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define bm_isr_status_read(bm)		__bm_isr_read(bm, bm_isr_status)
+#define bm_isr_status_clear(bm, m)	__bm_isr_write(bm, bm_isr_status, m)
+#define bm_isr_enable_read(bm)		__bm_isr_read(bm, bm_isr_enable)
+#define bm_isr_enable_write(bm, v)	__bm_isr_write(bm, bm_isr_enable, v)
+#define bm_isr_disable_read(bm)		__bm_isr_read(bm, bm_isr_disable)
+#define bm_isr_disable_write(bm, v)	__bm_isr_write(bm, bm_isr_disable, v)
+#define bm_isr_inhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 1)
+#define bm_isr_uninhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 0)
+
+/*
+ * Global variables for the max portal/pool numbers this BMan version supports
+ */
+extern u16 bman_pool_max;
+
+/* used by CCSR and portal interrupt code */
+enum bm_isr_reg {
+	bm_isr_status = 0,
+	bm_isr_enable = 1,
+	bm_isr_disable = 2,
+	bm_isr_inhibit = 3
+};
+
+struct bm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* This is used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.).
+	 */
+	int is_shared;
+	/* These are the buffer pool IDs that may be used via this portal. */
+	struct bman_depletion mask;
+};
+
+int bman_init_ccsr(const struct device_node *node);
+
+struct bman_portal *bman_create_affine_portal(
+			const struct bm_portal_config *config);
+const struct bm_portal_config *bman_destroy_affine_portal(void);
+
+/* Set depletion thresholds associated with a buffer pool. Requires that the
+ * operating system have access to Bman CCSR (ie. compiled in support and
+ * run-time access courtesy of the device-tree).
+ */
+int bm_pool_set(u32 bpid, const u32 *thresholds);
+
+/* Read the free buffer count for a given buffer pool */
+u32 bm_pool_free_buffers(u32 bpid);
+
+#endif /* __BMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
new file mode 100644
index 0000000..383106b
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -0,0 +1,375 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_BMAN_H
+#define __FSL_BMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* This wrapper represents a bit-array for the depletion state of the 64 Bman
+ * buffer pools.
+ */
+struct bman_depletion {
+	u32 state[2];
+};
+
+static inline void bman_depletion_init(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = 0;
+}
+
+static inline void bman_depletion_fill(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = ~0;
+}
+
+/* --- Bman data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct bm_rcr_entry;	/* RCR (Release Command Ring) entries */
+struct bm_mc_command;	/* MC (Management Command) command */
+struct bm_mc_result;	/* MC result */
+
+/* Code-reduction, define a wrapper for 48-bit buffers. In cases where a buffer
+ * pool id specific to this buffer is needed (BM_RCR_VERB_CMD_BPID_MULTI,
+ * BM_MCC_VERB_ACQUIRE), the 'bpid' field is used.
+ */
+struct bm_buffer {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1;
+			u8 bpid;
+			u16 hi; /* High 16-bits of 48-bit address */
+			u32 lo; /* Low 32-bits of 48-bit address */
+#else
+			u32 lo;
+			u16 hi;
+			u8 bpid;
+			u8 __reserved;
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:16;
+			u64 addr:48;
+#else
+			u64 addr:48;
+			u64 __notaddress:16;
+#endif
+		};
+		u64 opaque;
+	};
+} __attribute__((aligned(8)));
+static inline u64 bm_buffer_get64(const struct bm_buffer *buf)
+{
+	return buf->addr;
+}
+
+static inline dma_addr_t bm_buf_addr(const struct bm_buffer *buf)
+{
+	return (dma_addr_t)buf->addr;
+}
+
+#define bm_buffer_set64(buf, v) \
+	do { \
+		struct bm_buffer *__buf931 = (buf); \
+		__buf931->hi = upper_32_bits(v); \
+		__buf931->lo = lower_32_bits(v); \
+	} while (0)
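+
+/* Illustrative: with v = 0x123456789abcULL, bm_buffer_set64() stores
+ * hi = 0x1234 and lo = 0x56789abc; bm_buffer_get64() returns the same
+ * value through the 48-bit 'addr' bitfield overlay.
+ */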
+
+/* See 1.5.3.5.4: "Release Command" */
+struct bm_rcr_entry {
+	union {
+		struct {
+			u8 __dont_write_directly__verb;
+			u8 bpid; /* used with BM_RCR_VERB_CMD_BPID_SINGLE */
+			u8 __reserved1[62];
+		};
+		struct bm_buffer bufs[8];
+	};
+} __packed;
+#define BM_RCR_VERB_VBIT		0x80
+#define BM_RCR_VERB_CMD_MASK		0x70	/* one of two values; */
+#define BM_RCR_VERB_CMD_BPID_SINGLE	0x20
+#define BM_RCR_VERB_CMD_BPID_MULTI	0x30
+#define BM_RCR_VERB_BUFCOUNT_MASK	0x0f	/* values 1..8 */
+
+/* See 1.5.3.1: "Acquire Command" */
+/* See 1.5.3.2: "Query Command" */
+struct bm_mcc_acquire {
+	u8 bpid;
+	u8 __reserved1[62];
+} __packed;
+struct bm_mcc_query {
+	u8 __reserved2[63];
+} __packed;
+struct bm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct bm_mcc_acquire acquire;
+		struct bm_mcc_query query;
+	};
+} __packed;
+#define BM_MCC_VERB_VBIT		0x80
+#define BM_MCC_VERB_CMD_MASK		0x70	/* where the verb contains; */
+#define BM_MCC_VERB_CMD_ACQUIRE		0x10
+#define BM_MCC_VERB_CMD_QUERY		0x40
+#define BM_MCC_VERB_ACQUIRE_BUFCOUNT	0x0f	/* values 1..8 go here */
+
+/* See 1.5.3.3: "Acquire Response" */
+/* See 1.5.3.4: "Query Response" */
+struct bm_pool_state {
+	u8 __reserved1[32];
+	/* "availability state" and "depletion state" */
+	struct {
+		u8 __reserved1[8];
+		/* Access using bman_depletion_***() */
+		struct bman_depletion state;
+	} as, ds;
+};
+
+struct bm_mc_result {
+	union {
+		struct {
+			u8 verb;
+			u8 __reserved1[63];
+		};
+		union {
+			struct {
+				u8 __reserved1;
+				u8 bpid;
+				u8 __reserved2[62];
+			};
+			struct bm_buffer bufs[8];
+		} acquire;
+		struct bm_pool_state query;
+	};
+} __packed;
+#define BM_MCR_VERB_VBIT		0x80
+#define BM_MCR_VERB_CMD_MASK		BM_MCC_VERB_CMD_MASK
+#define BM_MCR_VERB_CMD_ACQUIRE		BM_MCC_VERB_CMD_ACQUIRE
+#define BM_MCR_VERB_CMD_QUERY		BM_MCC_VERB_CMD_QUERY
+#define BM_MCR_VERB_CMD_ERR_INVALID	0x60
+#define BM_MCR_VERB_CMD_ERR_ECC		0x70
+#define BM_MCR_VERB_ACQUIRE_BUFCOUNT	BM_MCC_VERB_ACQUIRE_BUFCOUNT /* 0..8 */
+
+/* Portal and Buffer Pools */
+/* Represents a managed portal */
+struct bman_portal;
+
+/* This object type represents Bman buffer pools. */
+struct bman_pool;
+
+/* This struct specifies parameters for a bman_pool object. */
+struct bman_pool_params {
+	/* index of the buffer pool to encapsulate (0-63), ignored if
+	 * BMAN_POOL_FLAG_DYNAMIC_BPID is set.
+	 */
+	u32 bpid;
+	/* bit-mask of BMAN_POOL_FLAG_*** options */
+	u32 flags;
+	/* depletion-entry/exit thresholds, if BMAN_POOL_FLAG_THRESH is set. NB:
+	 * this is only allowed if BMAN_POOL_FLAG_DYNAMIC_BPID is used *and*
+	 * when run in the control plane (which controls Bman CCSR). This array
+	 * matches the definition of bm_pool_set().
+	 */
+	u32 thresholds[4];
+};
+
+/* Flags to bman_new_pool() */
+#define BMAN_POOL_FLAG_NO_RELEASE    0x00000001 /* can't release to pool */
+#define BMAN_POOL_FLAG_ONLY_RELEASE  0x00000002 /* can only release to pool */
+#define BMAN_POOL_FLAG_DYNAMIC_BPID  0x00000008 /* (de)allocate bpid */
+#define BMAN_POOL_FLAG_THRESH        0x00000010 /* set depletion thresholds */
+
+/* Flags to bman_release() */
+#define BMAN_RELEASE_FLAG_NOW        0x00000008 /* issue immediate release */
+
+/**
+ * bman_get_portal_index - get portal configuration index
+ */
+int bman_get_portal_index(void);
+
+/**
+ * bman_rcr_is_empty - Determine if portal's RCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * releases for the local portal have been processed by Bman but can't use the
+ * BMAN_RELEASE_FLAG_WAIT_SYNC flag to do this from the final bman_release().
+ * The function forces tracking of RCR consumption (which normally doesn't
+ * happen until release processing needs to find space to put new release
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int bman_rcr_is_empty(void);
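+
+/* Illustrative only: spin until the local portal's RCR has been fully
+ * consumed by hardware:
+ *
+ *	while (!bman_rcr_is_empty())
+ *		cpu_relax();
+ */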
+
+/**
+ * bman_alloc_bpid_range - Allocate a contiguous range of BPIDs
+ * @result: is set by the API to the base BPID of the allocated range
+ * @count: the number of BPIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count BPIDs
+ *
+ * Returns the number of buffer pools allocated, or a negative error code. If
+ * @partial is non-zero, the allocation request may return a smaller range of
+ * BPs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int bman_alloc_bpid(u32 *result)
+{
+	int ret = bman_alloc_bpid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * bman_release_bpid_range - Release the specified range of buffer pool IDs
+ * @bpid: the base BPID of the range to deallocate
+ * @count: the number of BPIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of BPIDs
+ * that it can subsequently allocate from.
+ */
+void bman_release_bpid_range(u32 bpid, unsigned int count);
+static inline void bman_release_bpid(u32 bpid)
+{
+	bman_release_bpid_range(bpid, 1);
+}
+
+int bman_reserve_bpid_range(u32 bpid, unsigned int count);
+static inline int bman_reserve_bpid(u32 bpid)
+{
+	return bman_reserve_bpid_range(bpid, 1);
+}
+
+void bman_seed_bpid_range(u32 bpid, unsigned int count);
+
+int bman_shutdown_pool(u32 bpid);
+
+/**
+ * bman_new_pool - Allocates a Buffer Pool object
+ * @params: parameters specifying the buffer pool ID and behaviour
+ *
+ * Creates a pool object for the given @params. The depletion thresholds in
+ * @params are only used if the BMAN_POOL_FLAG_THRESH flag
+ * is set. NB, the fields from @params are copied into the new pool object, so
+ * the structure provided by the caller can be released or reused after the
+ * function returns.
+ */
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
+
+/**
+ * bman_free_pool - Deallocates a Buffer Pool object
+ * @pool: the pool object to release
+ */
+void bman_free_pool(struct bman_pool *pool);
+
+/**
+ * bman_get_params - Returns a pool object's parameters.
+ * @pool: the pool object
+ *
+ * The returned pointer refers to state within the pool object so must not be
+ * modified and can no longer be read once the pool object is destroyed.
+ */
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
+
+/**
+ * bman_release - Release buffer(s) to the buffer pool
+ * @pool: the buffer pool object to release to
+ * @bufs: an array of buffers to release
+ * @num: the number of buffers in @bufs (1-8)
+ * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
+ *
+ */
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
+/**
+ * bman_acquire - Acquire buffer(s) from a buffer pool
+ * @pool: the buffer pool object to acquire from
+ * @bufs: array for storing the acquired buffers
+ * @num: the number of buffers desired (@bufs is at least this big)
+ *
+ * Issues an "Acquire" command via the portal's management command interface.
+ * The return value will be the number of buffers obtained from the pool, or a
+ * negative error code if a h/w error or pool starvation was encountered.
+ */
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
+/**
+ * bman_query_pools - Query all buffer pool states
+ * @state: storage for the queried availability and depletion states
+ */
+int bman_query_pools(struct bm_pool_state *state);
+
+/**
+ * bman_query_free_buffers - Query how many free buffers are in buffer pool
+ * @pool: the buffer pool object to query
+ *
+ * Returns the number of free buffers in the pool
+ */
+u32 bman_query_free_buffers(struct bman_pool *pool);
+
+/**
+ * bman_update_pool_thresholds - Change the buffer pool's depletion thresholds
+ * @pool: the buffer pool object to which the thresholds will be set
+ * @thresholds: the new thresholds
+ */
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds);
+
+/**
+ * bm_pool_set_hw_threshold - Change the buffer pool's thresholds
+ * @bpid: the buffer pool ID
+ * @low_thresh: low threshold
+ * @high_thresh: high threshold
+ */
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_BMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index a4897b0..a3243af 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -50,7 +50,9 @@ extern "C" {
 
 /* Thread-entry/exit hooks; */
 int qman_thread_init(void);
+int bman_thread_init(void);
 int qman_thread_finish(void);
+int bman_thread_finish(void);
 
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
@@ -92,9 +94,12 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal);
  * into another blocking read/select/poll.
  */
 void qman_thread_irq(void);
+void bman_thread_irq(void);
 
 /* Global setup */
 int qman_global_init(void);
+int bman_global_init(void);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 13/41] bus/dpaa: add support for FMAN frame queue lookup
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (11 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 12/41] bus/dpaa: add BMAN driver core Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-18 14:51         ` Ferruh Yigit
  2017-09-09 11:21       ` [PATCH v4 14/41] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
                         ` (30 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

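On 64-bit systems a 'struct qman_fq *' no longer fits in the 32-bit
contextB/tag fields that the hardware echoes back on dequeue and ERN
messages, so frame queue objects are instead indexed through a lookup
table. A minimal sketch of the flow added here (names are from the diff
below):

    /* at create: reserve a table slot and remember its index */
    find_empty_fq_table_entry(&fq->key, fq);
    /* at init: program the index, not the pointer */
    mcc->initfq.fqd.context_b = fq->key;
    /* at dequeue: recover the object from the echoed index */
    fq = get_fq_table_entry(dq->contextB);
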
Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/qbman/qman.c        | 99 ++++++++++++++++++++++++++++++-
 drivers/bus/dpaa/base/qbman/qman_driver.c |  7 ++-
 drivers/bus/dpaa/base/qbman/qman_priv.h   | 11 ++++
 drivers/bus/dpaa/include/fsl_qman.h       | 12 ++++
 4 files changed, 126 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 494d54c..837e46c 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -176,6 +176,65 @@ static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
 	return fqtree_find(&p->retire_table, fqid);
 }
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+static void **qman_fq_lookup_table;
+static size_t qman_fq_lookup_table_size;
+
+int qman_setup_fq_lookup_table(size_t num_entries)
+{
+	/* Allocate 1 more entry since the first entry is not used */
+	num_entries++;
+	qman_fq_lookup_table = vmalloc((num_entries * sizeof(void *)));
+	if (!qman_fq_lookup_table) {
+		pr_err("QMan: Could not allocate fq lookup table\n");
+		return -ENOMEM;
+	}
+	memset(qman_fq_lookup_table, 0, num_entries * sizeof(void *));
+	qman_fq_lookup_table_size = num_entries;
+	pr_debug("QMan: Allocated lookup table at %p, entry count %lu\n",
+		qman_fq_lookup_table,
+			(unsigned long)qman_fq_lookup_table_size);
+	return 0;
+}
+
+/* global structure that maintains fq object mapping */
+static DEFINE_SPINLOCK(fq_hash_table_lock);
+
+static int find_empty_fq_table_entry(u32 *entry, struct qman_fq *fq)
+{
+	u32 i;
+
+	spin_lock(&fq_hash_table_lock);
+	/* Can't use index zero because this has special meaning
+	 * in context_b field.
+	 */
+	for (i = 1; i < qman_fq_lookup_table_size; i++) {
+		if (qman_fq_lookup_table[i] == NULL) {
+			*entry = i;
+			qman_fq_lookup_table[i] = fq;
+			spin_unlock(&fq_hash_table_lock);
+			return 0;
+		}
+	}
+	spin_unlock(&fq_hash_table_lock);
+	return -ENOMEM;
+}
+
+static void clear_fq_table_entry(u32 entry)
+{
+	spin_lock(&fq_hash_table_lock);
+	DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
+	qman_fq_lookup_table[entry] = NULL;
+	spin_unlock(&fq_hash_table_lock);
+}
+
+static inline struct qman_fq *get_fq_table_entry(u32 entry)
+{
+	DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
+	return qman_fq_lookup_table[entry];
+}
+#endif
+
 static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
 {
 	/* Byteswap the FQD to HW format */
@@ -766,8 +825,13 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 				break;
 			case QM_MR_VERB_FQPN:
 				/* Parked */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+				fq = get_fq_table_entry(
+					be32_to_cpu(msg->fq.contextB));
+#else
 				fq = (void *)(uintptr_t)
 					be32_to_cpu(msg->fq.contextB);
+#endif
 				fq_state_change(p, fq, msg, verb);
 				if (fq->cb.fqs)
 					fq->cb.fqs(p, fq, &swapped_msg);
@@ -792,7 +856,11 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 			}
 		} else {
 			/* Its a software ERN */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(be32_to_cpu(msg->ern.tag));
+#else
 			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+#endif
 			fq->cb.ern(p, fq, &swapped_msg);
 		}
 		num++;
@@ -907,7 +975,11 @@ static inline unsigned int __poll_portal_fast(struct qman_portal *p,
 				clear_vdqcr(p, fq);
 		} else {
 			/* SDQCR: context_b points to the FQ */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(dq->contextB);
+#else
 			fq = (void *)(uintptr_t)dq->contextB;
+#endif
 			/* Now let the callback do its stuff */
 			res = fq->cb.dqrr(p, fq, dq);
 			/*
@@ -1119,7 +1191,12 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
 	fq->flags = flags;
 	fq->state = qman_fq_state_oos;
 	fq->cgr_groupid = 0;
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	if (unlikely(find_empty_fq_table_entry(&fq->key, fq))) {
+		pr_info("Find empty table entry failed\n");
+		return -ENOMEM;
+	}
+#endif
 	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
 		return 0;
 	/* Everything else is AS_IS support */
@@ -1193,7 +1270,9 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
 	case qman_fq_state_oos:
 		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
 			qman_release_fqid(fq->fqid);
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		clear_fq_table_entry(fq->key);
+#endif
 		return;
 	default:
 		break;
@@ -1258,7 +1337,11 @@ int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
 		dma_addr_t phys_fq;
 
 		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		mcc->initfq.fqd.context_b = fq->key;
+#else
 		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+#endif
 		/*
 		 *  and the physical address - NB, if the user wasn't trying to
 		 * set CONTEXTA, clear the stashing settings.
@@ -1419,7 +1502,11 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
 			msg.verb = QM_MR_VERB_FQRNI;
 			msg.fq.fqs = mcr->alterfq.fqs;
 			msg.fq.fqid = fq->fqid;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			msg.fq.contextB = fq->key;
+#else
 			msg.fq.contextB = (u32)(uintptr_t)fq;
+#endif
 			fq->cb.fqs(p, fq, &msg);
 		}
 	} else if (res == QM_MCR_RESULT_PENDING) {
@@ -1861,7 +1948,11 @@ static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
 					QM_EQCR_DCA_PARK : 0) |
 			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
 	eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	eq->tag = cpu_to_be32(fq->key);
+#else
 	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 	eq->fd = *fd;
 	cpu_to_hw_fd(&eq->fd);
 	return eq;
@@ -1907,7 +1998,11 @@ int qman_enqueue_multi(struct qman_fq *fq,
 	/* try to send as many frames as possible */
 	while (eqcr->available && frames_to_send--) {
 		eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		eq->tag = cpu_to_be32(fq->key);
+#else
 		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 		eq->fd.opaque_addr = fd->opaque_addr;
 		eq->fd.addr = cpu_to_be40(fd->addr);
 		eq->fd.status = cpu_to_be32(fd->status);
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 90fb130..7a68896 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -279,5 +279,10 @@ int qman_global_init(void)
 	else
 		qman_clk = be32_to_cpu(*clk);
 
-	return ret;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	ret = qman_setup_fq_lookup_table(CONFIG_FSL_QMAN_FQ_LOOKUP_MAX);
+	if (ret)
+		return ret;
+#endif
+	return 0;
 }
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index 4a11e40..4b6c13c 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -44,6 +44,10 @@
 #include "dpaa_sys.h"
 #include <fsl_qman.h>
 
+#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
+#error "_ARM64 requires _FSL_QMAN_FQ_LOOKUP"
+#endif
+
 /* Congestion Groups */
 /*
  * This wrapper represents a bit-array for the state of the 256 QMan congestion
@@ -197,6 +201,13 @@ void qm_set_liodns(struct qm_portal_config *pcfg);
 int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
 		       struct qm_mcr_cgrtestwrite *result);
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+/* If the fq object pointer is larger than the context_b field (e.g. on
+ * 64-bit systems), then a lookup table is required.
+ */
+int qman_setup_fq_lookup_table(size_t num_entries);
+#endif
+
 /*   QMan s/w corenet portal, low-level i/face	 */
 
 /*
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 85ae13b..eedfd7e 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -46,6 +46,15 @@ extern "C" {
 
 #include <dpaa_rbtree.h>
 
+/* FQ lookups (turn this on for 64bit user-space) */
+#if (__WORDSIZE == 64)
+#define CONFIG_FSL_QMAN_FQ_LOOKUP
+/* if FQ lookups are supported, this controls the number of initialised,
+ * s/w-consumed FQs that can be supported at any one time.
+ */
+#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
+#endif
+
 /* Last updated for v00.800 of the BG */
 
 /* Hardware constants */
@@ -1228,6 +1237,9 @@ struct qman_fq {
 	enum qman_fq_state state;
 	int cgr_groupid;
 	struct rb_node node;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	u32 key;
+#endif
 };
 
 /*
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 14/41] bus/dpaa: add BMan hardware interfaces
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (12 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 13/41] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 15/41] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
                         ` (29 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman.c        | 394 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman.h        | 550 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_driver.c |  12 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |  16 +
 5 files changed, 973 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 2d626b2..6675e53 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -70,6 +70,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman.c \
 	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman.c b/drivers/bus/dpaa/base/qbman/bman.c
new file mode 100644
index 0000000..be2d970
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.c
@@ -0,0 +1,394 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "bman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define RCR_THRESH	2	/* reread h/w CI when running out of space */
+#define IRQNAME		"BMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "BMan portal %d" */
+
+struct bman_portal {
+	struct bm_portal p;
+	/* 2-element array. pools[0] is mask, pools[1] is snapshot. */
+	struct bman_depletion *pools;
+	int thresh_set;
+	unsigned long irq_sources;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct bm_portal_config *config;
+	char irqname[MAX_IRQNAME];
+};
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static RTE_DEFINE_PER_LCORE(struct bman_portal, bman_affine_portal);
+
+static inline struct bman_portal *get_affine_portal(void)
+{
+	return &RTE_PER_LCORE(bman_affine_portal);
+}
+
+/*
+ * This object type refers to a pool; it isn't *the* pool. There may be
+ * more than one such object per BMan buffer pool, eg. if different users of
+ * the pool are operating via different portals.
+ */
+struct bman_pool {
+	struct bman_pool_params params;
+	/* Used for hash-table admin when using depletion notifications. */
+	struct bman_portal *portal;
+	struct bman_pool *next;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	atomic_t in_use;
+#endif
+};
+
+static inline
+struct bman_portal *bman_create_portal(struct bman_portal *portal,
+				       const struct bm_portal_config *c)
+{
+	struct bm_portal *p;
+	const struct bman_depletion *pools = &c->mask;
+	int ret;
+	u8 bpid = 0;
+
+	p = &portal->p;
+	/*
+	 * prep the low-level portal struct with the mapped addresses from the
+	 * config; everything that follows depends on it, and "config" is kept
+	 * mostly for (de)reference.
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	if (bm_rcr_init(p, bm_rcr_pvb, bm_rcr_cce)) {
+		pr_err("Bman RCR initialisation failed\n");
+		return NULL;
+	}
+	if (bm_mc_init(p)) {
+		pr_err("Bman MC initialisation failed\n");
+		goto fail_mc;
+	}
+	portal->pools = kmalloc(2 * sizeof(*pools), GFP_KERNEL);
+	if (!portal->pools)
+		goto fail_pools;
+	portal->pools[0] = *pools;
+	bman_depletion_init(portal->pools + 1);
+	while (bpid < bman_pool_max) {
+		/*
+		 * Default to all BPIDs disabled, we enable as required at
+		 * run-time.
+		 */
+		bm_isr_bscn_mask(p, bpid, 0);
+		bpid++;
+	}
+	portal->slowpoll = 0;
+	/* Write-to-clear any stale interrupt status bits */
+	bm_isr_disable_write(p, 0xffffffff);
+	portal->irq_sources = 0;
+	bm_isr_enable_write(p, portal->irq_sources);
+	bm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, NULL, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need RCR to be empty before continuing */
+	ret = bm_rcr_get_fill(p);
+	if (ret) {
+		pr_err("Bman RCR unclean\n");
+		goto fail_rcr_empty;
+	}
+	/* Success */
+	portal->config = c;
+
+	bm_isr_disable_write(p, 0);
+	bm_isr_uninhibit(p);
+	return portal;
+fail_rcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->pools);
+fail_pools:
+	bm_mc_finish(p);
+fail_mc:
+	bm_rcr_finish(p);
+	return NULL;
+}
+
+struct bman_portal *
+bman_create_affine_portal(const struct bm_portal_config *c)
+{
+	struct bman_portal *portal = get_affine_portal();
+
+	/* This function is called from a context that is already affine to a
+	 * CPU; in other words, it is non-migratable to other CPUs.
+	 */
+	portal = bman_create_portal(portal, c);
+	if (portal) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		spin_unlock(&affine_mask_lock);
+	}
+	return portal;
+}
+
+static inline
+void bman_destroy_portal(struct bman_portal *bm)
+{
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bm_rcr_cce_update(&bm->p);
+	bm_rcr_cce_update(&bm->p);
+
+	free_irq(pcfg->irq, bm);
+
+	kfree(bm->pools);
+	bm_mc_finish(&bm->p);
+	bm_rcr_finish(&bm->p);
+	bm->config = NULL;
+}
+
+const struct
+bm_portal_config *bman_destroy_affine_portal(void)
+{
+	struct bman_portal *bm = get_affine_portal();
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bman_destroy_portal(bm);
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(pcfg->cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int
+bman_get_portal_index(void)
+{
+	struct bman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+static const u32 zero_thresholds[4] = {0, 0, 0, 0};
+
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params)
+{
+	struct bman_pool *pool = NULL;
+	u32 bpid;
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID) {
+		int ret = bman_alloc_bpid(&bpid);
+
+		if (ret)
+			return NULL;
+	} else {
+		if (params->bpid >= bman_pool_max)
+			return NULL;
+		bpid = params->bpid;
+	}
+	if (params->flags & BMAN_POOL_FLAG_THRESH) {
+		int ret = bm_pool_set(bpid, params->thresholds);
+
+		if (ret)
+			goto err;
+	}
+
+	pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		goto err;
+	pool->params = *params;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	atomic_set(&pool->in_use, 1);
+#endif
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		pool->params.bpid = bpid;
+
+	return pool;
+err:
+	if (params->flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(bpid, zero_thresholds);
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(bpid);
+	kfree(pool);
+
+	return NULL;
+}
+
+void bman_free_pool(struct bman_pool *pool)
+{
+	if (pool->params.flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(pool->params.bpid, zero_thresholds);
+	if (pool->params.flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(pool->params.bpid);
+	kfree(pool);
+}
+
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool)
+{
+	return &pool->params;
+}
+
+static void update_rcr_ci(struct bman_portal *p, int avail)
+{
+	if (avail)
+		bm_rcr_cce_prefetch(&p->p);
+	else
+		bm_rcr_cce_update(&p->p);
+}
+
+#define BMAN_BUF_MASK 0x0000fffffffffffful
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p;
+	struct bm_rcr_entry *r;
+	u32 i = num - 1;
+	u8 avail;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_NO_RELEASE)
+		return -EINVAL;
+#endif
+
+	p = get_affine_portal();
+	avail = bm_rcr_get_avail(&p->p);
+	if (avail < 2)
+		update_rcr_ci(p, avail);
+	r = bm_rcr_start(&p->p);
+	if (unlikely(!r))
+		return -EBUSY;
+
+	/*
+	 * We can copy all but the first entry directly; writing the first
+	 * entry carelessly can trigger badness with the valid-bit.
+	 */
+	r->bufs[0].opaque =
+		cpu_to_be64(((u64)pool->params.bpid << 48) |
+			    (bufs[0].opaque & BMAN_BUF_MASK));
+	if (i) {
+		for (i = 1; i < num; i++)
+			r->bufs[i].opaque =
+				cpu_to_be64(bufs[i].opaque & BMAN_BUF_MASK);
+	}
+
+	bm_rcr_pvb_commit(&p->p, BM_RCR_VERB_CMD_BPID_SINGLE |
+			  (num & BM_RCR_VERB_BUFCOUNT_MASK));
+
+	return 0;
+}
+
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_command *mcc;
+	struct bm_mc_result *mcr;
+	int ret, i;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_ONLY_RELEASE)
+		return -EINVAL;
+#endif
+
+	mcc = bm_mc_start(&p->p);
+	mcc->acquire.bpid = pool->params.bpid;
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_ACQUIRE |
+			(num & BM_MCC_VERB_ACQUIRE_BUFCOUNT));
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	ret = mcr->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT;
+	if (bufs) {
+		for (i = 0; i < num; i++)
+			bufs[i].opaque =
+				be64_to_cpu(mcr->acquire.bufs[i].opaque);
+	}
+	if (ret != num)
+		ret = -ENOMEM;
+	return ret;
+}
+
+int bman_query_pools(struct bm_pool_state *state)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_result *mcr;
+
+	bm_mc_start(&p->p);
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_QUERY);
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & BM_MCR_VERB_CMD_MASK) ==
+		    BM_MCR_VERB_CMD_QUERY);
+	*state = mcr->query;
+	state->as.state.state[0] = be32_to_cpu(state->as.state.state[0]);
+	state->as.state.state[1] = be32_to_cpu(state->as.state.state[1]);
+	state->ds.state.state[0] = be32_to_cpu(state->ds.state.state[0]);
+	state->ds.state.state[1] = be32_to_cpu(state->ds.state.state[1]);
+	return 0;
+}
+
+u32 bman_query_free_buffers(struct bman_pool *pool)
+{
+	return bm_pool_free_buffers(pool->params.bpid);
+}
+
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds)
+{
+	u32 bpid;
+
+	bpid = bman_get_params(pool)->bpid;
+
+	return bm_pool_set(bpid, thresholds);
+}
+
+int bman_shutdown_pool(u32 bpid)
+{
+	struct bman_portal *p = get_affine_portal();
+
+	return bm_shutdown_pool(&p->p, bpid);
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman.h b/drivers/bus/dpaa/base/qbman/bman.h
new file mode 100644
index 0000000..9c66797
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.h
@@ -0,0 +1,550 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_H
+#define __BMAN_H
+
+#include "bman_priv.h"
+
+/* Cache-inhibited register offsets */
+#define BM_REG_RCR_PI_CINH	0x3000
+#define BM_REG_RCR_CI_CINH	0x3100
+#define BM_REG_RCR_ITR		0x3200
+#define BM_REG_CFG		0x3300
+#define BM_REG_SCN(n)		(0x3400 + ((n) << 6))
+#define BM_REG_ISR		0x3e00
+#define BM_REG_IIR		0x3ec0
+
+/* Cache-enabled register offsets */
+#define BM_CL_CR		0x0000
+#define BM_CL_RR0		0x0100
+#define BM_CL_RR1		0x0140
+#define BM_CL_RCR		0x1000
+#define BM_CL_RCR_PI_CENA	0x3000
+#define BM_CL_RCR_CI_CENA	0x3100
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrades performance. Hence the use of the __raw_*() interfaces,
+ * which simply ensure that the compiler treats the portal registers as
+ * volatile (i.e. non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __bm_in(bm, o)		be32_to_cpu(__raw_readl((bm)->ci + (o)))
+#define __bm_out(bm, o, val)    __raw_writel(cpu_to_be32(val), \
+					     (bm)->ci + (o))
+#define bm_in(reg)		__bm_in(&portal->addr, BM_REG_##reg)
+#define bm_out(reg, val)	__bm_out(&portal->addr, BM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __bm_cl_touch_ro(bm, o) dcbt_ro((bm)->ce + (o))
+#define __bm_cl_touch_rw(bm, o) dcbt_rw((bm)->ce + (o))
+#define __bm_cl_in(bm, o)	be32_to_cpu(__raw_readl((bm)->ce + (o)))
+#define __bm_cl_out(bm, o, val) \
+	do { \
+		u32 *__tmpclout = (bm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __bm_cl_invalidate(bm, o) dccivac((bm)->ce + (o))
+#define bm_cl_touch_ro(reg) __bm_cl_touch_ro(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_touch_rw(reg) __bm_cl_touch_rw(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_in(reg)	    __bm_cl_in(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_out(reg, val) __bm_cl_out(&portal->addr, BM_CL_##reg##_CENA, val)
+#define bm_cl_invalidate(reg)\
+	__bm_cl_invalidate(&portal->addr, BM_CL_##reg##_CENA)
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
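+/* Example: bm_cyc_diff(8, 6, 2) == 4; the distance from index 6 to index 2
+ * wraps past the end of an 8-entry ring (8 + 2 - 6).
+ */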
+static inline u8 bm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
+
+/* Portal modes.
+ *   Enum types:
+ *     pmode == production mode
+ *     cmode == consumption mode
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate:
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ */
+enum bm_rcr_pmode {		/* matches BCSP_CFG::RPM */
+	bm_rcr_pci = 0,		/* PI index, cache-inhibited */
+	bm_rcr_pce = 1,		/* PI index, cache-enabled */
+	bm_rcr_pvb = 2		/* valid-bit */
+};
+
+enum bm_rcr_cmode {		/* s/w-only */
+	bm_rcr_cci,		/* CI index, cache-inhibited */
+	bm_rcr_cce		/* CI index, cache-enabled */
+};
+
+/* --- Portal structures --- */
+
+#define BM_RCR_SIZE		8
+
+struct bm_rcr {
+	struct bm_rcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	u32 busy;
+	enum bm_rcr_pmode pmode;
+	enum bm_rcr_cmode cmode;
+#endif
+};
+
+struct bm_mc {
+	struct bm_mc_command *cr;
+	struct bm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum {
+		/* Can only be _mc_start()ed */
+		mc_idle,
+		/* Can only be _mc_commit()ed or _mc_abort()ed */
+		mc_user,
+		/* Can only be _mc_retry()ed */
+		mc_hw
+	} state;
+#endif
+};
+
+struct bm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct bm_portal {
+	struct bm_addr addr;
+	struct bm_rcr rcr;
+	struct bm_mc mc;
+	struct bm_portal_config config;
+} ____cacheline_aligned;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define RCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(BM_RCR_SIZE << 6)))
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 RCR_PTR2IDX(struct bm_rcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (BM_RCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void RCR_INC(struct bm_rcr *rcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates
+	 * fast code with essentially no branching overheads. We increment to
+	 * the next RCR pointer and handle overflow and 'vbit'.
+	 */
+	struct bm_rcr_entry *partial = rcr->cursor + 1;
+
+	rcr->cursor = RCR_CARRYCLEAR(partial);
+	if (partial != rcr->cursor)
+		rcr->vbit ^= BM_RCR_VERB_VBIT;
+}
+
+static inline int bm_rcr_init(struct bm_portal *portal, enum bm_rcr_pmode pmode,
+			      __maybe_unused enum bm_rcr_cmode cmode)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct bm_rcr *rcr = &portal->rcr;
+	u32 cfg;
+	u8 pi;
+
+	rcr->ring = portal->addr.ce + BM_CL_RCR;
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	rcr->cursor = rcr->ring + pi;
+	rcr->vbit = (bm_in(RCR_PI_CINH) & BM_RCR_SIZE) ?  BM_RCR_VERB_VBIT : 0;
+	rcr->available = BM_RCR_SIZE - 1
+		- bm_cyc_diff(BM_RCR_SIZE, rcr->ci, pi);
+	rcr->ithresh = bm_in(RCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+	rcr->pmode = pmode;
+	rcr->cmode = cmode;
+#endif
+	cfg = (bm_in(CFG) & 0xffffffe0) | (pmode & 0x3); /* BCSP_CFG::RPM */
+	bm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void bm_rcr_finish(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	u8 ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	DPAA_ASSERT(!rcr->busy);
+	if (pi != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("losing uncommitted RCR entries\n");
+	if (ci != rcr->ci)
+		pr_crit("missing existing RCR completions\n");
+	if (rcr->ci != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("RCR destroyed unquiesced\n");
+}
+
+static inline struct bm_rcr_entry *bm_rcr_start(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(!rcr->busy);
+	if (!rcr->available)
+		return NULL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 1;
+#endif
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline struct bm_rcr_entry *bm_rcr_pend_and_next(
+					struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode != bm_rcr_pvb);
+	if (rcr->available == 1)
+		return NULL;
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcr->cursor);
+	RCR_INC(rcr);
+	rcr->available--;
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_pci_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pci);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	hwsync();
+	bm_out(RCR_PI_CINH, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	bm_cl_invalidate(RCR_PI);
+	bm_cl_touch_rw(RCR_PI);
+}
+
+static inline void bm_rcr_pce_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	lwsync();
+	bm_cl_out(RCR_PI, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pvb_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	struct bm_rcr_entry *rcursor;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pvb);
+	lwsync();
+	rcursor = rcr->cursor;
+	rcursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcursor);
+	RCR_INC(rcr);
+	rcr->available--;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	rcr->busy = 0;
+#endif
+}
+
+static inline u8 bm_rcr_cci_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cci);
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline void bm_rcr_cce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	bm_cl_touch_ro(RCR_CI);
+}
+
+static inline u8 bm_rcr_cce_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	rcr->ci = bm_cl_in(RCR_CI) & (BM_RCR_SIZE - 1);
+	bm_cl_invalidate(RCR_CI);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline u8 bm_rcr_get_ithresh(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->ithresh;
+}
+
+static inline void bm_rcr_set_ithresh(struct bm_portal *portal, u8 ithresh)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	rcr->ithresh = ithresh;
+	bm_out(RCR_ITR, ithresh);
+}
+
+static inline u8 bm_rcr_get_avail(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->available;
+}
+
+static inline u8 bm_rcr_get_fill(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return BM_RCR_SIZE - 1 - rcr->available;
+}
+
+/* --- Management command API --- */
+
+static inline int bm_mc_init(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + BM_CL_CR;
+	mc->rr = portal->addr.ce + BM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			BM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? BM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+	return 0;
+}
+
+static inline void bm_mc_finish(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (mc->state != mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct bm_mc_command *bm_mc_start(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void bm_mc_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_user);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+}
+
+static inline void bm_mc_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_hw;
+#endif
+}
+
+static inline struct bm_mc_result *bm_mc_result(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= BM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = mc_idle;
+#endif
+	return rr;
+}
+
+#define SCN_REG(bpid) BM_REG_SCN((bpid) / 32)
+#define SCN_BIT(bpid) (0x80000000 >> ((bpid) & 31))
+static inline void bm_isr_bscn_mask(struct bm_portal *portal, u8 bpid,
+				    int enable)
+{
+	u32 val;
+
+	DPAA_ASSERT(bpid < bman_pool_max);
+	/* REG_SCN for bpid=0..31, REG_SCN+4 for bpid=32..63 */
+	val = __bm_in(&portal->addr, SCN_REG(bpid));
+	if (enable)
+		val |= SCN_BIT(bpid);
+	else
+		val &= ~SCN_BIT(bpid);
+	__bm_out(&portal->addr, SCN_REG(bpid), val);
+}
+
+static inline u32 __bm_isr_read(struct bm_portal *portal, enum bm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 6));
+#else
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __bm_isr_write(struct bm_portal *portal, enum bm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 6), val);
+#else
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 2), val);
+#endif
+}
+
+/* Buffer Pool Cleanup */
+static inline int bm_shutdown_pool(struct bm_portal *p, u32 bpid)
+{
+	struct bm_mc_command *bm_cmd;
+	struct bm_mc_result *bm_res;
+
+	int aq_count = 0;
+	bool stop = false;
+
+	while (!stop) {
+		/* Acquire buffers until empty */
+		bm_cmd = bm_mc_start(p);
+		bm_cmd->acquire.bpid = bpid;
+		bm_mc_commit(p, BM_MCC_VERB_CMD_ACQUIRE | 1);
+		while (!(bm_res = bm_mc_result(p)))
+			cpu_relax();
+		if (!(bm_res->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT)) {
+			/* Pool is empty */
+			stop = true;
+		} else {
+			++aq_count;
+		}
+	}
+	return 0;
+}
+
+#endif /* __BMAN_H */
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
index fb3c50e..5c13a80 100644
--- a/drivers/bus/dpaa/base/qbman/bman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -65,6 +65,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct bman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -111,6 +112,14 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 	/* Use the IRQ FD as a unique IRQ number */
 	pcfg.irq = fd;
 
+	portal = bman_create_affine_portal(&pcfg);
+	if (!portal) {
+		pr_err("Bman portal initialisation failed (%d)",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	/* Set the IRQ number */
 	irq_map.type = dpaa_portal_bman;
 	irq_map.portal_cinh = map.addr.cinh;
@@ -120,10 +129,13 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 
 static int fsl_bman_portal_finish(void)
 {
+	__maybe_unused const struct bm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = bman_destroy_affine_portal();
+	DPAA_BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
index 690576a..35dba7f 100644
--- a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -41,6 +41,22 @@
 #include "dpaa_sys.h"
 #include <process.h>
 #include <fsl_qman.h>
+#include <fsl_bman.h>
+
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_bpid, result, count, align, partial);
+}
+
+void bman_release_bpid_range(u32 bpid, u32 count)
+{
+	process_release(dpaa_id_bpid, bpid, count);
+}
+
+int bman_reserve_bpid_range(u32 bpid, u32 count)
+{
+	return process_reserve(dpaa_id_bpid, bpid, count);
+}
 
 int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
 {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 15/41] bus/dpaa: add fman flow control threshold setting
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (13 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 14/41] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 16/41] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
                         ` (28 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman_hw.c | 28 ++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_fman.h  |  7 +++++++
 2 files changed, 35 insertions(+)

diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index a7ca661..077c17c 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -37,6 +37,7 @@
  */
 #include <fsl_fman.h>
 #include <fsl_fman_crc64.h>
+#include <fsl_bman.h>
 
 /* Instantiate the global variable that the inline CRC64 implementation (in
  * <fsl_fman.h>) depends on.
@@ -393,6 +394,33 @@ fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
 }
 
 int
+fman_if_get_fc_threshold(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	return in_be32(fmbm_mpd);
+}
+
+int
+fman_if_set_fc_threshold(struct fman_if *fm_if, u32 high_water,
+			 u32 low_water, u32 bpid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	out_be32(fmbm_mpd, FMAN_ENABLE_BPOOL_DEPLETION);
+	return bm_pool_set_hw_threshold(bpid, low_water, high_water);
+}
+
+int
 fman_if_get_fc_quanta(struct fman_if *fm_if)
 {
 	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index ac38082..95aee67 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -112,6 +112,13 @@ void fman_if_loopback_disable(struct fman_if *p);
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
+/* Get Flow Control threshold parameters on specific interface */
+int fman_if_get_fc_threshold(struct fman_if *fm_if);
+
+/* Enable and Set Flow Control threshold parameters on specific interface */
+int fman_if_set_fc_threshold(struct fman_if *fm_if,
+			u32 high_water, u32 low_water, u32 bpid);
+
 /* Get Flow Control pause quanta on specific interface */
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 16/41] bus/dpaa: integrate DPAA Bus with hardware blocks
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (14 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 15/41] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 17/41] doc: add NXP DPAA PMD documentation Shreyansh Jain
                         ` (27 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Now that the QBMAN (QMan, BMan) and FMan drivers are available, this patch
integrates them with the DPAA bus driver, which uses them to scan for
devices and to invoke the probe callbacks registered by PMDs.
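
As an illustration (not part of this patch), a PMD hooks into this flow by
registering itself with the bus. The sketch below is hypothetical: the probe
callback signature and the structure field names are assumptions based on
the bus header, with only rte_dpaa_driver_register() confirmed by the
exported symbol list:

    static int my_probe(struct rte_dpaa_driver *drv,
                        struct rte_dpaa_device *dev)
    {
        /* set up the device instance backed by 'dev' here */
        return 0;
    }

    static struct rte_dpaa_driver my_drv = {
        .drv_type = FSL_DPAA_ETH,  /* match Ethernet devices */
        .probe = my_probe,
    };

    /* called once at startup, e.g. from a constructor */
    rte_dpaa_driver_register(&my_drv);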

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c               | 248 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  41 +++++
 drivers/bus/dpaa/rte_dpaa_bus.h           |   9 ++
 3 files changed, 298 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index cc343b3..8017df3 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -63,9 +63,21 @@
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 int dpaa_logtype_bus;
 
 struct rte_dpaa_bus rte_dpaa_bus;
+struct netcfg_info *dpaa_netcfg;
+
+/* define a variable to hold the portal_key, once created.*/
+pthread_key_t dpaa_portal_key;
+
+RTE_DEFINE_PER_LCORE(bool, _dpaa_io);
 
 static inline void
 dpaa_add_to_device_list(struct rte_dpaa_device *dev)
@@ -79,11 +91,247 @@ dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
 	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
 }
 
+static void dpaa_clean_device_list(void);
+
+static int
+dpaa_create_device_list(void)
+{
+	int i;
+	int ret;
+	struct rte_dpaa_device *dev;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+
+	/* Creating Ethernet Devices */
+	for (i = 0; i < dpaa_netcfg->num_ethports; i++) {
+		dev = calloc(1, sizeof(struct rte_dpaa_device));
+		if (!dev) {
+			DPAA_BUS_LOG(ERR, "Failed to allocate ETH devices");
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+
+		cfg = &dpaa_netcfg->port_cfg[i];
+		fman_intf = cfg->fman_if;
+
+		/* Device identifiers */
+		dev->id.fman_id = fman_intf->fman_idx + 1;
+		dev->id.mac_id = fman_intf->mac_idx;
+		dev->device_type = FSL_DPAA_ETH;
+		dev->id.dev_id = i;
+
+		/* Create device name */
+		memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		sprintf(dev->name, "fm%d-mac%d", (fman_intf->fman_idx + 1),
+			fman_intf->mac_idx);
+		DPAA_BUS_LOG(DEBUG, "Device added: %s", dev->name);
+		dev->device.name = dev->name;
+
+		dpaa_add_to_device_list(dev);
+	}
+
+	rte_dpaa_bus.device_count = i;
+
+	return 0;
+
+cleanup:
+	dpaa_clean_device_list();
+	return ret;
+}
+
+static void
+dpaa_clean_device_list(void)
+{
+	struct rte_dpaa_device *dev = NULL;
+	struct rte_dpaa_device *tdev = NULL;
+
+	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+		free(dev);
+		dev = NULL;
+	}
+}
+
+/** XXX move this function into a separate file */
+static int
+_dpaa_portal_init(void *arg)
+{
+	cpu_set_t cpuset;
+	pthread_t id;
+	uint32_t cpu = rte_lcore_id();
+	int ret;
+	struct dpaa_portal *dpaa_io_portal;
+
+	BUS_INIT_FUNC_TRACE();
+
+	if ((uint64_t)arg == 1 || cpu == LCORE_ID_ANY)
+		cpu = rte_get_master_lcore();
+	else if (cpu >= RTE_MAX_LCORE)
+		/* the core id is not supported */
+		return -1;
+
+	/* Set CPU affinity for this thread */
+	CPU_ZERO(&cpuset);
+	CPU_SET(cpu, &cpuset);
+	id = pthread_self();
+	ret = pthread_setaffinity_np(id, sizeof(cpu_set_t), &cpuset);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "pthread_setaffinity_np failed on "
+			"core :%d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	/* Initialise bman thread portals */
+	ret = bman_thread_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "bman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "BMAN thread initialized");
+
+	/* Initialise qman thread portals */
+	ret = qman_thread_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "bman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		bman_thread_finish();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
+
+	dpaa_io_portal = rte_malloc(NULL, sizeof(struct dpaa_portal),
+				    RTE_CACHE_LINE_SIZE);
+	if (!dpaa_io_portal) {
+		DPAA_BUS_LOG(ERR, "Unable to allocate memory");
+		bman_thread_finish();
+		qman_thread_finish();
+		return -ENOMEM;
+	}
+
+	dpaa_io_portal->qman_idx = qman_get_portal_index();
+	dpaa_io_portal->bman_idx = bman_get_portal_index();
+	dpaa_io_portal->tid = syscall(SYS_gettid);
+
+	ret = pthread_setspecific(dpaa_portal_key, (void *)dpaa_io_portal);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "pthread_setspecific failed on "
+			    "core %d with ret: %d", cpu, ret);
+		dpaa_portal_finish(dpaa_io_portal);
+
+		return ret;
+	}
+
+	RTE_PER_LCORE(_dpaa_io) = true;
+
+	DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
+
+	return 0;
+}
+
+/*
+ * rte_dpaa_portal_init - Wrapper over _dpaa_portal_init with thread level check
+ * XXX Complete this
+ */
+int
+rte_dpaa_portal_init(void *arg)
+{
+	if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
+		return _dpaa_portal_init(arg);
+
+	return 0;
+}
+
+void
+dpaa_portal_finish(void *arg)
+{
+	struct dpaa_portal *dpaa_io_portal = (struct dpaa_portal *)arg;
+
+	if (!dpaa_io_portal) {
+		DPAA_BUS_LOG(DEBUG, "Portal already cleaned");
+		return;
+	}
+
+	bman_thread_finish();
+	qman_thread_finish();
+
+	pthread_setspecific(dpaa_portal_key, NULL);
+
+	rte_free(dpaa_io_portal);
+	dpaa_io_portal = NULL;
+
+	RTE_PER_LCORE(_dpaa_io) = false;
+}
+
+#define DPAA_DEV_PATH1 "/sys/devices/platform/soc/soc:fsl,dpaa"
+#define DPAA_DEV_PATH2 "/sys/devices/platform/fsl,dpaa"
+
 static int
 rte_dpaa_bus_scan(void)
 {
+	int ret;
+
 	BUS_INIT_FUNC_TRACE();
 
+	if ((access(DPAA_DEV_PATH1, F_OK) != 0) &&
+	    (access(DPAA_DEV_PATH2, F_OK) != 0)) {
+		RTE_LOG(DEBUG, EAL, "DPAA Bus not present. Skipping.\n");
+		return 0;
+	}
+
+	/* Load the device-tree driver */
+	ret = of_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "of_init failed with ret: %d", ret);
+		return -1;
+	}
+
+	/* Get the interface configurations from device-tree */
+	dpaa_netcfg = netcfg_acquire();
+	if (!dpaa_netcfg) {
+		DPAA_BUS_LOG(ERR, "netcfg_acquire failed");
+		return -EINVAL;
+	}
+
+	RTE_LOG(NOTICE, EAL, "DPAA Bus Detected\n");
+
+	if (!dpaa_netcfg->num_ethports) {
+		DPAA_BUS_LOG(INFO, "no network interfaces available");
+		/* This is not an error */
+		return 0;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "Bus: Address of netcfg=%p, Ethports=%d",
+		     dpaa_netcfg, dpaa_netcfg->num_ethports);
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dump_netcfg(dpaa_netcfg);
+#endif
+
+	DPAA_BUS_LOG(DEBUG, "Number of devices = %d\n",
+		     dpaa_netcfg->num_ethports);
+	ret = dpaa_create_device_list();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to create device list. (%d)", ret);
+		return ret;
+	}
+
+	/* create the key, supplying a function that'll be invoked
+	 * when a portal-affined thread is deleted.
+	 */
+	ret = pthread_key_create(&dpaa_portal_key, dpaa_portal_finish);
+	if (ret) {
+		DPAA_BUS_LOG(DEBUG, "Unable to create pthread key. (%d)", ret);
+		dpaa_clean_device_list();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "dpaa_portal_key=%u, ret=%d\n",
+		    (unsigned int)dpaa_portal_key, ret);
+
 	return 0;
 }
 
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index d97a009..f82643e 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,7 +1,48 @@
 DPDK_17.11 {
 	global:
 
+	bman_acquire;
+	bman_free_pool;
+	bman_get_params;
+	bman_new_pool;
+	bman_release;
+	dpaa_netcfg;
+	fman_ccsr_map_fd;
+	fman_dealloc_bufs_mask_hi;
+	fman_dealloc_bufs_mask_lo;
+	fman_if_add_mac_addr;
+	fman_if_clear_mac_addr;
+	fman_if_disable_rx;
+	fman_if_discard_rx_errors;
+	fman_if_enable_rx;
+	fman_if_get_fc_quanta;
+	fman_if_get_fc_threshold;
+	fman_if_loopback_disable;
+	fman_if_loopback_enable;
+	fman_if_promiscuous_disable;
+	fman_if_promiscuous_enable;
+	fman_if_reset_mcast_filter_table;
+	fman_if_set_bp;
+	fman_if_set_fc_quanta;
+	fman_if_set_fc_threshold;
+	fman_if_set_fdoff;
+	fman_if_set_ic_params;
+	fman_if_set_maxfrm;
+	fman_if_set_mcast_filter_table;
+	fman_if_stats_get;
+	fman_if_stats_reset;
+	netcfg_acquire;
+	netcfg_release;
+	qman_create_fq;
+	qman_dequeue;
+	qman_dqrr_consume;
+	qman_enqueue_multi;
+	qman_init_fq;
+	qman_reserve_fqid_range;
+	qman_set_vdq;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
+	rte_dpaa_mem_ptov;
+	rte_dpaa_portal_init;
 
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 8a1e192..eafc944 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -35,6 +35,12 @@
 #include <rte_bus.h>
 #include <rte_mempool.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 #define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
 
 #define DEV_TO_DPAA_DEVICE(ptr)	\
@@ -47,6 +53,9 @@ struct rte_dpaa_driver;
 TAILQ_HEAD(rte_dpaa_device_list, rte_dpaa_device);
 TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
 
+/* Configuration variables exported from DPAA bus */
+extern struct netcfg_info *dpaa_netcfg;
+
 enum rte_dpaa_type {
 	FSL_DPAA_ETH = 1,
 	FSL_DPAA_CRYPTO,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 17/41] doc: add NXP DPAA PMD documentation
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (15 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 16/41] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-18 14:53         ` Ferruh Yigit
  2017-09-18 18:33         ` Mcnamara, John
  2017-09-09 11:21       ` [PATCH v4 18/41] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
                         ` (26 subsequent siblings)
  43 siblings, 2 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                       |   2 +
 doc/guides/nics/dpaa.rst          | 374 ++++++++++++++++++++++++++++++++++++++
 doc/guides/nics/features/dpaa.ini |   8 +
 doc/guides/nics/index.rst         |   1 +
 4 files changed, 385 insertions(+)
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index 6ee20ce..10646a4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -409,6 +409,8 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: doc/guides/nics/dpaa.rst
+F: doc/guides/nics/features/dpaa.ini
 
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
new file mode 100644
index 0000000..404efcb
--- /dev/null
+++ b/doc/guides/nics/dpaa.rst
@@ -0,0 +1,374 @@
+..  BSD LICENSE
+    Copyright 2017 NXP.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+DPAA Poll Mode Driver
+=====================
+
+The DPAA NIC PMD (**librte_pmd_dpaa**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP DPAA** SoC family.
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+NXP DPAA (Data Path Acceleration Architecture - Gen 1)
+------------------------------------------------------
+
+This section provides an overview of the NXP DPAA architecture
+and how it is integrated into the DPDK.
+
+Contents summary
+
+- DPAA overview
+- DPAA driver architecture overview
+
+.. _dpaa_overview:
+
+DPAA Overview
+~~~~~~~~~~~~~
+
+Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.
+
+The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
+components on specific QorIQ series multicore processors. This architecture
+provides the infrastructure to support simplified sharing of networking
+interfaces and accelerators by multiple CPU cores, and the accelerators
+themselves.
+
+DPAA includes:
+
+- Cores
+- Network and packet I/O
+- Hardware offload accelerators
+- Infrastructure required to facilitate flow of packets between the components above
+
+Infrastructure components are:
+
+- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
+  It allows CPUs and other accelerators connected to the SoC datapath to
+  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
+  data exchange among CPUs and datapath accelerators.
+- The Buffer Manager (BMan) is a hardware buffer pool management block that
+  allows software and accelerators on the datapath to acquire and release
+  buffers in order to build frames.
+
+Hardware accelerators are:
+
+- SEC - Cryptographic accelerator
+- PME - Pattern matching engine
+
+The Network and packet I/O component:
+
+- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
+  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
+  distribution and policing. Each frame can be parsed and classified, and the
+  results may be attached to the frame. This metadata can be used to select
+  the particular QMan queue to which the packet is forwarded.
+
+
+DPAA DPDK - Poll Mode Driver Overview
+-------------------------------------
+
+This section provides an overview of the drivers for DPAA:
+
+* Bus driver and associated "DPAA infrastructure" drivers
+* Functional object drivers (such as Ethernet).
+
+A brief description of each driver is provided in the layout below, as well
+as in the following sections.
+
+.. code-block:: console
+
+                                       +------------+
+                                       | DPDK DPAA  |
+                                       |    PMD     |
+                                       +-----+------+
+                                             |
+                                       +-----+------+       +---------------+
+                                       :  Ethernet  :.......| DPDK DPAA     |
+                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
+                   .                   +---+---+----+       |  (BMAN)       |
+                  .                        ^   |            +-----+---------+
+                 .                         |   |<enqueue,         .
+                .                          |   | dequeue>         .
+               .                           |   |                  .
+              .                        +---+---V----+             .
+             .      . . . . . . . . . .: Portal drv :             .
+            .      .                   :            :             .
+           .      .                    +-----+------+             .
+          .      .                     :   QMAN     :             .
+         .      .                      :  Driver    :             .
+    +----+------+-------+              +-----+------+             .
+    |   DPDK DPAA Bus   |                    |                    .
+    |   driver          |....................|.....................
+    |   /bus/dpaa       |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|========================
+                                            PHY
+    =========================================|========================
+
+In the above representation, solid lines represent components which interface
+with the DPDK RTE framework, and dotted lines represent DPAA internal
+components.
+
+DPAA Bus driver
+~~~~~~~~~~~~~~~
+
+The DPAA bus driver is a ``rte_bus`` driver which scans the SoC in the manner
+of a platform bus.
+Key functions include:
+
+- Scanning and parsing the various objects and adding them to their respective
+  device list.
+- Performing a probe of available drivers against each scanned device
+- Creating the necessary Ethernet device instance before passing control to
+  the PMD
+
+DPAA NIC Driver (PMD)
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
+between the RTE framework and the DPAA internal components/drivers.
+
+- Once devices have been identified by the DPAA bus, each device is associated
+  with the PMD
+- The PMD is responsible for implementing the necessary glue layer between the
+  RTE APIs and the lower level QMan and FMan blocks.
+  The Ethernet driver is bound to an FMan port and implements the interfaces
+  needed to connect the DPAA network interface to the network stack.
+  Each FMan port corresponds to a DPDK network interface.
+
+
+Features
+^^^^^^^^
+
+Features of the DPAA PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+
+DPAA Mempool Driver
+~~~~~~~~~~~~~~~~~~~
+
+DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
+Manager.
+
+- Using the standard mempool operations RTE API, the mempool driver interfaces
+  with RTE to service each mempool creation, deletion, buffer allocation and
+  deallocation request.
+- Each FMAN instance has a BMan pool attached to it during initialization.
+  Each Tx frame can be automatically released by hardware, if allocated from
+  this pool.
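+
+As an illustration (the sizes below are placeholders, not values mandated by
+the driver), an application does not call the BMan APIs directly; it simply
+creates a pool through the standard mbuf helper, which picks up the default
+mempool ops:
+
+.. code-block:: c
+
+   #include <rte_mbuf.h>
+   #include <rte_lcore.h>
+
+   /* With CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa", this pool is
+    * serviced by the BMan backed mempool driver.
+    */
+   struct rte_mempool *mp =
+           rte_pktmbuf_pool_create("pkt_pool", 8192, 256, 0,
+                                   RTE_MBUF_DEFAULT_BUF_SIZE,
+                                   rte_socket_id());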
+
+
+Supported DPAA SoCs
+-------------------
+
+- LS1043A/LS1023A
+- LS1046A/LS1026A
+
+Prerequisites
+-------------
+
+There are four main prerequisites for executing the DPAA PMD on a DPAA
+compatible board:
+
+1. **ARM 64 Tool Chain**
+
+   For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
+
+2. **Linux Kernel**
+
+   It can be obtained from `NXP's Github hosting <https://github.com/qoriq-open-source/linux>`_.
+
+3. **Root file system**
+
+   Any filesystem supporting *aarch64* can be used. For example, an
+   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland, which can be obtained
+   from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+4. **FMC Tool**
+
+   Before any DPDK application can be executed, the Frame Manager Configuration
+   Tool (FMC) needs to be executed to set the configuration of the queues. This
+   includes the queue state, RSS and other policies.
+   This tool can be obtained from `NXP (Freescale) Public Git Repository <http://git.freescale.com/git/cgit.cgi/ppc/sdk/fmc.git>`_.
+   This tool needs configuration files which are available in the
+   :ref:`DPDK Extra Scripts <extra_scripts>`, described below.
+
+As an alternative method, the DPAA PMD can also be executed using images
+provided as part of the SDK from NXP. The SDK includes all of the above
+prerequisites necessary to bring up a DPAA board.
+
+The following dependencies are not part of DPDK and must be installed
+separately:
+
+- **NXP Linux SDK**
+
+  The NXP Linux software development kit (SDK) includes support for the
+  family of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+
+.. _extra_scripts:
+
+- **DPDK Extra Scripts**
+
+  DPAA based resources can be configured easily with the help of ready-made
+  scripts provided in the DPDK Extras repository.
+
+  `DPDK Extras Scripts <https://github.com/qoriq-open-source/dpdk-extras>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- Supported architectures:  **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
+  to setup the basic DPDK environment.
+
+.. note::
+
+   Some parts of the dpaa bus code (the qbman and fman library routines) are
+   dual licensed (BSD & GPLv2).
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``n``)
+
+  Toggles compilation of the ``librte_bus_dpaa`` driver. By default it is
+  enabled only for the defconfig_arm64-dpaa-* config.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``n``)
+
+  Toggles compilation of the ``librte_pmd_dpaa`` driver. By default it is
+  enabled only for the defconfig_arm64-dpaa-* config.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)
+
+  Toggle display of generic debugging messages
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT`` (default ``n``)
+
+  Toggle display of initialization related messages.
+
+- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (default ``dpaa``)
+
+  This is not a DPAA-specific configuration - it is a generic RTE config.
+  For optimal performance and hardware utilization, it is expected that the
+  DPAA Mempool driver is used for mempools. For that, this configuration
+  needs to be enabled.
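+
+  For example, in the build-time ``config`` file:
+
+  .. code-block:: console
+
+     CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"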
+
+Environment Variables
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA drivers use the following environment variables to configure their
+state during application initialization:
+
+- ``DPAA_NUM_RX_QUEUES`` (default 1)
+
+  This defines the number of Rx queues configured for an application, per
+  port. On Rx, hardware distributes incoming packets across this many queues.
+  If the application is configured to use fewer queues than this, it might
+  result in packet loss (because of the distribution).
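+
+  For example, to distribute Rx traffic across 4 queues per port (an
+  illustrative value), export the variable before launching the application:
+
+  .. code-block:: console
+
+     export DPAA_NUM_RX_QUEUES=4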
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+#. Running testpmd:
+
+   Follow instructions available in the document
+   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+   to run testpmd.
+
+   Example output:
+
+   .. code-block:: console
+
+      ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
+        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
+
+      .....
+      EAL: Registered [pci] bus.
+      EAL: Registered [dpaa] bus.
+      EAL: Detected 4 lcore(s)
+      .....
+      EAL: dpaa: Bus scan completed
+      .....
+      Configuring Port 0 (socket 0)
+      Port 0: 00:00:00:00:00:01
+      Configuring Port 1 (socket 0)
+      Port 1: 00:00:00:00:00:02
+      .....
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Port 1 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+Limitations
+-----------
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+DPAA drivers for DPDK can only work on NXP SoCs as listed in the
+``Supported DPAA SoCs``.
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA SoC family supports a maximum jumbo frame size of 10240 bytes. This
+value is fixed and cannot be changed. So, even when the
+``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
+lower than 10240, frames up to 10240 bytes can still reach the host interface.
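+
+For example (illustrative only), even after requesting a lower limit:
+
+.. code-block:: c
+
+   uint8_t port_id = 0;               /* illustrative port */
+   struct rte_eth_conf conf = { 0 };
+
+   conf.rxmode.max_rx_pkt_len = 1518; /* requested limit */
+   rte_eth_dev_configure(port_id, 1, 1, &conf);
+
+frames larger than 1518 bytes, up to 10240 bytes, may still be delivered to
+the host interface.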
+
+Multiprocess Support
+~~~~~~~~~~~~~~~~~~~~
+
+The current version of the DPAA driver doesn't support multi-process
+applications where I/O is performed using secondary processes. This feature
+will be implemented in subsequent versions.
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
new file mode 100644
index 0000000..9e8befc
--- /dev/null
+++ b/doc/guides/nics/features/dpaa.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'dpaa' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+ARMv8                = Y
+Usage doc            = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 36f4f3f..4115141 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -43,6 +43,7 @@ Network Interface Controller Drivers
     bnx2x
     bnxt
     cxgbe
+    dpaa
     dpaa2
     e1000em
     ena
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 18/41] bus/dpaa: add DPAA mempool logging macros
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (16 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 17/41] doc: add NXP DPAA PMD documentation Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 19/41] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
                         ` (25 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c      |  5 +++++
 drivers/bus/dpaa/rte_dpaa_logs.h | 28 ++++++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 8017df3..dc2b3ad 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -70,6 +70,7 @@
 #include <netcfg.h>
 
 int dpaa_logtype_bus;
+int dpaa_logtype_mempool;
 
 struct rte_dpaa_bus rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
@@ -452,4 +453,8 @@ dpaa_init_log(void)
 	dpaa_logtype_bus = rte_log_register("bus.dpaa");
 	if (dpaa_logtype_bus >= 0)
 		rte_log_set_level(dpaa_logtype_bus, RTE_LOG_NOTICE);
+
+	dpaa_logtype_mempool = rte_log_register("mempool.dpaa");
+	if (dpaa_logtype_mempool >= 0)
+		rte_log_set_level(dpaa_logtype_mempool, RTE_LOG_NOTICE);
 }
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index 3ca3f9b..253962f 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -36,6 +36,7 @@
 #include <rte_log.h>
 
 extern int dpaa_logtype_bus;
+extern int dpaa_logtype_mempool;
 
 #define DPAA_BUS_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -63,4 +64,31 @@ extern int dpaa_logtype_bus;
 #define DPAA_BUS_WARN(fmt, args...) \
 	DPAA_BUS_LOG(WARNING, fmt, ## args)
 
+/* Mempool related logs */
+
+#define DPAA_MEMPOOL_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_mempool, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define MEMPOOL_INIT_FUNC_TRACE() DPAA_MEMPOOL_LOG(DEBUG, " >>")
+
+/* DEBUG is conditional to compiled configuration */
+#ifdef RTE_LIBRTE_DPAA_MEMPOOL_DEBUG
+#define DPAA_MEMPOOL_DEBUG(fmt, args...) \
+	DPAA_MEMPOOL_LOG(DEBUG, fmt, ## args)
+
+#else /* RTE_LIBRTE_DPAA_MEMPOOL_DEBUG */
+#define DPAA_MEMPOOL_DEBUG(fmt, args...) do { } while (0)
+#endif /* RTE_LIBRTE_DPAA_MEMPOOL_DEBUG */
+
+/* WARNING, ERR and INFO are unconditional */
+#define DPAA_MEMPOOL_ERR(fmt, args...) \
+	DPAA_MEMPOOL_LOG(ERR, fmt, ## args)
+
+#define DPAA_MEMPOOL_INFO(fmt, args...) \
+	DPAA_MEMPOOL_LOG(INFO, fmt, ## args)
+
+#define DPAA_MEMPOOL_WARN(fmt, args...) \
+	DPAA_MEMPOOL_LOG(WARNING, fmt, ## args)
+
 #endif /* _DPAA_LOGS_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 19/41] mempool/dpaa: add support for NXP DPAA Mempool
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (17 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 18/41] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 20/41] drivers: enable compilation of DPAA Mempool driver Shreyansh Jain
                         ` (24 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This mempool driver works with the DPAA BMan hardware block. This block
manages data buffers in memory and provides an efficient interface to
other hardware and software components for buffer requests.

This patch adds support for BMan. Compilation will be enabled in
subsequent patches.
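
As an illustration, the driver plugs into the mempool framework through an
ops table registered with the mempool library. The callback names below are
hypothetical placeholders, not necessarily the ones used in this patch:

    static const struct rte_mempool_ops dpaa_mpool_ops = {
        .name = "dpaa",
        .alloc = dpaa_mbuf_create_pool,   /* create a BMan pool */
        .free = dpaa_mbuf_free_pool,      /* release the BMan pool */
        .enqueue = dpaa_mbuf_free_bulk,   /* release buffers to BMan */
        .dequeue = dpaa_mbuf_alloc_bulk,  /* acquire buffers from BMan */
        .get_count = dpaa_mbuf_get_count, /* count free buffers in pool */
    };

    MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);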

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/mempool/dpaa/Makefile                     |  64 +++++
 drivers/mempool/dpaa/dpaa_mempool.c               | 285 ++++++++++++++++++++++
 drivers/mempool/dpaa/dpaa_mempool.h               |  77 ++++++
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |   6 +
 4 files changed, 432 insertions(+)
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map

diff --git a/drivers/mempool/dpaa/Makefile b/drivers/mempool/dpaa/Makefile
new file mode 100644
index 0000000..4b3be6b
--- /dev/null
+++ b/drivers/mempool/dpaa/Makefile
@@ -0,0 +1,64 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+CFLAGS += -D _GNU_SOURCE
+
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
+CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
+
+# versioning export map
+EXPORT_MAP := rte_mempool_dpaa_version.map
+
+# Library version
+LIBABIVER := 1
+
+# all sources are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa_mempool.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
new file mode 100644
index 0000000..c76c3bc
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -0,0 +1,285 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <dpaa_mempool.h>
+
+struct dpaa_bp_info rte_dpaa_bpid_info[DPAA_MAX_BPOOLS];
+
+static int
+dpaa_mbuf_create_pool(struct rte_mempool *mp)
+{
+	struct bman_pool *bp;
+	struct bm_buffer bufs[8];
+	struct dpaa_bp_info *bp_info;
+	uint8_t bpid;
+	int num_bufs = 0, ret = 0;
+	struct bman_pool_params params = {
+		.flags = BMAN_POOL_FLAG_DYNAMIC_BPID
+	};
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	bp = bman_new_pool(&params);
+	if (!bp) {
+		DPAA_MEMPOOL_ERR("bman_new_pool() failed");
+		return -ENODEV;
+	}
+	bpid = bman_get_params(bp)->bpid;
+
+	/* Drain the pool of anything already in it. */
+	do {
+		/* Acquire is all-or-nothing, so we drain in 8s,
+		 * then in 1s for the remainder.
+		 */
+		if (ret != 1)
+			ret = bman_acquire(bp, bufs, 8, 0);
+		if (ret < 8)
+			ret = bman_acquire(bp, bufs, 1, 0);
+		if (ret > 0)
+			num_bufs += ret;
+	} while (ret > 0);
+	if (num_bufs)
+		DPAA_MEMPOOL_WARN("drained %u bufs from BPID %d",
+				  num_bufs, bpid);
+
+	rte_dpaa_bpid_info[bpid].mp = mp;
+	rte_dpaa_bpid_info[bpid].bpid = bpid;
+	rte_dpaa_bpid_info[bpid].size = mp->elt_size;
+	rte_dpaa_bpid_info[bpid].bp = bp;
+	rte_dpaa_bpid_info[bpid].meta_data_size =
+		sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp);
+	rte_dpaa_bpid_info[bpid].dpaa_ops_index = mp->ops_index;
+
+	bp_info = rte_malloc(NULL,
+			     sizeof(struct dpaa_bp_info),
+			     RTE_CACHE_LINE_SIZE);
+	if (!bp_info) {
+		DPAA_MEMPOOL_WARN("Memory allocation failed for bp_info");
+		bman_free_pool(bp);
+		return -ENOMEM;
+	}
+
+	rte_memcpy(bp_info, (void *)&rte_dpaa_bpid_info[bpid],
+		   sizeof(struct dpaa_bp_info));
+	mp->pool_data = (void *)bp_info;
+
+	DPAA_MEMPOOL_INFO("BMAN pool created for bpid =%d", bpid);
+	return 0;
+}
+
+static void
+dpaa_mbuf_free_pool(struct rte_mempool *mp)
+{
+	struct dpaa_bp_info *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	if (bp_info) {
+		bman_free_pool(bp_info->bp);
+		DPAA_MEMPOOL_INFO("BMAN pool freed for bpid =%d",
+				  bp_info->bpid);
+		rte_free(mp->pool_data);
+		mp->pool_data = NULL;
+	}
+}
+
+static void
+dpaa_buf_free(struct dpaa_bp_info *bp_info, uint64_t addr)
+{
+	struct bm_buffer buf;
+	int ret;
+
+	DPAA_MEMPOOL_DEBUG("Free 0x%lx to bpid: %d", addr, bp_info->bpid);
+
+	bm_buffer_set64(&buf, addr);
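+	/* Note: bman_release() can fail transiently, e.g. while the
+	 * release command ring is full; back off briefly and retry
+	 * until the buffer is accepted.
+	 */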
+retry:
+	ret = bman_release(bp_info->bp, &buf, 1, 0);
+	if (ret) {
+		DPAA_MEMPOOL_DEBUG("BMAN busy. Retrying...");
+		cpu_spin(CPU_SPIN_BACKOFF_CYCLES);
+		goto retry;
+	}
+}
+
+static int
+dpaa_mbuf_free_bulk(struct rte_mempool *pool,
+		    void *const *obj_table,
+		    unsigned int n)
+{
+	struct dpaa_bp_info *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+	int ret;
+	unsigned int i = 0;
+
+	DPAA_MEMPOOL_DEBUG(" Request to free %d buffers in bpid = %d",
+			   n, bp_info->bpid);
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
+				 ret);
+		return 0;
+	}
+
+	while (i < n) {
+		dpaa_buf_free(bp_info,
+			      (uint64_t)rte_mempool_virt2phy(pool,
+			      obj_table[i]) + bp_info->meta_data_size);
+		i = i + 1;
+	}
+
+	DPAA_MEMPOOL_DEBUG(" freed %d buffers in bpid =%d", n, bp_info->bpid);
+
+	return 0;
+}
+
+static int
+dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
+		     void **obj_table,
+		     unsigned int count)
+{
+	struct rte_mbuf **m = (struct rte_mbuf **)obj_table;
+	struct bm_buffer bufs[DPAA_MBUF_MAX_ACQ_REL];
+	struct dpaa_bp_info *bp_info;
+	void *bufaddr;
+	int i, ret;
+	unsigned int n = 0;
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+
+	DPAA_MEMPOOL_DEBUG(" Request to alloc %d buffers in bpid = %d",
+			   count, bp_info->bpid);
+
+	if (unlikely(count >= (RTE_MEMPOOL_CACHE_MAX_SIZE * 2))) {
+		DPAA_MEMPOOL_ERR("Unable to allocate requested (%u) buffers",
+				 count);
+		return -1;
+	}
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
+				 ret);
+		return -1;
+	}
+
+	while (n < count) {
+		/* Acquire is all-or-nothing, so request in chunks of
+		 * DPAA_MBUF_MAX_ACQ_REL (8), then the remainder.
+		 */
+		if ((count - n) > DPAA_MBUF_MAX_ACQ_REL) {
+			ret = bman_acquire(bp_info->bp, bufs,
+					   DPAA_MBUF_MAX_ACQ_REL, 0);
+		} else {
+			ret = bman_acquire(bp_info->bp, bufs, count - n, 0);
+		}
+		/* If fewer buffers than requested are available in the
+		 * pool, bman_acquire() fails (acquire is all-or-nothing).
+		 */
+		if (ret <= 0) {
+			DPAA_MEMPOOL_DEBUG("Buffer acquire failed with"
+					   " err code: %d", ret);
+			/* The API expects exactly the requested number of
+			 * buffers, so release everything acquired so far.
+			 */
+			dpaa_mbuf_free_bulk(pool, obj_table, n);
+			return -ENOBUFS;
+		}
+		/* assigning mbuf from the acquired objects */
+		for (i = 0; (i < ret) && bufs[i].addr; i++) {
+			/* TODO (errata): observed that bufs may contain NULL
+			 * entries, i.e. the first buffer is valid while the
+			 * remaining buffers may be NULL.
+			 */
+			bufaddr = (void *)rte_dpaa_mem_ptov(bufs[i].addr);
+			m[n] = (struct rte_mbuf *)((char *)bufaddr
+						- bp_info->meta_data_size);
+			DPAA_MEMPOOL_DEBUG("Acquired %p address %p from BMAN",
+					   (void *)bufaddr, (void *)m[n]);
+			n++;
+		}
+	}
+
+	DPAA_MEMPOOL_DEBUG(" allocated %d buffers from bpid =%d",
+			   n, bp_info->bpid);
+	return 0;
+}
+
+static unsigned int
+dpaa_mbuf_get_count(const struct rte_mempool *mp)
+{
+	struct dpaa_bp_info *bp_info;
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	if (!mp || !mp->pool_data) {
+		DPAA_MEMPOOL_ERR("Invalid mempool provided\n");
+		return 0;
+	}
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	return bman_query_free_buffers(bp_info->bp);
+}
+
+struct rte_mempool_ops dpaa_mpool_ops = {
+	.name = "dpaa",
+	.alloc = dpaa_mbuf_create_pool,
+	.free = dpaa_mbuf_free_pool,
+	.enqueue = dpaa_mbuf_free_bulk,
+	.dequeue = dpaa_mbuf_alloc_bulk,
+	.get_count = dpaa_mbuf_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
diff --git a/drivers/mempool/dpaa/dpaa_mempool.h b/drivers/mempool/dpaa/dpaa_mempool.h
new file mode 100644
index 0000000..de33c0c
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.h
@@ -0,0 +1,77 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_MEMPOOL_H__
+#define __DPAA_MEMPOOL_H__
+
+/* System headers */
+#include <stdio.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <unistd.h>
+
+#include <rte_mempool.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <fsl_usd.h>
+#include <fsl_bman.h>
+
+#define CPU_SPIN_BACKOFF_CYCLES               512
+
+/* total number of bpools on SoC */
+#define DPAA_MAX_BPOOLS	256
+
+/* Maximum release/acquire from BMAN */
+#define DPAA_MBUF_MAX_ACQ_REL  8
+
+struct dpaa_bp_info {
+	struct rte_mempool *mp;
+	struct bman_pool *bp;
+	uint32_t bpid;
+	uint32_t size;
+	uint32_t meta_data_size;
+	int32_t dpaa_ops_index;
+};
+
+#define DPAA_MEMPOOL_TO_POOL_INFO(__mp) \
+	((struct dpaa_bp_info *)__mp->pool_data)
+
+#define DPAA_MEMPOOL_TO_BPID(__mp) \
+	(((struct dpaa_bp_info *)__mp->pool_data)->bpid)
+
+extern struct dpaa_bp_info rte_dpaa_bpid_info[DPAA_MAX_BPOOLS];
+
+#define DPAA_BPID_TO_POOL_INFO(__bpid) (&rte_dpaa_bpid_info[__bpid])
+
+#endif
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
new file mode 100644
index 0000000..93ea216
--- /dev/null
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -0,0 +1,6 @@
+DPDK_17.11 {
+	global:
+
+	rte_dpaa_bpid_info;
+
+};
-- 
2.9.3


* [PATCH v4 20/41] drivers: enable compilation of DPAA Mempool driver
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (18 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 19/41] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 21/41] maintainers: claim ownership " Shreyansh Jain
                         ` (23 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch adds the configuration necessary for compiling the DPAA
Mempool driver into the DPAA-specific config file.
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=dpaa is also set so that
applications use the DPAA mempool by default.
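
With this default in place, a plain pool creation transparently lands
on the BMan-backed ops; a short sketch (pool name and sizes are
illustrative):

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* rte_pktmbuf_pool_create() picks its mempool ops from
     * RTE_MBUF_DEFAULT_MEMPOOL_OPS, i.e. "dpaa" with this defconfig.
     */
    struct rte_mempool *mp = rte_pktmbuf_pool_create("rx_pool", 2048,
            256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());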

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       | 1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 5 +++++
 drivers/mempool/Makefile                 | 2 ++
 3 files changed, 8 insertions(+)

diff --git a/config/common_base b/config/common_base
index 2bb2269..e4a9d6d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -305,6 +305,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 110042c..d91249f 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -43,3 +43,8 @@ CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
+
+# NXP DPAA Mempool
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index efd55f2..bfc5f00 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -32,6 +32,8 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mempool librte_ring
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
-- 
2.9.3


* [PATCH v4 21/41] maintainers: claim ownership of DPAA Mempool driver
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (19 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 20/41] drivers: enable compilation of DPAA Mempool driver Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 22/41] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
                         ` (22 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 10646a4..74b7aba 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -409,6 +409,7 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: drivers/mempool/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
-- 
2.9.3


* [PATCH v4 22/41] bus/dpaa: add DPAA PMD logging macros
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (20 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 21/41] maintainers: claim ownership " Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 23/41] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
                         ` (21 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c      |  5 +++++
 drivers/bus/dpaa/rte_dpaa_logs.h | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index dc2b3ad..7ae5bfa 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -71,6 +71,7 @@
 
 int dpaa_logtype_bus;
 int dpaa_logtype_mempool;
+int dpaa_logtype_pmd;
 
 struct rte_dpaa_bus rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
@@ -457,4 +458,8 @@ dpaa_init_log(void)
 	dpaa_logtype_mempool = rte_log_register("mempool.dpaa");
 	if (dpaa_logtype_mempool >= 0)
 		rte_log_set_level(dpaa_logtype_mempool, RTE_LOG_NOTICE);
+
+	dpaa_logtype_pmd = rte_log_register("pmd.dpaa");
+	if (dpaa_logtype_pmd >= 0)
+		rte_log_set_level(dpaa_logtype_pmd, RTE_LOG_NOTICE);
 }
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index 253962f..8442e0e 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -37,6 +37,7 @@
 
 extern int dpaa_logtype_bus;
 extern int dpaa_logtype_mempool;
+extern int dpaa_logtype_pmd;
 
 #define DPAA_BUS_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -91,4 +92,39 @@ extern int dpaa_logtype_mempool;
 #define DPAA_MEMPOOL_WARN(fmt, args...) \
 	DPAA_MEMPOOL_LOG(WARNING, fmt, ## args)
 
+/* PMD related logs */
+
+#define DPAA_PMD_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_pmd, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() DPAA_PMD_LOG(DEBUG, " >>")
+
+/* DEBUG is conditional on the compile-time configuration */
+#ifdef RTE_LIBRTE_DPAA_PMD_DEBUG
+#define DPAA_PMD_DEBUG(fmt, args...) \
+	DPAA_PMD_LOG(DEBUG, fmt, ## args)
+
+#else /* RTE_LIBRTE_DPAA_PMD_DEBUG */
+#define DPAA_PMD_DEBUG(fmt, args...) do { } while (0)
+#endif /* RTE_LIBRTE_DPAA_PMD_DEBUG */
+
+/* WARNING, ERR and INFO are unconditional */
+#define DPAA_PMD_ERR(fmt, args...) \
+	DPAA_PMD_LOG(ERR, fmt, ## args)
+
+#define DPAA_PMD_INFO(fmt, args...) \
+	DPAA_PMD_LOG(INFO, fmt, ## args)
+
+#define DPAA_PMD_WARN(fmt, args...) \
+	DPAA_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define DPAA_RX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+#define DPAA_TX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+#define DPAA_DP_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+
 #endif /* _DPAA_LOGS_H_ */
-- 
2.9.3


* [PATCH v4 23/41] net/dpaa: add NXP DPAA PMD driver skeleton
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (21 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 22/41] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 24/41] config: enable NXP DPAA PMD compilation Shreyansh Jain
                         ` (20 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

A skeleton driver which is invoked after the bus device scan. It does
not yet identify or configure the device.
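
For context, a minimal application sketch (assuming the 17.11-era
ethdev API): rte_eal_init() scans the DPAA bus and probes this PMD,
after which every matched FSL_DPAA_ETH device shows up as an ethdev
port:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int
    main(int argc, char **argv)
    {
        /* Triggers bus scan and rte_dpaa_probe() for each device */
        if (rte_eal_init(argc, argv) < 0)
            return -1;
        printf("%d port(s) probed\n", rte_eth_dev_count());
        return 0;
    }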

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                               |   1 +
 drivers/net/dpaa/Makefile                 |  63 ++++++++
 drivers/net/dpaa/dpaa_ethdev.c            | 256 ++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h            | 127 +++++++++++++++
 drivers/net/dpaa/rte_pmd_dpaa_version.map |   4 +
 5 files changed, 451 insertions(+)
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 74b7aba..48afbfc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -410,6 +410,7 @@ M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
 F: drivers/mempool/dpaa/
+F: drivers/net/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
new file mode 100644
index 0000000..7ecd5be
--- /dev/null
+++ b/drivers/net/dpaa/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_SDK_DPAA=$(RTE_SDK)/drivers/net/dpaa
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+CFLAGS += -I$(RTE_SDK_DPAA)/
+CFLAGS += -I$(RTE_SDK_DPAA)/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
+
+EXPORT_MAP := rte_pmd_dpaa_version.map
+
+LIBABIVER := 1
+
+# Interfaces with DPDK
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
new file mode 100644
index 0000000..4543dfc
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -0,0 +1,256 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <dpaa_ethdev.h>
+
+/* Keep track of whether QMAN and BMAN have been globally initialized */
+static int is_global_init;
+
+static int
+dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	/* Change tx callback to the real one */
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = NULL;
+}
+
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static struct eth_dev_ops dpaa_devops = {
+	.dev_configure		  = dpaa_eth_dev_configure,
+	.dev_start		  = dpaa_eth_dev_start,
+	.dev_stop		  = dpaa_eth_dev_stop,
+	.dev_close		  = dpaa_eth_dev_close,
+};
+
+/* Initialise a network interface */
+static int
+dpaa_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int dev_id;
+	struct rte_dpaa_device *dpaa_device;
+	struct dpaa_if *dpaa_intf;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
+	dev_id = dpaa_device->id.dev_id;
+	dpaa_intf = eth_dev->data->dev_private;
+
+	dpaa_intf->name = dpaa_device->name;
+
+	dpaa_intf->ifid = dev_id;
+
+	eth_dev->dev_ops = &dpaa_devops;
+
+	return 0;
+}
+
+static int
+dpaa_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (!dpaa_intf) {
+		DPAA_PMD_WARN("Already closed or not started");
+		return -1;
+	}
+
+	dpaa_eth_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static int
+rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv,
+	       struct rte_dpaa_device *dpaa_dev)
+{
+	int diag;
+	int ret;
+	struct rte_eth_dev *eth_dev;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* In case of a secondary process, the device is already configured
+	 * and no further action is required, except portal initialization
+	 * and verifying that the secondary process attaches to the port name.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		eth_dev = rte_eth_dev_attach_secondary(dpaa_dev->name);
+		if (!eth_dev)
+			return -ENOMEM;
+		return 0;
+	}
+
+	if (!is_global_init) {
+		/* One time load of Qman/Bman drivers */
+		ret = qman_global_init();
+		if (ret) {
+			DPAA_PMD_ERR("QMAN initialization failed: %d",
+				     ret);
+			return ret;
+		}
+		ret = bman_global_init();
+		if (ret) {
+			DPAA_PMD_ERR("BMAN initialization failed: %d",
+				     ret);
+			return ret;
+		}
+
+		is_global_init = 1;
+	}
+
+	ret = rte_dpaa_portal_init((void *)1);
+	if (ret) {
+		DPAA_PMD_ERR("Unable to initialize portal");
+		return ret;
+	}
+
+	eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
+	if (eth_dev == NULL)
+		return -ENOMEM;
+
+	eth_dev->data->dev_private = rte_zmalloc(
+					"ethdev private structure",
+					sizeof(struct dpaa_if),
+					RTE_CACHE_LINE_SIZE);
+	if (!eth_dev->data->dev_private) {
+		DPAA_PMD_ERR("Cannot allocate memzone for port data");
+		rte_eth_dev_release_port(eth_dev);
+		return -ENOMEM;
+	}
+
+	eth_dev->device = &dpaa_dev->device;
+	eth_dev->device->driver = &dpaa_drv->driver;
+	dpaa_dev->eth_dev = eth_dev;
+
+	/* Invoke PMD device initialization function */
+	diag = dpaa_dev_init(eth_dev);
+	if (diag == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+	return diag;
+}
+
+static int
+rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
+{
+	struct rte_eth_dev *eth_dev;
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev = dpaa_dev->eth_dev;
+	dpaa_dev_uninit(eth_dev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+
+	return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_pmd = {
+	.drv_type = FSL_DPAA_ETH,
+	.probe = rte_dpaa_probe,
+	.remove = rte_dpaa_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
new file mode 100644
index 0000000..2f25acb
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -0,0 +1,127 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2014-2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_ETHDEV_H__
+#define __DPAA_ETHDEV_H__
+
+/* System headers */
+#include <stdbool.h>
+#include <rte_ethdev.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
+#define DPAA_MBUF_HW_ANNOTATION		64
+#define DPAA_FD_PTA_SIZE		64
+
+#if (DPAA_MBUF_HW_ANNOTATION + DPAA_FD_PTA_SIZE) > RTE_PKTMBUF_HEADROOM
+#error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM"
+#endif
+
+/* we will re-use the HEADROOM for annotation in RX */
+#define DPAA_HW_BUF_RESERVE	0
+#define DPAA_PACKET_LAYOUT_ALIGN	64
+
+/* Alignment to use for cpu-local structs to avoid coherency problems. */
+#define MAX_CACHELINE			64
+
+#define DPAA_MIN_RX_BUF_SIZE 512
+#define DPAA_MAX_RX_PKT_LEN  10240
+
+/* RX queue tail-drop threshold,
+ * currently sized for 32 KB of packets.
+ */
+#define CONG_THRESHOLD_RX_Q  (32 * 1024)
+
+/* Max MAC filters for memac (8), including the primary MAC address */
+#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
+
+/* Maximum number of slots available in the TX ring */
+#define MAX_TX_RING_SLOTS	8
+
+/* PCD frame queues */
+#define DPAA_PCD_FQID_START		0x400
+#define DPAA_PCD_FQID_MULTIPLIER	0x100
+#define DPAA_DEFAULT_NUM_PCD_QUEUES	1
+
+#define DPAA_IF_TX_PRIORITY		3
+#define DPAA_IF_RX_PRIORITY		4
+#define DPAA_IF_DEBUG_PRIORITY		7
+
+#define DPAA_IF_RX_ANNOTATION_STASH	1
+#define DPAA_IF_RX_DATA_STASH		1
+#define DPAA_IF_RX_CONTEXT_STASH		0
+
+/* Each "debug" FQ is represented by one of these */
+#define DPAA_DEBUG_FQ_RX_ERROR   0
+#define DPAA_DEBUG_FQ_TX_ERROR   1
+
+#define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
+		PKT_TX_IP_CKSUM |                \
+		PKT_TX_TCP_CKSUM |               \
+		PKT_TX_UDP_CKSUM)
+
+/* DPAA Frame descriptor macros */
+
+#define DPAA_FD_CMD_FCO			0x80000000
+/**< Frame queue Context Override */
+#define DPAA_FD_CMD_RPD			0x40000000
+/**< Read Prepended Data */
+#define DPAA_FD_CMD_UPD			0x20000000
+/**< Update Prepended Data */
+#define DPAA_FD_CMD_DTC			0x10000000
+/**< Do IP/TCP/UDP Checksum */
+#define DPAA_FD_CMD_DCL4C		0x10000000
+/**< Didn't calculate L4 Checksum */
+#define DPAA_FD_CMD_CFQ			0x00ffffff
+/**< Confirmation Frame Queue */
+
+/* Each network interface is represented by one of these */
+struct dpaa_if {
+	int valid;
+	char *name;
+	const struct fm_eth_port_cfg *cfg;
+	struct qman_fq *rx_queues;
+	struct qman_fq *tx_queues;
+	struct qman_fq debug_queues[2];
+	uint16_t nb_rx_queues;
+	uint16_t nb_tx_queues;
+	uint32_t ifid;
+	struct fman_if *fif;
+	struct dpaa_bp_info *bp_info;
+	struct rte_eth_fc_conf *fc_conf;
+};
+
+#endif
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
new file mode 100644
index 0000000..a70bd19
--- /dev/null
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -0,0 +1,4 @@
+DPDK_17.11 {
+
+	local: *;
+};
-- 
2.9.3


* [PATCH v4 24/41] config: enable NXP DPAA PMD compilation
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (22 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 23/41] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
                         ` (19 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       |  1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 12 ++++++++++++
 drivers/net/Makefile                     |  2 ++
 mk/rte.app.mk                            |  5 +++++
 4 files changed, 20 insertions(+)

diff --git a/config/common_base b/config/common_base
index e4a9d6d..a780284 100644
--- a/config/common_base
+++ b/config/common_base
@@ -306,6 +306,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
+CONFIG_RTE_LIBRTE_DPAA_PMD=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index d91249f..a349cec 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -38,6 +38,14 @@ CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
 CONFIG_RTE_LIBRTE_VHOST_NUMA=n
 CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
 
+#
+# Compile Environment Abstraction Layer
+#
+CONFIG_RTE_MAX_LCORE=4
+CONFIG_RTE_MAX_NUMA_NODES=1
+CONFIG_RTE_CACHE_LINE_SIZE=64
+CONFIG_RTE_PKTMBUF_HEADROOM=128
+
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
@@ -48,3 +56,7 @@ CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL_DEBUG=n
 CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
+
+# Compile software NXP DPAA PMD
+CONFIG_RTE_LIBRTE_DPAA_PMD=y
+CONFIG_RTE_LIBRTE_DPAA_PMD_DEBUG=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index d33c959..2bd42f8 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -51,6 +51,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DEPDIRS-bonding = $(core-libs) librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
 DEPDIRS-cxgbe = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += e1000
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index c25fdd9..9c5a171 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -116,6 +116,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD)      += -lrte_pmd_cxgbe
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_pmd_dpaa
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_pmd_dpaa2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_E1000_PMD)      += -lrte_pmd_e1000
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ENA_PMD)        += -lrte_pmd_ena
@@ -182,6 +183,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_bus_fslmc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_mempool_dpaa2
 endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+endif
+
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
 
 _LDLIBS-y += --no-whole-archive
-- 
2.9.3


* [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (23 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 24/41] config: enable NXP DPAA PMD compilation Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-18 14:55         ` Ferruh Yigit
  2017-09-18 14:55         ` Ferruh Yigit
  2017-09-09 11:21       ` [PATCH v4 26/41] net/dpaa: add support for MTU update Shreyansh Jain
                         ` (18 subsequent siblings)
  43 siblings, 2 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/Makefile      |   4 +
 drivers/net/dpaa/dpaa_ethdev.c | 290 +++++++++++++++++++++++++++++++-
 drivers/net/dpaa/dpaa_rxtx.c   | 370 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h   |  61 +++++++
 mk/rte.app.mk                  |   1 +
 5 files changed, 723 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h

diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index 7ecd5be..9b8debc 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -43,11 +43,13 @@ else
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 endif
+CFLAGS += -Wno-pointer-arith
 
 CFLAGS += -I$(RTE_SDK_DPAA)/
 CFLAGS += -I$(RTE_SDK_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
 
@@ -57,7 +59,9 @@ LIBABIVER := 1
 
 # Interfaces with DPDK
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
 
 LDLIBS += -lrte_bus_dpaa
+LDLIBS += -lrte_mempool_dpaa
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4543dfc..ab19b2e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -62,8 +62,15 @@
 
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
+#include <dpaa_mempool.h>
 
 #include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <fsl_fman.h>
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -78,20 +85,104 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
 	PMD_INIT_FUNC_TRACE();
 
 	/* Change tx callback to the real one */
-	dev->tx_pkt_burst = NULL;
+	dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	fman_if_enable_rx(dpaa_intf->fif);
 
 	return 0;
 }
 
 static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = NULL;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_disable_rx(dpaa_intf->fif);
+	dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
 }
 
-static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+}
+
+static
+int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+			    unsigned int socket_id __rte_unused,
+			    const struct rte_eth_rxconf *rx_conf __rte_unused,
+			    struct rte_mempool *mp)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	DPAA_PMD_INFO("Rx queue setup for queue index: %d", queue_idx);
+
+	if (!dpaa_intf->bp_info || dpaa_intf->bp_info->mp != mp) {
+		struct fman_if_ic_params icp;
+		uint32_t fd_offset;
+		uint32_t bp_size;
+
+		if (!mp->pool_data) {
+			DPAA_PMD_ERR("Not an offloaded buffer pool!");
+			return -1;
+		}
+		dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+		memset(&icp, 0, sizeof(icp));
+		/* set ICEOF to the default value, which is 0 */
+		icp.iciof = DEFAULT_ICIOF;
+		icp.iceof = DEFAULT_RX_ICEOF;
+		icp.icsz = DEFAULT_ICSZ;
+		fman_if_set_ic_params(dpaa_intf->fif, &icp);
+
+		fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
+		fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+
+		/* Buffer pool size should be equal to the dataroom size */
+		bp_size = rte_pktmbuf_data_room_size(mp);
+		fman_if_set_bp(dpaa_intf->fif, mp->size,
+			       dpaa_intf->bp_info->bpid, bp_size);
+		dpaa_intf->valid = 1;
+		DPAA_PMD_INFO("if =%s - fd_offset = %d offset = %d",
+			    dpaa_intf->name, fd_offset,
+			fman_if_get_fdoff(dpaa_intf->fif));
+	}
+	dev->data->rx_queues[queue_idx] = &dpaa_intf->rx_queues[queue_idx];
+
+	return 0;
+}
+
+static
+void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static
+int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+		unsigned int socket_id __rte_unused,
+		const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	DPAA_PMD_INFO("Tx queue setup for queue index: %d", queue_idx);
+	dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+	return 0;
+}
+
+static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 }
@@ -101,15 +192,102 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+
+	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
+	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
+	.rx_queue_release	  = dpaa_eth_rx_queue_release,
+	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 };
 
+/* Initialise an Rx FQ */
+static int dpaa_rx_queue_init(struct qman_fq *fq,
+			      uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		DPAA_PMD_ERR("reserve rx fqid %d failed with ret: %d",
+			     fqid, ret);
+		return -EINVAL;
+	}
+
+	DPAA_PMD_DEBUG("creating rx fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create rx fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTA;
+
+	opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
+			   QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_a.stashing.exclusive = 0;
+	opts.fqd.context_a.stashing.annotation_cl = DPAA_IF_RX_ANNOTATION_STASH;
+	opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+	opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+
+	/*Enable tail drop */
+	opts.we_mask = opts.we_mask | QM_INITFQ_WE_TDTHRESH;
+	opts.fqd.fq_ctrl = opts.fqd.fq_ctrl | QM_FQCTRL_TDE;
+	qm_fqd_taildrop_set(&opts.fqd.td, CONG_THRESHOLD_RX_Q, 1);
+
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init rx fqid %d failed with ret: %d", fqid, ret);
+	return ret;
+}
+
+/* Initialise a Tx FQ */
+static int dpaa_tx_queue_init(struct qman_fq *fq,
+			      struct fman_if *fman_intf)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
+			     QMAN_FQ_FLAG_TO_DCPORTAL, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create tx fq failed with ret: %d", ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTB | QM_INITFQ_WE_CONTEXTA;
+	opts.fqd.dest.channel = fman_intf->tx_channel_id;
+	opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_b = 0;
+	/* no tx-confirmation */
+	opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+	opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+	DPAA_PMD_DEBUG("init tx fq %p, fqid %d", fq, fq->fqid);
+	ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init tx fqid %d failed %d", fq->fqid, ret);
+	return ret;
+}
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
 {
+	int num_cores, num_rx_fqs, fqid;
+	int loop, ret = 0;
 	int dev_id;
 	struct rte_dpaa_device *dpaa_device;
 	struct dpaa_if *dpaa_intf;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+	struct fman_if_bpool *bp, *tmp_bp;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -120,12 +298,104 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
 	dev_id = dpaa_device->id.dev_id;
 	dpaa_intf = eth_dev->data->dev_private;
+	cfg = &dpaa_netcfg->port_cfg[dev_id];
+	fman_intf = cfg->fman_if;
 
 	dpaa_intf->name = dpaa_device->name;
 
+	/* save fman_if & cfg in the interface structure */
+	dpaa_intf->fif = fman_intf;
 	dpaa_intf->ifid = dev_id;
+	dpaa_intf->cfg = cfg;
+
+	/* Initialize Rx FQ's */
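+	/* DPAA_NUM_RX_QUEUES is a run-time override; e.g. set
+	 * "export DPAA_NUM_RX_QUEUES=4" in the environment before
+	 * starting the application.
+	 */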
+	if (getenv("DPAA_NUM_RX_QUEUES"))
+		num_rx_fqs = atoi(getenv("DPAA_NUM_RX_QUEUES"));
+	else
+		num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
 
+	/* Each device cannot have more than DPAA_PCD_FQID_MULTIPLIER RX
+	 * queues.
+	 */
+	if (num_rx_fqs <= 0 || num_rx_fqs > DPAA_PCD_FQID_MULTIPLIER) {
+		DPAA_PMD_ERR("Invalid number of RX queues\n");
+		return -EINVAL;
+	}
+
+	dpaa_intf->rx_queues = rte_zmalloc(NULL,
+		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+	for (loop = 0; loop < num_rx_fqs; loop++) {
+		fqid = DPAA_PCD_FQID_START + dpaa_intf->ifid *
+			DPAA_PCD_FQID_MULTIPLIER + loop;
+		ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop], fqid);
+		if (ret)
+			return ret;
+		dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_rx_queues = num_rx_fqs;
+
+	/* Initialise Tx FQs. Have as many Tx FQs as there are cores */
+	num_cores = rte_lcore_count();
+	dpaa_intf->tx_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+		num_cores, MAX_CACHELINE);
+	if (!dpaa_intf->tx_queues)
+		return -ENOMEM;
+
+	for (loop = 0; loop < num_cores; loop++) {
+		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
+					 fman_intf);
+		if (ret)
+			return ret;
+		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_tx_queues = num_cores;
+
+	DPAA_PMD_DEBUG("All frame queues created");
+
+	/* reset bpool list, initialize bpool dynamically */
+	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+
+	/* Populate ethdev structure */
 	eth_dev->dev_ops = &dpaa_devops;
+	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+	eth_dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
+		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
+						"store MAC addresses",
+				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
+		return -ENOMEM;
+	}
+
+	/* copy the primary mac address */
+	memcpy(eth_dev->data->mac_addrs[0].addr_bytes,
+		fman_intf->mac_addr.addr_bytes,
+		ETHER_ADDR_LEN);
+
+	RTE_LOG(INFO, PMD, "net: dpaa: %s: %02x:%02x:%02x:%02x:%02x:%02x\n",
+		dpaa_device->name,
+		fman_intf->mac_addr.addr_bytes[0],
+		fman_intf->mac_addr.addr_bytes[1],
+		fman_intf->mac_addr.addr_bytes[2],
+		fman_intf->mac_addr.addr_bytes[3],
+		fman_intf->mac_addr.addr_bytes[4],
+		fman_intf->mac_addr.addr_bytes[5]);
+
+	/* Disable RX mode */
+	fman_if_discard_rx_errors(fman_intf);
+	fman_if_disable_rx(fman_intf);
+	/* Disable promiscuous mode */
+	fman_if_promiscuous_disable(fman_intf);
+	/* Disable multicast */
+	fman_if_reset_mcast_filter_table(fman_intf);
+	/* Reset interface statistics */
+	fman_if_stats_reset(fman_intf);
 
 	return 0;
 }
@@ -147,6 +417,20 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
 
 	dpaa_eth_dev_close(dev);
 
+	/* release configuration memory */
+	if (dpaa_intf->fc_conf)
+		rte_free(dpaa_intf->fc_conf);
+
+	rte_free(dpaa_intf->rx_queues);
+	dpaa_intf->rx_queues = NULL;
+
+	rte_free(dpaa_intf->tx_queues);
+	dpaa_intf->tx_queues = NULL;
+
+	/* free memory for storing MAC addresses */
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
 	dev->dev_ops = NULL;
 	dev->rx_pkt_burst = NULL;
 	dev->tx_pkt_burst = NULL;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
new file mode 100644
index 0000000..80adf9c
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -0,0 +1,370 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <limits.h>
+#include <sched.h>
+#include <pthread.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+
+#include "dpaa_ethdev.h"
+#include "dpaa_rxtx.h"
+#include <rte_dpaa_bus.h>
+#include <dpaa_mempool.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
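+/* Pack a contiguous-buffer mbuf into a DPAA frame descriptor: 'opaque'
+ * carries the format (contiguous), data offset and length; 'addr' is the
+ * buffer's physical address; 'bpid' names the BMan pool that may reclaim
+ * the buffer (0xff means BMan must not release it).
+ */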
+#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
+	do { \
+		(_fd)->cmd = 0; \
+		(_fd)->opaque_addr = 0; \
+		(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
+		(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
+		(_fd)->opaque |= (_mbuf)->pkt_len; \
+		(_fd)->addr = (_mbuf)->buf_physaddr; \
+		(_fd)->bpid = _bpid; \
+	} while (0)
+
+static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
+							uint32_t ifid)
+{
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *mbuf;
+	void *ptr;
+	uint16_t offset =
+		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
+	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
+
+	DPAA_RX_LOG(DEBUG, " FD--->MBUF");
+
+	/* Ignoring case when format != qm_fd_contig */
+	ptr = rte_dpaa_mem_ptov(fd->addr);
+	/* Ignoring the case when ptr would be NULL. That is only possible
+	 * in case of a corrupted packet.
+	 */
+
+	mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+	/* Prefetch the Parse results and packet data to L1 */
+	rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
+	rte_prefetch0((void *)((uint8_t *)ptr + offset));
+
+	mbuf->data_off = offset;
+	mbuf->data_len = length;
+	mbuf->pkt_len = length;
+
+	mbuf->port = ifid;
+	mbuf->nb_segs = 1;
+	mbuf->ol_flags = 0;
+	mbuf->next = NULL;
+	rte_mbuf_refcnt_set(mbuf, 1);
+
+	return mbuf;
+}
+
+uint16_t dpaa_eth_queue_rx(void *q,
+			   struct rte_mbuf **bufs,
+			   uint16_t nb_bufs)
+{
+	struct qman_fq *fq = q;
+	struct qm_dqrr_entry *dq;
+	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_PMD_ERR("Failure in affining portal");
+		return 0;
+	}
+
+	ret = qman_set_vdq(fq, (nb_bufs > DPAA_MAX_DEQUEUE_NUM_FRAMES) ?
+				DPAA_MAX_DEQUEUE_NUM_FRAMES : nb_bufs);
+	if (ret)
+		return 0;
+
+	do {
+		dq = qman_dequeue(fq);
+		if (!dq)
+			continue;
+		bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
+		qman_dqrr_consume(fq, dq);
+	} while (fq->flags & QMAN_FQ_STATE_VDQCR);
+
+	return num_rx;
+}
+
+static void *dpaa_get_pktbuf(struct dpaa_bp_info *bp_info)
+{
+	int ret;
+	uint64_t buf = 0;
+	struct bm_buffer bufs;
+
+	ret = bman_acquire(bp_info->bp, &bufs, 1, 0);
+	if (ret <= 0) {
+		DPAA_PMD_WARN("Failed to allocate buffers %d", ret);
+		return (void *)buf;
+	}
+
+	DPAA_RX_LOG(DEBUG, "got buffer 0x%lx from pool %d",
+		    (uint64_t)bufs.addr, bufs.bpid);
+
+	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
+
+	return (void *)buf;
+}
+
+static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
+					     struct dpaa_if *dpaa_intf)
+{
+	struct rte_mbuf *dpaa_mbuf;
+
+	/* allocate pktbuffer on bpid for dpaa port */
+	dpaa_mbuf = dpaa_get_pktbuf(dpaa_intf->bp_info);
+	if (!dpaa_mbuf)
+		return NULL;
+
+	memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + mbuf->data_off, (void *)
+		((uint8_t *)(mbuf->buf_addr) + mbuf->data_off), mbuf->pkt_len);
+
+	/* Copy only the required fields */
+	dpaa_mbuf->data_off = mbuf->data_off;
+	dpaa_mbuf->pkt_len = mbuf->pkt_len;
+	dpaa_mbuf->ol_flags = mbuf->ol_flags;
+	dpaa_mbuf->packet_type = mbuf->packet_type;
+	dpaa_mbuf->tx_offload = mbuf->tx_offload;
+	rte_pktmbuf_free(mbuf);
+	return dpaa_mbuf;
+}
+
+/* Handle mbufs which are not segmented (non SG) */
+static inline void
+tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
+			    struct dpaa_bp_info *bp_info,
+			    struct qm_fd *fd_arr)
+{
+	struct rte_mbuf *mi = NULL;
+
+	if (RTE_MBUF_DIRECT(mbuf)) {
+		if (rte_mbuf_refcnt_read(mbuf) > 1) {
+			/* In case of direct mbuf and mbuf being cloned,
+			 * BMAN should _not_ release buffer.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, 0xff);
+			/* Buffer should be released by EAL */
+			rte_mbuf_refcnt_update(mbuf, -1);
+		} else {
+			/* In case of direct mbuf and no cloning, mbuf can be
+			 * released by BMAN.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+		}
+	} else {
+		/* 'mi' is the data-containing core mbuf */
+		mi = rte_mbuf_from_indirect(mbuf);
+		if (rte_mbuf_refcnt_read(mi) > 1) {
+			/* In case of indirect mbuf, and mbuf being cloned,
+			 * BMAN should _not_ release it and let EAL release
+			 * it through pktmbuf_free below.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, 0xff);
+		} else {
+			/* In case of indirect mbuf, and no cloning, core mbuf
+			 * should be released by BMAN.
+			 * Increase refcnt of core mbuf so that when
+			 * pktmbuf_free is called and mbuf is released, EAL
+			 * doesn't try to release core mbuf which would have
+			 * been released by BMAN.
+			 */
+			rte_mbuf_refcnt_update(mi, 1);
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+		}
+		rte_pktmbuf_free(mbuf);
+	}
+}
+
+/* Handle all mbufs on dpaa BMAN managed pool */
+static inline uint16_t
+tx_on_dpaa_pool(struct rte_mbuf *mbuf,
+		struct dpaa_bp_info *bp_info,
+		struct qm_fd *fd_arr)
+{
+	DPAA_TX_LOG(DEBUG, "BMAN offloaded buffer, mbuf: %p", mbuf);
+
+	if (mbuf->nb_segs == 1) {
+		/* Case for non-segmented buffers */
+		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr);
+	} else {
+		DPAA_PMD_DEBUG("Number of Segments not supported");
+		return 1;
+	}
+
+	return 0;
+}
+
+/* Handle all mbufs on an external pool (non-dpaa2) */
+static inline uint16_t
+tx_on_external_pool(struct qman_fq *txq, struct rte_mbuf *mbuf,
+		    struct qm_fd *fd_arr)
+{
+	struct dpaa_if *dpaa_intf = txq->dpaa_intf;
+	struct rte_mbuf *dmable_mbuf;
+
+	DPAA_TX_LOG(DEBUG, "Non-BMAN offloaded buffer."
+		    " Allocating an offloaded buffer");
+	dmable_mbuf = dpaa_get_dmable_mbuf(mbuf, dpaa_intf);
+	if (!dmable_mbuf) {
+		DPAA_TX_LOG(DEBUG, "no dpaa buffers.");
+		return 1;
+	}
+
+	DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, dpaa_intf->bp_info->bpid);
+
+	return 0;
+}
+
+uint16_t
+dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	struct rte_mbuf *mbuf, *mi = NULL;
+	struct rte_mempool *mp;
+	struct dpaa_bp_info *bp_info;
+	struct qm_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send, loop, i = 0;
+	uint16_t state;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_PMD_ERR("Failure in affining portal");
+		return 0;
+	}
+
+	DPAA_TX_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+
+	while (nb_bufs) {
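+		/* Enqueue in bursts of at most MAX_TX_RING_SLOTS frames */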
+		frames_to_send = (nb_bufs >> 3) ? MAX_TX_RING_SLOTS : nb_bufs;
+		for (loop = 0; loop < frames_to_send; loop++, i++) {
+			mbuf = bufs[i];
+			if (RTE_MBUF_DIRECT(mbuf)) {
+				mp = mbuf->pool;
+			} else {
+				mi = rte_mbuf_from_indirect(mbuf);
+				mp = mi->pool;
+			}
+
+			bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+			if (likely(mp->ops_index == bp_info->dpaa_ops_index)) {
+				state = tx_on_dpaa_pool(mbuf, bp_info,
+							&fd_arr[loop]);
+				if (unlikely(state)) {
+					/* Set frames_to_send & nb_bufs so
+					 * that packets are transmitted till
+					 * previous frame.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			} else {
+				state = tx_on_external_pool(q, mbuf,
+							    &fd_arr[loop]);
+				if (unlikely(state)) {
+					/* Set frames_to_send & nb_bufs so
+					 * that packets are transmitted till
+					 * previous frame.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			}
+		}
+
+send_pkts:
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qman_enqueue_multi(q, &fd_arr[loop],
+					frames_to_send - loop);
+		}
+		nb_bufs -= frames_to_send;
+	}
+
+	DPAA_TX_LOG(DEBUG, "Transmitted %d buffers on queue: %p", i, q);
+
+	return i;
+}
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+		uint16_t nb_bufs __rte_unused)
+{
+	DPAA_TX_LOG(DEBUG, "Drop all packets");
+
+	/* Drop all incoming packets. No need to free packets here
+	 * because the rte_eth framework frees them up through the
+	 * tx_buffer callback in case this function returns a count
+	 * less than nb_bufs.
+	 */
+	return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
new file mode 100644
index 0000000..45bfae8
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -0,0 +1,61 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPDK_RXTX_H__
+#define __DPDK_RXTX_H__
+
+/* Internal offset from where the IC is copied to the packet buffer */
+#define DEFAULT_ICIOF          32
+/* IC transfer size */
+#define DEFAULT_ICSZ	48
+
+/* IC offsets from buffer header address */
+#define DEFAULT_RX_ICEOF	16
+
+#define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
+	/**< Maximum number of frames to be dequeued in a single Rx call */
+/* FD structure masks and offset */
+#define DPAA_FD_FORMAT_MASK 0xE0000000
+#define DPAA_FD_OFFSET_MASK 0x1FF00000
+#define DPAA_FD_LENGTH_MASK 0xFFFFF
+#define DPAA_FD_FORMAT_SHIFT 29
+#define DPAA_FD_OFFSET_SHIFT 20
+
+uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+			      uint16_t nb_bufs __rte_unused);
+#endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9c5a171..7440848 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -185,6 +185,7 @@ endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_mempool_dpaa
 endif
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

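A minimal application-side sketch of how these burst handlers are
exercised through the generic ethdev API (hypothetical snippet, not part
of the patch; assumes port 0 is a configured and started DPAA interface):

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	void fwd_loop(uint8_t port)
	{
		struct rte_mbuf *pkts[32];
		uint16_t nb_rx, nb_tx, i;

		for (;;) {
			/* resolves to dpaa_eth_queue_rx() on DPAA ports */
			nb_rx = rte_eth_rx_burst(port, 0, pkts, 32);
			/* resolves to dpaa_eth_queue_tx() on DPAA ports */
			nb_tx = rte_eth_tx_burst(port, 0, pkts, nb_rx);
			for (i = nb_tx; i < nb_rx; i++)
				rte_pktmbuf_free(pkts[i]);
		}
	}
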
* [PATCH v4 26/41] net/dpaa: add support for MTU update
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (24 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 27/41] net/dpaa: add support for jumbo frames Shreyansh Jain
                         ` (17 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
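A hypothetical usage sketch (not part of the patch) showing how the new
.mtu_set op is reached through the generic ethdev API, assuming port 0
is a started DPAA interface:

	#include <rte_ethdev.h>

	int set_port_mtu(void)
	{
		/* reaches dpaa_mtu_set() -> fman_if_set_maxfrm() */
		return rte_eth_dev_set_mtu(0, 1500);
	}
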
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 9e8befc..59ef23d 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,5 +4,6 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index ab19b2e..ad3eaac 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -76,6 +76,26 @@
 static int is_global_init;
 
 static int
+dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (mtu < ETHER_MIN_MTU)
+		return -EINVAL;
+	if (mtu > ETHER_MAX_LEN)
+		return -1;
+
+	dev->data->dev_conf.rxmode.jumbo_frame = 0;
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
+
+	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
+
+	return 0;
+}
+
+static int
 dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
@@ -197,6 +217,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+	.mtu_set		  = dpaa_mtu_set,
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 27/41] net/dpaa: add support for jumbo frames
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (25 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 26/41] net/dpaa: add support for MTU update Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 28/41] net/dpaa: add support for link status update Shreyansh Jain
                         ` (16 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
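A hypothetical configure-time sketch (not part of the patch) enabling
jumbo frames with the standard ethdev API; 9000 is an illustrative frame
size that must stay within DPAA_MAX_RX_PKT_LEN:

	#include <rte_ethdev.h>

	int configure_jumbo(uint8_t port)
	{
		struct rte_eth_conf conf = {
			.rxmode = {
				.max_rx_pkt_len = 9000,	/* illustrative */
				.jumbo_frame = 1,
			},
		};

		/* dpaa_eth_dev_configure() validates the length and calls
		 * dpaa_mtu_set()
		 */
		return rte_eth_dev_configure(port, 1, 1, &conf);
	}
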
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 13 +++++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 59ef23d..e62812c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index ad3eaac..d0bab36 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -85,9 +85,10 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	if (mtu < ETHER_MIN_MTU)
 		return -EINVAL;
 	if (mtu > ETHER_MAX_LEN)
-		return -1;
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
 
-	dev->data->dev_conf.rxmode.jumbo_frame = 0;
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
 
 	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
@@ -100,6 +101,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 
+	if (dev->data->dev_conf.rxmode.jumbo_frame == 1) {
+		if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
+		    DPAA_MAX_RX_PKT_LEN)
+			return dpaa_mtu_set(dev,
+				dev->data->dev_conf.rxmode.max_rx_pkt_len);
+		else
+			return -1;
+	}
 	return 0;
 }
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 28/41] net/dpaa: add support for link status update
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (26 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 27/41] net/dpaa: add support for jumbo frames Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-18 14:56         ` Ferruh Yigit
  2017-09-09 11:21       ` [PATCH v4 29/41] net/dpaa: add support for device info and speed capability Shreyansh Jain
                         ` (15 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
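A hypothetical sketch (not part of the patch) of reading the link state
that this op reports, via the generic ethdev call:

	#include <stdio.h>
	#include <rte_ethdev.h>

	void show_link(uint8_t port)
	{
		struct rte_eth_link link;

		/* invokes the new .link_update op (dpaa_eth_link_update) */
		rte_eth_link_get_nowait(port, &link);
		printf("port %u: %s, %u Mbps\n", port,
		       link.link_status ? "up" : "down", link.link_speed);
	}
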
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 42 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index e62812c..132f94b 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index d0bab36..75fded2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -142,6 +142,28 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int dpaa_eth_link_update(struct rte_eth_dev *dev,
+				int wait_to_complete __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_link *link = &dev->data->dev_link;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dpaa_intf->fif->mac_type == fman_mac_1g)
+		link->link_speed = 1000;
+	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
+		link->link_speed = 10000;
+	else
+		DPAA_PMD_ERR("invalid link_speed: %s, %d",
+			     dpaa_intf->name, dpaa_intf->fif->mac_type);
+
+	link->link_status = dpaa_intf->valid;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = ETH_LINK_AUTONEG;
+	return 0;
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -216,6 +238,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 	PMD_INIT_FUNC_TRACE();
 }
 
+static int dpaa_link_down(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+	return 0;
+}
+
+static int dpaa_link_up(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_start(dev);
+	return 0;
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -226,7 +264,11 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+
+	.link_update		  = dpaa_eth_link_update,
 	.mtu_set		  = dpaa_mtu_set,
+	.dev_set_link_down	  = dpaa_link_down,
+	.dev_set_link_up	  = dpaa_link_up,
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 29/41] net/dpaa: add support for device info and speed capability
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (27 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 28/41] net/dpaa: add support for link status update Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 30/41] net/dpaa: add support for promiscuous toggle Shreyansh Jain
                         ` (14 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
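A hypothetical sketch (not part of the patch) of querying the values
this op fills in:

	#include <stdio.h>
	#include <rte_ethdev.h>

	void show_dev_info(uint8_t port)
	{
		struct rte_eth_dev_info info;

		rte_eth_dev_info_get(port, &info); /* dpaa_eth_dev_info() */
		printf("rxq=%u txq=%u speed_capa=0x%x\n", info.max_rx_queues,
		       info.max_tx_queues, info.speed_capa);
	}
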
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 132f94b..19beada 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 75fded2..9751145 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -142,6 +142,25 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
+			      struct rte_eth_dev_info *dev_info)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->max_rx_queues = dpaa_intf->nb_rx_queues;
+	dev_info->max_tx_queues = dpaa_intf->nb_tx_queues;
+	dev_info->min_rx_bufsize = DPAA_MIN_RX_BUF_SIZE;
+	dev_info->max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
+	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
+	dev_info->max_hash_mac_addrs = 0;
+	dev_info->max_vfs = 0;
+	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
+				ETH_LINK_SPEED_10G);
+}
+
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 				int wait_to_complete __rte_unused)
 {
@@ -259,6 +278,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+	.dev_infos_get		  = dpaa_eth_dev_info,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 30/41] net/dpaa: add support for promiscuous toggle
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (28 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 29/41] net/dpaa: add support for device info and speed capability Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 31/41] net/dpaa: add support for multicast toggle Shreyansh Jain
                         ` (13 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
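A hypothetical usage sketch (not part of the patch); both calls land in
the FMan through the new devops:

	#include <rte_ethdev.h>

	void toggle_promisc(uint8_t port, int on)
	{
		if (on)
			rte_eth_promiscuous_enable(port);
		else
			rte_eth_promiscuous_disable(port);
	}
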
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 19beada..b2dfd81 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -8,5 +8,6 @@ Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
+Promiscuous mode     = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 9751145..803b9df 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -183,6 +183,25 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+
+static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_enable(dpaa_intf->fif);
+}
+
+static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_disable(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -286,6 +305,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
+	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 31/41] net/dpaa: add support for multicast toggle
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (29 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 30/41] net/dpaa: add support for promiscuous toggle Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 32/41] net/dpaa: add support for MAC address update Shreyansh Jain
                         ` (12 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
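A hypothetical usage sketch (not part of the patch); the calls program
or reset the FMan multicast filter table through the new devops:

	#include <rte_ethdev.h>

	void toggle_allmulti(uint8_t port, int on)
	{
		if (on)
			rte_eth_allmulticast_enable(port);
		else
			rte_eth_allmulticast_disable(port);
	}
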
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index b2dfd81..f21a85f 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -9,5 +9,6 @@ Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
+Allmulticast mode    = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 803b9df..982e762 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -202,6 +202,24 @@ static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
 	fman_if_promiscuous_disable(dpaa_intf->fif);
 }
 
+static void dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_set_mcast_filter_table(dpaa_intf->fif);
+}
+
+static void dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_reset_mcast_filter_table(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -307,6 +325,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.link_update		  = dpaa_eth_link_update,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
+	.allmulticast_enable	  = dpaa_eth_multicast_enable,
+	.allmulticast_disable	  = dpaa_eth_multicast_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 32/41] net/dpaa: add support for MAC address update
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (30 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 31/41] net/dpaa: add support for multicast toggle Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 33/41] net/dpaa: add support for basic stats Shreyansh Jain
                         ` (11 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
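A hypothetical sketch (not part of the patch) adding a secondary unicast
MAC filter; the address below is an arbitrary locally administered
example:

	#include <rte_ethdev.h>
	#include <rte_ether.h>

	int add_extra_mac(uint8_t port)
	{
		struct ether_addr mac = {
			.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
		};

		/* reaches dpaa_dev_add_mac_addr() -> fman_if_add_mac_addr() */
		return rte_eth_dev_mac_addr_add(port, &mac, 0);
	}
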
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 48 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index f21a85f..cdf5e46 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -10,5 +10,6 @@ Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
+Unicast MAC filter   = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 982e762..437943e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -310,6 +310,50 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *addr,
+			     uint32_t index,
+			     __rte_unused uint32_t pool)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, index);
+
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: adding the MAC address failed:"
+			" err = %d", ret);
+	return ret;
+}
+
+static void
+dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
+			  uint32_t index)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_clear_mac_addr(dpaa_intf->fif, index);
+}
+
+static void
+dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
+		       struct ether_addr *addr)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, 0);
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Setting the MAC ADDR failed %d", ret);
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -330,6 +374,10 @@ static struct eth_dev_ops dpaa_devops = {
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
+	.mac_addr_add		  = dpaa_dev_add_mac_addr,
+	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
+	.mac_addr_set		  = dpaa_dev_set_mac_addr,
+
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 33/41] net/dpaa: add support for basic stats
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (31 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 32/41] net/dpaa: add support for MAC address update Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 34/41] net/dpaa: add support for flow control Shreyansh Jain
                         ` (10 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
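A hypothetical sketch (not part of the patch) of reading and clearing
the FMan counters through the generic stats API:

	#include <stdio.h>
	#include <inttypes.h>
	#include <rte_ethdev.h>

	void dump_and_clear_stats(uint8_t port)
	{
		struct rte_eth_stats st;

		if (rte_eth_stats_get(port, &st) == 0)	/* fman_if_stats_get() */
			printf("rx=%" PRIu64 " tx=%" PRIu64 "\n",
			       st.ipackets, st.opackets);
		rte_eth_stats_reset(port);	/* fman_if_stats_reset() */
	}
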
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index cdf5e46..c09efd8 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,5 +11,6 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 437943e..178508e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -183,6 +183,24 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void dpaa_eth_stats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_stats *stats)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_get(dpaa_intf->fif, stats);
+}
+
+static void dpaa_eth_stats_reset(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_reset(dpaa_intf->fif);
+}
 
 static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
@@ -367,6 +385,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.stats_get		  = dpaa_eth_stats_get,
+	.stats_reset		  = dpaa_eth_stats_reset,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.allmulticast_enable	  = dpaa_eth_multicast_enable,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 34/41] net/dpaa: add support for flow control
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (32 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 33/41] net/dpaa: add support for basic stats Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 35/41] net/dpaa: add support for hashed RSS Shreyansh Jain
                         ` (9 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
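A hypothetical sketch (not part of the patch) enabling Tx pause frames;
the threshold values are illustrative only (in this driver they map to
BMan buffer-pool depletion thresholds):

	#include <rte_ethdev.h>

	int enable_tx_pause(uint8_t port)
	{
		struct rte_eth_fc_conf fc = {
			.mode = RTE_FC_TX_PAUSE,
			.high_water = 80,	/* illustrative */
			.low_water = 60,	/* illustrative */
			.pause_time = 1337,
		};

		/* reaches dpaa_flow_ctrl_set() -> fman_if_set_fc_threshold() */
		return rte_eth_dev_flow_ctrl_set(port, &fc);
	}
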
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 112 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index c09efd8..1ba6b11 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Flow control         = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 178508e..f423e51 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -329,6 +329,85 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 }
 
 static int
+dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			DPAA_PMD_ERR("unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	net_fc = dpaa_intf->fc_conf;
+
+	if (fc_conf->high_water < fc_conf->low_water) {
+		DPAA_PMD_ERR("Incorrect Flow Control Configuration");
+		return -EINVAL;
+	}
+
+	if (fc_conf->mode == RTE_FC_NONE) {
+		return 0;
+	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_FC_FULL) {
+		fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water,
+					 fc_conf->low_water,
+				dpaa_intf->bp_info->bpid);
+		if (fc_conf->pause_time)
+			fman_if_set_fc_quanta(dpaa_intf->fif,
+					      fc_conf->pause_time);
+	}
+
+	/* Save the information in dpaa device */
+	net_fc->pause_time = fc_conf->pause_time;
+	net_fc->high_water = fc_conf->high_water;
+	net_fc->low_water = fc_conf->low_water;
+	net_fc->send_xon = fc_conf->send_xon;
+	net_fc->mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
+	net_fc->mode = fc_conf->mode;
+	net_fc->autoneg = fc_conf->autoneg;
+
+	return 0;
+}
+
+static int
+dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc = dpaa_intf->fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (net_fc) {
+		fc_conf->pause_time = net_fc->pause_time;
+		fc_conf->high_water = net_fc->high_water;
+		fc_conf->low_water = net_fc->low_water;
+		fc_conf->send_xon = net_fc->send_xon;
+		fc_conf->mac_ctrl_frame_fwd = net_fc->mac_ctrl_frame_fwd;
+		fc_conf->mode = net_fc->mode;
+		fc_conf->autoneg = net_fc->autoneg;
+		return 0;
+	}
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
+static int
 dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *addr,
 			     uint32_t index,
@@ -384,6 +463,9 @@ static struct eth_dev_ops dpaa_devops = {
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
+	.flow_ctrl_get		  = dpaa_flow_ctrl_get,
+	.flow_ctrl_set		  = dpaa_flow_ctrl_set,
+
 	.link_update		  = dpaa_eth_link_update,
 	.stats_get		  = dpaa_eth_stats_get,
 	.stats_reset		  = dpaa_eth_stats_reset,
@@ -400,6 +482,33 @@ static struct eth_dev_ops dpaa_devops = {
 
 };
 
+static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
+{
+	struct rte_eth_fc_conf *fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			DPAA_PMD_ERR("unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	fc_conf = dpaa_intf->fc_conf;
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
 /* Initialise an Rx FQ */
 static int dpaa_rx_queue_init(struct qman_fq *fq,
 			      uint32_t fqid)
@@ -553,6 +662,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 
 	DPAA_PMD_DEBUG("All frame queues created");
 
+	/* Get the initial configuration for flow control */
+	dpaa_fc_set_default(dpaa_intf);
+
 	/* reset bpool list, initialize bpool dynamically */
 	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
 		list_del(&bp->node);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 35/41] net/dpaa: add support for hashed RSS
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (33 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 34/41] net/dpaa: add support for flow control Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 36/41] net/dpaa: add support for packet type parsing Shreyansh Jain
                         ` (8 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
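A hypothetical configure-time sketch (not part of the patch) requesting
RSS with a hash-field subset of the advertised DPAA_RSS_OFFLOAD_ALL:

	#include <rte_ethdev.h>

	int configure_rss(uint8_t port, uint16_t nb_rxq)
	{
		struct rte_eth_conf conf = {
			.rxmode = { .mq_mode = ETH_MQ_RX_RSS, },
			.rx_adv_conf.rss_conf.rss_hf =
				ETH_RSS_NONFRAG_IPV4_TCP |
				ETH_RSS_NONFRAG_IPV4_UDP,
		};

		return rte_eth_dev_configure(port, nb_rxq, 1, &conf);
	}
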
 drivers/net/dpaa/dpaa_ethdev.c |  1 +
 drivers/net/dpaa/dpaa_ethdev.h | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f423e51..b1525a4 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -157,6 +157,7 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
 				ETH_LINK_SPEED_10G);
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 2f25acb..e1e062e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -88,6 +88,16 @@
 #define DPAA_DEBUG_FQ_RX_ERROR   0
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
+#define DPAA_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP)
+
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
 		PKT_TX_TCP_CKSUM |               \
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 36/41] net/dpaa: add support for packet type parsing
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (34 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 35/41] net/dpaa: add support for hashed RSS Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-18 14:56         ` Ferruh Yigit
  2017-09-09 11:21       ` [PATCH v4 37/41] net/dpaa: add support for checksum offload Shreyansh Jain
                         ` (7 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Add support for parsing the packet type and L2/L3 checksum offload
capability information.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
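A hypothetical sketch (not part of the patch) listing the packet types
the PMD advertises through the new op:

	#include <stdio.h>
	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	void show_ptypes(uint8_t port)
	{
		uint32_t ptypes[16];
		int i, n;

		/* invokes dpaa_supported_ptypes_get() */
		n = rte_eth_dev_get_supported_ptypes(port, RTE_PTYPE_ALL_MASK,
						     ptypes, 16);
		for (i = 0; i < n && i < 16; i++)
			printf("ptype 0x%08x\n", ptypes[i]);
	}
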
 doc/guides/nics/features/dpaa.ini |   2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  27 +++++
 drivers/net/dpaa/dpaa_rxtx.c      | 116 +++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 206 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 351 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 1ba6b11..2ef1b56 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,7 +11,9 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+RSS hash             = Y
 Flow control         = Y
+Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index b1525a4..64c70b8 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -112,6 +112,28 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	return 0;
 }
 
+static const uint32_t *
+dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		/* TODO: add more types */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_SCTP
+	};
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->rx_pkt_burst == dpaa_eth_queue_rx)
+		return ptypes;
+	return NULL;
+}
+
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -160,6 +182,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
 				ETH_LINK_SPEED_10G);
+	dev_info->rx_offload_capa =
+		(DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM   |
+		DEV_RX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
@@ -458,6 +484,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
 	.dev_infos_get		  = dpaa_eth_dev_info,
+	.dev_supported_ptypes_get = dpaa_supported_ptypes_get,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 80adf9c..90be40d 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,121 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
+				     uint64_t prs __rte_unused)
+{
+	DPAA_RX_LOG(DEBUG, "Slow parsing");
+	/*TBD:XXX: to be implemented*/
+}
+
+static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
+					uint64_t fd_virt_addr)
+{
+	struct annotations_t *annot = GET_ANNOTATIONS(fd_virt_addr);
+	uint64_t prs = *((uint64_t *)(&annot->parse)) & DPAA_PARSE_MASK;
+
+	DPAA_RX_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
+
+	switch (prs) {
+	case DPAA_PKT_TYPE_NONE:
+		m->packet_type = 0;
+		break;
+	case DPAA_PKT_TYPE_ETHER:
+		m->packet_type = RTE_PTYPE_L2_ETHER;
+		break;
+	case DPAA_PKT_TYPE_IPV4:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4;
+		break;
+	case DPAA_PKT_TYPE_IPV6:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6;
+		break;
+	case DPAA_PKT_TYPE_IPV4_FRAG:
+	case DPAA_PKT_TYPE_IPV4_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV6_FRAG:
+	case DPAA_PKT_TYPE_IPV6_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV4_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP;
+		break;
+	/* More switch cases can be added */
+	default:
+		dpaa_slow_parsing(m, prs);
+	}
+
+	m->tx_offload = annot->parse.ip_off[0];
+	m->tx_offload |= (annot->parse.l4_off - annot->parse.ip_off[0])
+					<< DPAA_PKT_L3_LEN_SHIFT;
+
+	/* Set the hash values */
+	m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
+	m->ol_flags = PKT_RX_RSS_HASH;
+	/* All packets with a bad checksum are dropped by the interface (and
+	 * a corresponding notification is issued to the Rx error queues).
+	 */
+	m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	/* Check whether a VLAN tag is present */
+	if (prs & DPAA_PARSE_VLAN_MASK)
+		m->ol_flags |= PKT_RX_VLAN_PKT;
+	/* The packet is received without the VLAN being stripped */
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -117,6 +232,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 	mbuf->ol_flags = 0;
 	mbuf->next = NULL;
 	rte_mbuf_refcnt_set(mbuf, 1);
+	dpaa_eth_packet_info(mbuf, (uint64_t)mbuf->buf_addr);
 
 	return mbuf;
 }
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 45bfae8..68d2c41 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -44,6 +44,7 @@
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/**< Maximum number of frames to be dequeued in a single Rx call */
+
 /* FD structure masks and offset */
 #define DPAA_FD_FORMAT_MASK 0xE0000000
 #define DPAA_FD_OFFSET_MASK 0x1FF00000
@@ -51,6 +52,211 @@
 #define DPAA_FD_FORMAT_SHIFT 29
 #define DPAA_FD_OFFSET_SHIFT 20
 
+/* Parsing mask (Little Endian) - 0x00E044ED00800000
+ *	Classification Plan ID 0x00
+ *	L4R 0xE0 -
+ *		0x20 - TCP
+ *		0x40 - UDP
+ *		0x80 - SCTP
+ *	L3R 0xEDC4 (in Big Endian) -
+ *		0x8000 - IPv4
+ *		0x4000 - IPv6
+ *		0x8140 - IPv4 Ext + Frag
+ *		0x8040 - IPv4 Frag
+ *		0x8100 - IPv4 Ext
+ *		0x4140 - IPv6 Ext + Frag
+ *		0x4040 - IPv6 Frag
+ *		0x4100 - IPv6 Ext
+ *	L2R 0x8000 (in Big Endian) -
+ *		0x8000 - Ethernet type
+ *	ShimR & Logical Port ID 0x0000
+ */
+#define DPAA_PARSE_MASK			0x00E044ED00800000
+#define DPAA_PARSE_VLAN_MASK		0x0000000000700000
+
+/* Parsed values (Little Endian) */
+#define DPAA_PKT_TYPE_NONE		0x0000000000000000
+#define DPAA_PKT_TYPE_ETHER		0x0000000000800000
+#define DPAA_PKT_TYPE_IPV4 \
+			(0x0000008000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV6 \
+			(0x0000004000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_GRE \
+			(0x0000002000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV4_FRAG	\
+			(0x0000400000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_FRAG	\
+			(0x0000400000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_EXT \
+			(0x0000000100000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_EXT \
+			(0x0000000100000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_SCTP	\
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_SCTP	\
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_FRAG_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_SCTP \
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_SCTP \
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_EXT_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_IPV4_EXT_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_TUNNEL_4_4 \
+			(0x0000000800000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6 \
+			(0x0000000400000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6 \
+			(0x0000000400000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_4 \
+			(0x0000000800000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_L3_LEN_SHIFT	7
+
+/**
+ * FMan parse result array
+ */
+struct dpaa_eth_parse_results_t {
+	 uint8_t     lpid;		 /**< Logical port id */
+	 uint8_t     shimr;		 /**< Shim header result  */
+	 union {
+		uint16_t              l2r;	/**< Layer 2 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      ethernet:1;
+			uint16_t      vlan:1;
+			uint16_t      llc_snap:1;
+			uint16_t      mpls:1;
+			uint16_t      pppoe_ppp:1;
+			uint16_t      unused_1:3;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      eth_frame_type:2;
+			uint16_t      l2r_err:5;
+			/*00-unicast, 01-multicast, 11-broadcast*/
+#else
+			uint16_t      l2r_err:5;
+			uint16_t      eth_frame_type:2;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      unused_1:3;
+			uint16_t      pppoe_ppp:1;
+			uint16_t      mpls:1;
+			uint16_t      llc_snap:1;
+			uint16_t      vlan:1;
+			uint16_t      ethernet:1;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint16_t              l3r;	/**< Layer 3 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      first_ipv4:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      gre:1;
+			uint16_t      min_enc:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      last_ipv6:1;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      last_ip_err_code:3;
+#else
+			uint16_t      last_ip_err_code:3;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      last_ipv6:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      min_enc:1;
+			uint16_t      gre:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      first_ipv4:1;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint8_t               l4r;	/**< Layer 4 result */
+		struct{
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint8_t	       l4_type:3;
+			uint8_t	       l4_info_err:1;
+			uint8_t	       l4_result:4;
+					/* if type IPSec: 1 ESP, 2 AH */
+#else
+			uint8_t        l4_result:4;
+					/* if type IPSec: 1 ESP, 2 AH */
+			uint8_t        l4_info_err:1;
+			uint8_t        l4_type:3;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 uint8_t     cplan;		 /**< Classification plan id */
+	 uint16_t    nxthdr;		 /**< Next Header  */
+	 uint16_t    cksum;		 /**< Checksum */
+	 uint32_t    lcv;		 /**< LCV */
+	 uint8_t     shim_off[3];	 /**< Shim offset */
+	 uint8_t     eth_off;		 /**< ETH offset */
+	 uint8_t     llc_snap_off;	 /**< LLC_SNAP offset */
+	 uint8_t     vlan_off[2];	 /**< VLAN offset */
+	 uint8_t     etype_off;		 /**< ETYPE offset */
+	 uint8_t     pppoe_off;		 /**< PPP offset */
+	 uint8_t     mpls_off[2];	 /**< MPLS offset */
+	 uint8_t     ip_off[2];		 /**< IP offset */
+	 uint8_t     gre_off;		 /**< GRE offset */
+	 uint8_t     l4_off;		 /**< Layer 4 offset */
+	 uint8_t     nxthdr_off;	 /**< Parser end point */
+} __attribute__ ((__packed__));
+
+/* The structure is the Prepended Data to the Frame which is used by FMAN */
+struct annotations_t {
+	uint8_t reserved[DEFAULT_RX_ICEOF];
+	struct dpaa_eth_parse_results_t parse;	/**< Pointer to Parsed result*/
+	uint64_t reserved1;
+	uint64_t hash;			/**< Hash Result */
+};
+
+#define GET_ANNOTATIONS(_buf) \
+	(struct annotations_t *)(_buf)
+
+#define GET_RX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
+
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 37/41] net/dpaa: add support for checksum offload
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (35 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 36/41] net/dpaa: add support for packet type parsing Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 38/41] net/dpaa: add support for Scattered Rx Shreyansh Jain
                         ` (6 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
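A hypothetical Tx-side sketch (not part of the patch): since the driver
keys the offload on the mbuf's packet_type and l2/l3 lengths, an
application requesting hardware checksum on an IPv4/UDP frame would set
these fields before calling rte_eth_tx_burst():

	#include <rte_ether.h>
	#include <rte_ip.h>
	#include <rte_mbuf.h>

	void request_udp_csum(struct rte_mbuf *m)
	{
		m->l2_len = sizeof(struct ether_hdr);
		m->l3_len = sizeof(struct ipv4_hdr);
		m->packet_type = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 |
				 RTE_PTYPE_L4_UDP;
		/* flags covered by DPAA_TX_CKSUM_OFFLOAD_MASK */
		m->ol_flags |= PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM;
	}
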
 doc/guides/nics/features/dpaa.ini |  2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  4 ++
 drivers/net/dpaa/dpaa_rxtx.c      | 89 +++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 23 +++++++++-
 4 files changed, 117 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 2ef1b56..23626c0 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -13,6 +13,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 RSS hash             = Y
 Flow control         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 64c70b8..1deefd3 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -186,6 +186,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 		(DEV_RX_OFFLOAD_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_UDP_CKSUM   |
 		DEV_RX_OFFLOAD_TCP_CKSUM);
+	dev_info->tx_offload_capa =
+		(DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 90be40d..0f43bb4 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -200,6 +200,82 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
 	/* The packet is received without the VLAN being stripped */
 }
 
+static inline void dpaa_checksum(struct rte_mbuf *mbuf)
+{
+	struct ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf, struct ether_hdr *);
+	char *l3_hdr = (char *)eth_hdr + mbuf->l2_len;
+	struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+	struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	DPAA_TX_LOG(DEBUG, "Calculating checksum for mbuf: %p", mbuf);
+
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	    ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	    RTE_PTYPE_L3_IPV4_EXT)) {
+		ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+		ipv4_hdr->hdr_checksum = 0;
+		ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
+	} else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6_EXT))
+		ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP) {
+		struct tcp_hdr *tcp_hdr = (struct tcp_hdr *)(l3_hdr +
+					  mbuf->l3_len);
+		tcp_hdr->cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			tcp_hdr->cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+							       tcp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			tcp_hdr->cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+							       tcp_hdr);
+	} else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) ==
+		   RTE_PTYPE_L4_UDP) {
+		struct udp_hdr *udp_hdr = (struct udp_hdr *)(l3_hdr +
+							     mbuf->l3_len);
+		udp_hdr->dgram_cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			udp_hdr->dgram_cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+								     udp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			udp_hdr->dgram_cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+								     udp_hdr);
+	}
+}
+
+static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
+					 struct qm_fd *fd, char *prs_buf)
+{
+	struct dpaa_eth_parse_results_t *prs;
+
+	DPAA_TX_LOG(DEBUG, " Offloading checksum for mbuf: %p", mbuf);
+
+	prs = GET_TX_PRS(prs_buf);
+	prs->l3r = 0;
+	prs->l4r = 0;
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	   RTE_PTYPE_L3_IPV4_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV4;
+	else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		 ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		RTE_PTYPE_L3_IPV6_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV6;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_TCP;
+	else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_UDP;
+
+	prs->ip_off[0] = mbuf->l2_len;
+	prs->l4_off = mbuf->l3_len + mbuf->l2_len;
+	/* Enable L3 (and L4, if TCP or UDP) HW checksum*/
+	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -358,6 +434,19 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 		}
 		rte_pktmbuf_free(mbuf);
 	}
+
+	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+		if (mbuf->data_off < (DEFAULT_TX_ICEOF +
+		    sizeof(struct dpaa_eth_parse_results_t))) {
+			DPAA_TX_LOG(DEBUG, "Checksum offload error: "
+				"not enough headroom for HW checksum "
+				"offload; calculating the checksum in "
+				"software.");
+			dpaa_checksum(mbuf);
+		} else {
+			dpaa_checksum_offload(mbuf, fd_arr, mbuf->buf_addr);
+		}
+	}
 }
 
 /* Handle all mbufs on dpaa BMAN managed pool */
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 68d2c41..d10298e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -41,6 +41,22 @@
 
 /* IC offsets from buffer header address */
 #define DEFAULT_RX_ICEOF	16
+#define DEFAULT_TX_ICEOF	16
+
+/*
+ * Values for the L3R field of the FM Parse Results
+ */
+/* L3 Type field: First IP Present IPv4 */
+#define DPAA_L3_PARSE_RESULT_IPV4 0x80
+/* L3 Type field: First IP Present IPv6 */
+#define DPAA_L3_PARSE_RESULT_IPV6	0x40
+/* Values for the L4R field of the FM Parse Results
+ * See $8.8.4.7.20 - L4 HXS - L4 Results from DPAA-Rev2 Reference Manual.
+ */
+/* L4 Type field: UDP */
+#define DPAA_L4_PARSE_RESULT_UDP	0x40
+/* L4 Type field: TCP */
+#define DPAA_L4_PARSE_RESULT_TCP	0x20
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/**< Maximum number of frames to be dequeued in a single Rx call */
@@ -255,7 +271,12 @@ struct annotations_t {
 	(struct annotations_t *)(_buf)
 
 #define GET_RX_PRS(_buf) \
-	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)(_buf) + \
+	DEFAULT_RX_ICEOF)
+
+#define GET_TX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)(_buf) + \
+	DEFAULT_TX_ICEOF)
 
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
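
For illustration (not part of this patch; the helper name is
hypothetical), an application would typically prepare an mbuf for these
offloads as below, since the Tx path above reads l2_len/l3_len and
ol_flags:

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>

/* Hedged sketch: mark an IPv4/TCP mbuf for HW checksum offload, using
 * the mbuf flag and header type names current at the time of this
 * series.
 */
static void mark_for_cksum_offload(struct rte_mbuf *m)
{
	m->l2_len = sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv4_hdr);
	m->ol_flags |= PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
}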
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 38/41] net/dpaa: add support for Scattered Rx
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (36 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 37/41] net/dpaa: add support for checksum offload Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 39/41] net/dpaa: add packet dump for debugging Shreyansh Jain
                         ` (5 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_rxtx.c      | 159 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      |   9 +++
 3 files changed, 169 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 23626c0..0e7956c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -8,6 +8,7 @@ Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
+Scattered Rx         = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 0f43bb4..8133a89 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -276,18 +276,82 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
 	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
 }
 
+struct rte_mbuf *
+dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
+{
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+	struct qm_sg_entry *sgt, *sg_temp;
+	void *vaddr, *sg_vaddr;
+	int i = 0;
+	uint8_t fd_offset = fd->offset;
+
+	DPAA_RX_LOG(DEBUG, "Received an SG frame");
+
+	vaddr = rte_dpaa_mem_ptov(qm_fd_addr(fd));
+	if (!vaddr) {
+		DPAA_PMD_ERR("unable to convert physical address");
+		return NULL;
+	}
+	sgt = vaddr + fd_offset;
+	sg_temp = &sgt[i++];
+	hw_sg_to_cpu(sg_temp);
+	temp = (struct rte_mbuf *)((char *)vaddr - bp_info->meta_data_size);
+	sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+
+	first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						bp_info->meta_data_size);
+	first_seg->data_off = sg_temp->offset;
+	first_seg->data_len = sg_temp->length;
+	first_seg->pkt_len = sg_temp->length;
+	rte_mbuf_refcnt_set(first_seg, 1);
+
+	first_seg->port = ifid;
+	first_seg->nb_segs = 1;
+	first_seg->ol_flags = 0;
+	prev_seg = first_seg;
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		hw_sg_to_cpu(sg_temp);
+		sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+		cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						      bp_info->meta_data_size);
+		cur_seg->data_off = sg_temp->offset;
+		cur_seg->data_len = sg_temp->length;
+		first_seg->pkt_len += sg_temp->length;
+		first_seg->nb_segs += 1;
+		rte_mbuf_refcnt_set(cur_seg, 1);
+		prev_seg->next = cur_seg;
+		if (sg_temp->final) {
+			cur_seg->next = NULL;
+			break;
+		}
+		prev_seg = cur_seg;
+	}
+
+	dpaa_eth_packet_info(first_seg, (uint64_t)vaddr);
+	rte_pktmbuf_free_seg(temp);
+
+	return first_seg;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
 	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
 	struct rte_mbuf *mbuf;
 	void *ptr;
+	uint8_t format =
+		(fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
 	uint16_t offset =
 		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
 	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
 
 	DPAA_RX_LOG(DEBUG, " FD--->MBUF");
 
+	if (unlikely(format == qm_fd_sg))
+		return dpaa_eth_sg_to_mbuf(fd, ifid);
+
 	/* Ignoring case when format != qm_fd_contig */
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible in case
@@ -390,6 +454,95 @@ static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
 	return dpaa_mbuf;
 }
 
+int
+dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+		struct qm_fd *fd,
+		uint32_t bpid)
+{
+	struct rte_mbuf *cur_seg = mbuf, *prev_seg = NULL;
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(bpid);
+	struct rte_mbuf *temp, *mi;
+	struct qm_sg_entry *sg_temp, *sgt;
+	int i = 0;
+
+	DPAA_TX_LOG(DEBUG, "Creating SG FD to transmit");
+
+	temp = rte_pktmbuf_alloc(bp_info->mp);
+	if (!temp) {
+		DPAA_PMD_ERR("Failure in allocation of mbuf");
+		return -1;
+	}
+	if (temp->buf_len < ((mbuf->nb_segs * sizeof(struct qm_sg_entry))
+				+ temp->data_off)) {
+		DPAA_PMD_ERR("Insufficient space in mbuf for SG entries");
+		return -1;
+	}
+
+	fd->cmd = 0;
+	fd->opaque_addr = 0;
+
+	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+		if (temp->data_off < DEFAULT_TX_ICEOF
+			+ sizeof(struct dpaa_eth_parse_results_t))
+			temp->data_off = DEFAULT_TX_ICEOF
+				+ sizeof(struct dpaa_eth_parse_results_t);
+		dcbz_64(temp->buf_addr);
+		dpaa_checksum_offload(mbuf, fd, temp->buf_addr);
+	}
+
+	sgt = temp->buf_addr + temp->data_off;
+	fd->format = QM_FD_SG;
+	fd->addr = temp->buf_physaddr;
+	fd->offset = temp->data_off;
+	fd->bpid = bpid;
+	fd->length20 = mbuf->pkt_len;
+
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		sg_temp->opaque = 0;
+		sg_temp->val = 0;
+		sg_temp->addr = cur_seg->buf_physaddr;
+		sg_temp->offset = cur_seg->data_off;
+		sg_temp->length = cur_seg->data_len;
+		if (RTE_MBUF_DIRECT(cur_seg)) {
+			if (rte_mbuf_refcnt_read(cur_seg) > 1) {
+				/*If refcnt > 1, invalid bpid is set to ensure
+				 * buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+				rte_mbuf_refcnt_update(cur_seg, -1);
+			} else {
+				sg_temp->bpid =
+					DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
+			}
+			cur_seg = cur_seg->next;
+		} else {
+			/* Get owner MBUF from indirect buffer */
+			mi = rte_mbuf_from_indirect(cur_seg);
+			if (rte_mbuf_refcnt_read(mi) > 1) {
+				/*If refcnt > 1, invalid bpid is set to ensure
+				 * owner buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+			} else {
+				sg_temp->bpid = DPAA_MEMPOOL_TO_BPID(mi->pool);
+				rte_mbuf_refcnt_update(mi, 1);
+			}
+			prev_seg = cur_seg;
+			cur_seg = cur_seg->next;
+			prev_seg->next = NULL;
+			rte_pktmbuf_free(prev_seg);
+		}
+		if (cur_seg == NULL) {
+			sg_temp->final = 1;
+			cpu_to_hw_sg(sg_temp);
+			break;
+		}
+		cpu_to_hw_sg(sg_temp);
+	}
+	return 0;
+}
+
 /* Handle mbufs which are not segmented (non SG) */
 static inline void
 tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
@@ -460,6 +613,12 @@ tx_on_dpaa_pool(struct rte_mbuf *mbuf,
 	if (mbuf->nb_segs == 1) {
 		/* Case for non-segmented buffers */
 		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr);
+	} else if (mbuf->nb_segs > 1 &&
+		   mbuf->nb_segs <= DPAA_SGT_MAX_ENTRIES) {
+		if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr, bp_info->bpid)) {
+			DPAA_PMD_DEBUG("Unable to create Scatter Gather FD");
+			return 1;
+		}
 	} else {
 		DPAA_PMD_DEBUG("Number of Segments not supported");
 		return 1;
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index d10298e..2ffc4ff 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -58,6 +58,8 @@
 /* L4 Type field: TCP */
 #define DPAA_L4_PARSE_RESULT_TCP	0x20
 
+#define DPAA_SGT_MAX_ENTRIES 16 /* maximum number of entries in SG Table */
+
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/**< Maximum number of frames to be dequeued in a single Rx call */
 
@@ -285,4 +287,11 @@ uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
 			      struct rte_mbuf **bufs __rte_unused,
 			      uint16_t nb_bufs __rte_unused);
+
+struct rte_mbuf *dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid);
+
+int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+			   struct qm_fd *fd,
+			   uint32_t bpid);
+
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 39/41] net/dpaa: add packet dump for debugging
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (37 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 38/41] net/dpaa: add support for Scattered Rx Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-09 11:21       ` [PATCH v4 40/41] net/dpaa: support for firmware version get API Shreyansh Jain
                         ` (4 subsequent siblings)
  43 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc |  2 ++
 drivers/net/dpaa/dpaa_ethdev.c           | 42 ++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.c             | 26 ++++++++++++++++++++
 3 files changed, 70 insertions(+)

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index a349cec..c0f5e4a 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -51,6 +51,8 @@ CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT=n
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY=n
+CONFIG_RTE_LIBRTE_DPAA_CHECKING=n
 
 # NXP DPAA Mempool
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1deefd3..3e3e091 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -618,6 +618,39 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
 	return ret;
 }
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
+static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		DPAA_PMD_LOG(ERR, "reserve debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return -EINVAL;
+	}
+	/* "map" this Rx FQ to one of the interfaces Tx FQID */
+	DPAA_PMD_LOG(DEBUG, "creating debug fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		DPAA_PMD_LOG(ERR, "create debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+	opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		DPAA_PMD_LOG(ERR, "init debug fqid %d failed with ret: %d",
+			    fqid, ret);
+	return ret;
+}
+#endif
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -692,6 +725,15 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	}
 	dpaa_intf->nb_tx_queues = num_cores;
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+#endif
+
 	DPAA_PMD_DEBUG("All frame queues created");
 
 	/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 8133a89..3c11376 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,31 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+#if (defined RTE_LIBRTE_DPAA_DEBUG_DRIVER_DISPLAY)
+void dpaa_display_frame(const struct qm_fd *fd)
+{
+	int ii;
+	char *ptr;
+
+	printf("%s::bpid %x addr %08x%08x, format %d off %d, len %d stat %x\n",
+	       __func__, fd->bpid, fd->addr_hi, fd->addr_lo, fd->format,
+		fd->offset, fd->length20, fd->status);
+
+	ptr = (char *)rte_dpaa_mem_ptov(fd->addr);
+	ptr += fd->offset;
+	printf("%02x ", *ptr);
+	for (ii = 1; ii < fd->length20; ii++) {
+		printf("%02x ", *ptr);
+		if ((ii % 16) == 0)
+			printf("\n");
+		ptr++;
+	}
+	printf("\n");
+}
+#else
+#define dpaa_display_frame(a)
+#endif
+
 static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
 				     uint64_t prs __rte_unused)
 {
@@ -353,6 +378,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 		return dpaa_eth_sg_to_mbuf(fd, ifid);
 
 	/* Ignoring case when format != qm_fd_contig */
+	dpaa_display_frame(fd);
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible in case
 	 * of a corrupted packet
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 40/41] net/dpaa: support for firmware version get API
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (38 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 39/41] net/dpaa: add packet dump for debugging Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-18 14:57         ` Ferruh Yigit
  2017-09-09 11:21       ` [PATCH v4 41/41] net/dpaa: support for extended statistics Shreyansh Jain
                         ` (3 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 0e7956c..09b9bd9 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -18,5 +18,6 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+FW version           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3e3e091..22f56a4 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -164,6 +164,41 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int
+dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
+		     char *fw_version,
+		     size_t fw_size)
+{
+	int ret;
+	FILE *svr_file = NULL;
+	unsigned int svr_ver = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	svr_file = fopen("/sys/devices/soc0/soc_id", "r");
+	if (!svr_file) {
+		DPAA_PMD_ERR("Unable to open SoC device");
+		return -ENOTSUP; /* Not supported on this infra */
+	}
+
+	ret = fscanf(svr_file, "svr:%x", &svr_ver);
+	fclose(svr_file);
+	if (ret <= 0) {
+		DPAA_PMD_ERR("Unable to read SoC device");
+		return -ENOTSUP; /* Not supported on this infra */
+	}
+
+	ret = snprintf(fw_version, fw_size,
+		       "svr:%x-fman-v%x",
+		       svr_ver,
+		       fman_ip_rev);
+
+	ret += 1; /* add the size of '\0' */
+	if (fw_size < (uint32_t)ret)
+		return ret;
+	else
+		return 0;
+}
+
 static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 			      struct rte_eth_dev_info *dev_info)
 {
@@ -512,6 +547,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
 	.mac_addr_set		  = dpaa_dev_set_mac_addr,
 
+	.fw_version_get		  = dpaa_fw_version_get,
 };
 
 static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v4 41/41] net/dpaa: support for extended statistics
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (39 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 40/41] net/dpaa: support for firmware version get API Shreyansh Jain
@ 2017-09-09 11:21       ` Shreyansh Jain
  2017-09-18 14:57         ` Ferruh Yigit
  2017-09-21 22:09       ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Thomas Monjalon
                         ` (2 subsequent siblings)
  43 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-09 11:21 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 143 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h    |  40 +++++++++++
 3 files changed, 184 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 09b9bd9..24cfd85 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -18,6 +18,7 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+Extended stats       = Y
 FW version           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 22f56a4..5b5ec9c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -75,6 +75,40 @@
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
 
+struct rte_dpaa_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	uint32_t offset;
+};
+
+static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
+	{"rx_align_err",
+		offsetof(struct dpaa_if_stats, raln)},
+	{"rx_valid_pause",
+		offsetof(struct dpaa_if_stats, rxpf)},
+	{"rx_fcs_err",
+		offsetof(struct dpaa_if_stats, rfcs)},
+	{"rx_vlan_frame",
+		offsetof(struct dpaa_if_stats, rvlan)},
+	{"rx_frame_err",
+		offsetof(struct dpaa_if_stats, rerr)},
+	{"rx_drop_err",
+		offsetof(struct dpaa_if_stats, rdrp)},
+	{"rx_undersized",
+		offsetof(struct dpaa_if_stats, rund)},
+	{"rx_oversize_err",
+		offsetof(struct dpaa_if_stats, rovr)},
+	{"rx_fragment_pkt",
+		offsetof(struct dpaa_if_stats, rfrg)},
+	{"tx_valid_pause",
+		offsetof(struct dpaa_if_stats, txpf)},
+	{"tx_fcs_err",
+		offsetof(struct dpaa_if_stats, terr)},
+	{"tx_vlan_frame",
+		offsetof(struct dpaa_if_stats, tvlan)},
+	{"rx_undersized",
+		offsetof(struct dpaa_if_stats, tund)},
+};
+
 static int
 dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
@@ -268,6 +302,110 @@ static void dpaa_eth_stats_reset(struct rte_eth_dev *dev)
 	fman_if_stats_reset(dpaa_intf->fif);
 }
 
+static int
+dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+		    unsigned int n)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
+	uint64_t values[sizeof(struct dpaa_if_stats) / 8];
+
+	if (xstats == NULL)
+		return 0;
+
+	if (n < num)
+		return num;
+
+	fman_if_stats_get_all(dpaa_intf->fif, values,
+			      sizeof(struct dpaa_if_stats) / 8);
+
+	for (i = 0; i < num; i++) {
+		xstats[i].id = i;
+		xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
+	}
+	return i;
+}
+
+static int
+dpaa_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+		      struct rte_eth_xstat_name *xstats_names,
+		      __rte_unused unsigned int limit)
+{
+	unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+
+	if (xstats_names != NULL)
+		for (i = 0; i < stat_cnt; i++)
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s",
+				 dpaa_xstats_strings[i].name);
+
+	return stat_cnt;
+}
+
+static int
+dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
+		      uint64_t *values, unsigned int n)
+{
+	unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+	uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
+
+	if (!ids) {
+		struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+		if (n < stat_cnt)
+			return stat_cnt;
+
+		if (!values)
+			return 0;
+
+		fman_if_stats_get_all(dpaa_intf->fif, values_copy,
+				      sizeof(struct dpaa_if_stats) / 8);
+
+		for (i = 0; i < stat_cnt; i++)
+			values[i] =
+				values_copy[dpaa_xstats_strings[i].offset / 8];
+
+		return stat_cnt;
+	}
+
+	dpaa_xstats_get_by_id(dev, NULL, values_copy, stat_cnt);
+
+	for (i = 0; i < n; i++) {
+		if (ids[i] >= stat_cnt) {
+			DPAA_PMD_ERR("id value isn't valid");
+			return -1;
+		}
+		values[i] = values_copy[ids[i]];
+	}
+	return n;
+}
+
+static int
+dpaa_xstats_get_names_by_id(
+	struct rte_eth_dev *dev,
+	struct rte_eth_xstat_name *xstats_names,
+	const uint64_t *ids,
+	unsigned int limit)
+{
+	unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+	struct rte_eth_xstat_name xstats_names_copy[stat_cnt];
+
+	if (!ids)
+		return dpaa_xstats_get_names(dev, xstats_names, limit);
+
+	dpaa_xstats_get_names(dev, xstats_names_copy, limit);
+
+	for (i = 0; i < limit; i++) {
+		if (ids[i] >= stat_cnt) {
+			DPAA_PMD_ERR("id value isn't valid");
+			return -1;
+		}
+		strcpy(xstats_names[i].name, xstats_names_copy[ids[i]].name);
+	}
+	return limit;
+}
+
 static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -535,6 +673,11 @@ static struct eth_dev_ops dpaa_devops = {
 
 	.link_update		  = dpaa_eth_link_update,
 	.stats_get		  = dpaa_eth_stats_get,
+	.xstats_get		  = dpaa_dev_xstats_get,
+	.xstats_get_by_id	  = dpaa_xstats_get_by_id,
+	.xstats_get_names_by_id	  = dpaa_xstats_get_names_by_id,
+	.xstats_get_names	  = dpaa_xstats_get_names,
+	.xstats_reset		  = dpaa_eth_stats_reset,
 	.stats_reset		  = dpaa_eth_stats_reset,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index e1e062e..3f06d63 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -134,4 +134,44 @@ struct dpaa_if {
 	struct rte_eth_fc_conf *fc_conf;
 };
 
+struct dpaa_if_stats {
+	/* Rx Statistics Counter */
+	uint64_t reoct;		/**<Rx Eth Octets Counter */
+	uint64_t roct;		/**<Rx Octet Counters */
+	uint64_t raln;		/**<Rx Alignment Error Counter */
+	uint64_t rxpf;		/**<Rx valid Pause Frame */
+	uint64_t rfrm;		/**<Rx Frame counter */
+	uint64_t rfcs;		/**<Rx frame check seq error */
+	uint64_t rvlan;		/**<Rx Vlan Frame Counter */
+	uint64_t rerr;		/**<Rx Frame error */
+	uint64_t ruca;		/**<Rx Unicast */
+	uint64_t rmca;		/**<Rx Multicast */
+	uint64_t rbca;		/**<Rx Broadcast */
+	uint64_t rdrp;		/**<Rx Dropped Packet */
+	uint64_t rpkt;		/**<Rx packet */
+	uint64_t rund;		/**<Rx undersized packets */
+	uint32_t res_x[14];
+	uint64_t rovr;		/**<Rx oversized but good */
+	uint64_t rjbr;		/**<Rx oversized with bad csum */
+	uint64_t rfrg;		/**<Rx fragment Packet */
+	uint64_t rcnp;		/**<Rx control packets (0x8808) */
+	uint64_t rdrntp;	/**<Rx dropped due to FIFO overflow */
+	uint32_t res01d0[12];
+	/* Tx Statistics Counter */
+	uint64_t teoct;		/**<Tx eth octets */
+	uint64_t toct;		/**<Tx Octets */
+	uint32_t res0210[2];
+	uint64_t txpf;		/**<Tx valid pause frame */
+	uint64_t tfrm;		/**<Tx frame counter */
+	uint64_t tfcs;		/**<Tx FCS error */
+	uint64_t tvlan;		/**<Tx Vlan Frame */
+	uint64_t terr;		/**<Tx frame error */
+	uint64_t tuca;		/**<Tx Unicast */
+	uint64_t tmca;		/**<Tx Multicast */
+	uint64_t tbca;		/**<Tx Broadcast */
+	uint32_t res0258[2];
+	uint64_t tpkt;		/**<Tx Packet */
+	uint64_t tund;		/**<Tx Undersized */
+};
+
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-09-09 11:20       ` [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
@ 2017-09-18 14:47         ` Ferruh Yigit
  2017-09-19 13:14           ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:47 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>

<...>

> diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
> new file mode 100644
> index 0000000..d97a009
> --- /dev/null
> +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
> @@ -0,0 +1,7 @@
> +DPDK_17.11 {
> +	global:
> +
> +	rte_dpaa_driver_register;
> +	rte_dpaa_driver_unregister;

"local *;" ?

<...>

> +struct rte_dpaa_device {
> +	TAILQ_ENTRY(rte_dpaa_device) next;
> +	struct rte_device device;
> +	union {
> +		struct rte_eth_dev *eth_dev;
> +		struct rte_cryptodev *crypto_dev;
> +	};

The bus struct should be independent of functionality; this has been done
in PCI. Can the same thing be done for the dpaa bus too?
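
Something along these lines, perhaps (untested sketch; container_of is
available from rte_common.h, and the macro name here is illustrative):

struct rte_dpaa_device {
	TAILQ_ENTRY(rte_dpaa_device) next;
	struct rte_device device;	/* generic device, no eth/crypto union */
};

/* A class driver (eth/crypto) can then recover the bus device: */
#define RTE_DEV_TO_DPAA(ptr) \
	container_of(ptr, struct rte_dpaa_device, device)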

<...>

> + * @return
> + *	0 in case of success, error otherwise
> + */
> +int rte_dpaa_portal_init(void *arg);

Definition is not in this patch.

> +
> +/**
> + * Cleanup a DPAA Portal
> + */
> +void dpaa_portal_finish(void *arg);

Definition is not in this patch.

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros
  2017-09-09 11:20       ` [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros Shreyansh Jain
@ 2017-09-18 14:49         ` Ferruh Yigit
  2017-09-19 13:18           ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:49 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
> From: Hemant Agrawal <hemant.agrawal@nxp.com>
> 
> Linked list, bit operations and compatibility macros.
> 
> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>

<...>

> diff --git a/drivers/bus/dpaa/include/compat.h b/drivers/bus/dpaa/include/compat.h
> new file mode 100644
> index 0000000..a1fd53e
> --- /dev/null
> +++ b/drivers/bus/dpaa/include/compat.h
> @@ -0,0 +1,389 @@
> +/*-
> + * This file is provided under a dual BSD/GPLv2 license. When using or
> + * redistributing this file, you may do so under either license.

The content of this file looks like it comes from Linux. Is the file
derived from existing GPL-licensed code? If so, is it allowed to add the
BSD license to it?

> + *
> + *   BSD LICENSE
> + *
> + * Copyright 2011 Freescale Semiconductor, Inc.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions are met:
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in the
> + * documentation and/or other materials provided with the distribution.
> + * * Neither the name of the above-listed copyright holders nor the
> + * names of any contributors may be used to endorse or promote products
> + * derived from this software without specific prior written permission.
> + *
> + *   GPL LICENSE SUMMARY
> + *
> + * ALTERNATIVELY, this software may be distributed under the terms of the
> + * GNU General Public License ("GPL") as published by the Free Software
> + * Foundation, either version 2 of that License or (at your option) any
> + * later version.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
> + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
> + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
> + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
> + * POSSIBILITY OF SUCH DAMAGE.
> + */

<...>

> +#ifdef ARRAY_SIZE
> +#undef ARRAY_SIZE
> +#endif
> +#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

Can re-use RTE_DIM
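
e.g. (minimal sketch; RTE_DIM comes from rte_common.h and expands to the
same sizeof expression):

#include <rte_common.h>

static const uint32_t fq_ids[] = { 0x100, 0x101, 0x102 };
static const unsigned int nb_fq_ids = RTE_DIM(fq_ids);	/* 3 */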

<...>

> +#define ASSERT(x) do {\
> +	if (!(x)) \
> +		rte_panic("DPAA: x"); \
> +} while (0)
> +#define DPAA_BUG_ON(x) ASSERT(!(x))

Can use RTE_ASSERT

<...>

> +
> +#ifndef __DPAA_LIST_H
> +#define __DPAA_LIST_H
> +
> +/****************/
> +/* Linked-lists */
> +/****************/

Do we need to maintain a linked-list implementation? Why not just use the
sys/queue.h ones, as done in many places in DPDK? (See the sketch below
the quoted struct.)

> +
> +struct list_head {
> +	struct list_head *prev;
> +	struct list_head *next;
> +};
> +
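
e.g. the rough equivalent with sys/queue.h (hedged sketch; the type names
are illustrative):

#include <sys/queue.h>

struct dpaa_node {
	int value;			/* payload */
	TAILQ_ENTRY(dpaa_node) next;	/* linkage, replaces list_head */
};
TAILQ_HEAD(dpaa_node_list, dpaa_node);	/* generates the list head type */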

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 04/41] bus/dpaa: add OF parser for device scanning
  2017-09-09 11:20       ` [PATCH v4 04/41] bus/dpaa: add OF parser for device scanning Shreyansh Jain
@ 2017-09-18 14:49         ` Ferruh Yigit
  2017-09-19 13:37           ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:49 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
> This layer is used by Bus driver's scan function. Devices are parsed
> using OF parser and added to DPAA device list.

So this is a device tree parser in DPDK. Do we really want this? As long
as DPDK does not target bare metal, why not get the device information
from Linux, as done in other cases?

> 
> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> ---
>  drivers/bus/dpaa/Makefile       |   7 +
>  drivers/bus/dpaa/base/fman/of.c | 576 ++++++++++++++++++++++++++++++++++++++++
>  drivers/bus/dpaa/include/of.h   | 190 +++++++++++++
>  3 files changed, 773 insertions(+)
>  create mode 100644 drivers/bus/dpaa/base/fman/of.c
>  create mode 100644 drivers/bus/dpaa/include/of.h
> 

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 05/41] bus/dpaa: introducing FMan configurations
  2017-09-09 11:20       ` [PATCH v4 05/41] bus/dpaa: introducing FMan configurations Shreyansh Jain
@ 2017-09-18 14:50         ` Ferruh Yigit
  2017-09-18 16:15           ` Thomas Monjalon
  2017-09-19 13:43           ` Shreyansh Jain
  0 siblings, 2 replies; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:50 UTC (permalink / raw)
  To: Shreyansh Jain, dev, Thomas Monjalon; +Cc: hemant.agrawal

On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
> FMan, or Frame Manager, inspects traffic and splits it into queues on ingress.
> It is also responsible for directing traffic on queues on egress.
> 
> This patch introduces FMan configuration interfaces. This layer is
> used by the Bus driver for configuring the hardware block.
> 
> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> --- /dev/null
> +++ b/drivers/bus/dpaa/base/fman/fman.c
> @@ -0,0 +1,611 @@
> +/*-
> + * This file is provided under a dual BSD/GPLv2 license. When using or
> + * redistributing this file, you may do so under either license.

Another set of dual licensed files.

Shreyansh, Hemant, Thomas,

Who should approve/check licensing?

> + *
> + *   BSD LICENSE
> + *
> + * Copyright 2010-2016 Freescale Semiconductor Inc.
> + * Copyright 2017 NXP.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions are met:
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in the
> + * documentation and/or other materials provided with the distribution.
> + * * Neither the name of the above-listed copyright holders nor the
> + * names of any contributors may be used to endorse or promote products
> + * derived from this software without specific prior written permission.
> + *
> + *   GPL LICENSE SUMMARY
> + *
> + * ALTERNATIVELY, this software may be distributed under the terms of the
> + * GNU General Public License ("GPL") as published by the Free Software
> + * Foundation, either version 2 of that License or (at your option) any
> + * later version.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
> + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
> + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
> + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
> + * POSSIBILITY OF SUCH DAMAGE.
> + */
> +

<...>

> +		if (!char_prop) {
> +			printf("memac: unknown MII type assuming 1G\n");

Please prefer logging macros over direct printf.

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 07/41] bus/dpaa: enable DPAA IOCTL portal driver
  2017-09-09 11:20       ` [PATCH v4 07/41] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
@ 2017-09-18 14:51         ` Ferruh Yigit
  2017-09-19 14:17           ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:51 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
> Userspace applications interact with DPAA blocks using this IOCTL driver.
> 
> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> +static int fd = -1;
> +static pthread_mutex_t fd_init_lock = PTHREAD_MUTEX_INITIALIZER;
> +
> +static int check_fd(void)
> +{
> +	int ret;
> +
> +	if (fd >= 0)
> +		return 0;
> +	ret = pthread_mutex_lock(&fd_init_lock);

Do you need to link against the pthread library for this:
LDLIBS += -lpthread

<...>

> +/* The process device underlies process-wide user/kernel interactions, such as
> + * mapping dma_mem memory and providing accompanying ioctl()s. (This isn't used
> + * for portals, which use one UIO device each.).
> + */
> +#define PROCESS_PATH		"/dev/fsl-usdpaa"

Who creates this file, and who is responsible for responding to ioctl()
calls? There must be a kernel module, right?

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 13/41] bus/dpaa: add support for FMAN frame queue lookup
  2017-09-09 11:21       ` [PATCH v4 13/41] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
@ 2017-09-18 14:51         ` Ferruh Yigit
  2017-09-28 11:47           ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:51 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
> Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

>  
> +#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
> +#error "_ARM64 requires _FSL_QMAN_FQ_LOOKUP"
> +#endif

This PMD is enabled with the newly added config
"defconfig_arm64-armv8a-linuxapp-gcc", which is 64-bit. So this means
CONFIG_FSL_QMAN_FQ_LOOKUP is always defined for the bus.

Does it make sense to keep the above check, but for the rest of the code
assume CONFIG_FSL_QMAN_FQ_LOOKUP is always set and remove the #ifdefs, to
simplify the code?

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 11/41] bus/dpaa: add QMan driver core routines
  2017-09-09 11:21       ` [PATCH v4 11/41] bus/dpaa: add QMan driver core routines Shreyansh Jain
@ 2017-09-18 14:53         ` Ferruh Yigit
  2017-09-19 14:18           ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:53 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
> Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>
> +#ifdef RTE_LIBRTE_DPAA_CHECKING

This is not defined anywhere; it looks like this will come from the config
file in later patches. The config file update can be moved to this patch.

> +	eqcr->busy = 0;
> +	eqcr->pmode = pmode;
> +#endif

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 17/41] doc: add NXP DPAA PMD documentation
  2017-09-09 11:21       ` [PATCH v4 17/41] doc: add NXP DPAA PMD documentation Shreyansh Jain
@ 2017-09-18 14:53         ` Ferruh Yigit
  2017-09-19 14:25           ` Shreyansh Jain
  2017-09-18 18:33         ` Mcnamara, John
  1 sibling, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:53 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> +Config File Options
> +~~~~~~~~~~~~~~~~~~~
> +
> +The following options can be modified in the ``config`` file.
> +Please note that enabling debugging options may affect system performance.
> +
> +- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``n``)
> +
> +  By default it is enabled only for defconfig_arm64-dpaa-* config.
> +  Toggle compilation of the ``librte_bus_dpaa`` driver.
> +
> +- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``n``)
> +
> +  By default it is enabled only for defconfig_arm64-dpaa-* config.
> +  Toggle compilation of the ``librte_pmd_dpaa`` driver.
> +
> +- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)
> +
> +  Toggle display of generic debugging messages
> +
> +- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT`` (default ``n``)
> +
> +  Toggle display of initialization related messages.
> +
> +- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (default ``dpaa``)

There are a few new config options missing from the documentation.

> +
> +  This is not a DPAA specific configuration - it is a generic RTE config.
> +  For optimal performance and hardware utilization, it is expected that DPAA
> +  Mempool driver is used for mempools. For that, this configuration needs to
> +  enabled.
> +
> +Environment Variables
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +DPAA drivers use the following environment variables to configure their
> +state during application initialization:
> +
> +- ``DPAA_NUM_RX_QUEUES`` (default 1)

Why not get this value as a device arg?

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup
  2017-09-09 11:21       ` [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
@ 2017-09-18 14:55         ` Ferruh Yigit
  2017-09-21 12:59           ` Shreyansh Jain
  2017-09-18 14:55         ` Ferruh Yigit
  1 sibling, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:55 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> +	for (loop = 0; loop < num_cores; loop++) {
> +		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
> +					 fman_intf);
> +		if (ret)
> +			return ret;
> +		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
> +	}
> +	dpaa_intf->nb_tx_queues = num_cores;

Is the number of tx_queues always the same as the core count?

> +
> +	DPAA_PMD_DEBUG("All frame queues created");
> +
> +	/* reset bpool list, initialize bpool dynamically */
> +	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
> +		list_del(&bp->node);
> +		rte_free(bp);

Why free them during initialization?

> +	}
> +
> +	/* Populate ethdev structure */
>  	eth_dev->dev_ops = &dpaa_devops;
> +	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
> +	eth_dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
> +
> +	/* Allocate memory for storing MAC addresses */
> +	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
> +		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
> +	if (eth_dev->data->mac_addrs == NULL) {
> +		DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
> +						"store MAC addresses",
> +				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);

Free dpaa_intf->rx_queues and tx_queues here?

> +		return -ENOMEM;
> +	}
> +
> +	/* copy the primary mac address */
> +	memcpy(eth_dev->data->mac_addrs[0].addr_bytes,
> +		fman_intf->mac_addr.addr_bytes,
> +		ETHER_ADDR_LEN);

Can use ether_addr_copy() instead.

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup
  2017-09-09 11:21       ` [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
  2017-09-18 14:55         ` Ferruh Yigit
@ 2017-09-18 14:55         ` Ferruh Yigit
  2017-09-21 13:00           ` Shreyansh Jain
  1 sibling, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:55 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> +
> +/* Handle all mbufs on an external pool (non-dpaa2) */

Minor typo, but is the intention dpaa?

> +static inline uint16_t
> +tx_on_external_pool(struct qman_fq *txq, struct rte_mbuf *mbuf,
> +		    struct qm_fd *fd_arr)
> +{

<...>

> @@ -185,6 +185,7 @@ endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
>  
>  ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_mempool_dpaa

This should go into the patch that introduces the mempool.

>  endif
>  
>  endif # !CONFIG_RTE_BUILD_SHARED_LIBS
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 28/41] net/dpaa: add support for link status update
  2017-09-09 11:21       ` [PATCH v4 28/41] net/dpaa: add support for link status update Shreyansh Jain
@ 2017-09-18 14:56         ` Ferruh Yigit
  2017-09-21 13:09           ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:56 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

<...>

> +static int dpaa_eth_link_update(struct rte_eth_dev *dev,
> +				int wait_to_complete __rte_unused)
> +{
> +	struct dpaa_if *dpaa_intf = dev->data->dev_private;
> +	struct rte_eth_link *link = &dev->data->dev_link;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	if (dpaa_intf->fif->mac_type == fman_mac_1g)
> +		link->link_speed = 1000;
> +	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
> +		link->link_speed = 10000;
> +	else
> +		DPAA_PMD_ERR("invalid link_speed: %s, %d",
> +			     dpaa_intf->name, dpaa_intf->fif->mac_type);
> +
> +	link->link_status = dpaa_intf->valid;
> +	link->link_duplex = ETH_LINK_FULL_DUPLEX;
> +	link->link_autoneg = ETH_LINK_AUTONEG;

Shouldn't this function retrieve the link information from the hardware?

> +	return 0;
> +}
> +
>  static
>  int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  			    uint16_t nb_desc __rte_unused,
> @@ -216,6 +238,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
>  	PMD_INIT_FUNC_TRACE();
>  }
>  
> +static int dpaa_link_down(struct rte_eth_dev *dev)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +
> +	dpaa_eth_dev_stop(dev);

Drivers tend to do the reverse: set the link down on device stop. Just
double-checking that stop() is intended for link down here.

> +	return 0;
> +}
> +
> +static int dpaa_link_up(struct rte_eth_dev *dev)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +
> +	dpaa_eth_dev_start(dev);
> +	return 0;
> +}

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 36/41] net/dpaa: add support for packet type parsing
  2017-09-09 11:21       ` [PATCH v4 36/41] net/dpaa: add support for packet type parsing Shreyansh Jain
@ 2017-09-18 14:56         ` Ferruh Yigit
  2017-09-21 13:16           ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:56 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> Add support for parsing the packet type and L2/L3 checksum offload
> capability information.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> ---
>  doc/guides/nics/features/dpaa.ini |   2 +
>  drivers/net/dpaa/dpaa_ethdev.c    |  27 +++++
>  drivers/net/dpaa/dpaa_rxtx.c      | 116 +++++++++++++++++++++
>  drivers/net/dpaa/dpaa_rxtx.h      | 206 ++++++++++++++++++++++++++++++++++++++
>  4 files changed, 351 insertions(+)
> 
> diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
> index 1ba6b11..2ef1b56 100644
> --- a/doc/guides/nics/features/dpaa.ini
> +++ b/doc/guides/nics/features/dpaa.ini
> @@ -11,7 +11,9 @@ MTU update           = Y
>  Promiscuous mode     = Y
>  Allmulticast mode    = Y
>  Unicast MAC filter   = Y
> +RSS hash             = Y

Not sure about claiming this support yet. Is the mbuf rss hash field set
in the Rx path, or are packets distributed to multiple queues using RSS
hash functions at this point?

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 40/41] net/dpaa: support for firmware version get API
  2017-09-09 11:21       ` [PATCH v4 40/41] net/dpaa: support for firmware version get API Shreyansh Jain
@ 2017-09-18 14:57         ` Ferruh Yigit
  2017-09-21 13:18           ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:57 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> From: Hemant Agrawal <hemant.agrawal@nxp.com>
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>

<...>

> +static int
> +dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
> +		     char *fw_version,
> +		     size_t fw_size)
> +{
> +	int ret;
> +	FILE *svr_file = NULL;
> +	unsigned int svr_ver = 0;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	svr_file = fopen("/sys/devices/soc0/soc_id", "r");

Is this sysfs path fixed, or can it be enumerated as soc1 etc. on some
systems?
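
If it can vary, a more robust lookup could scan for the node instead of
hard-coding soc0, e.g. (untested sketch using POSIX glob(3)):

#include <glob.h>
#include <stdio.h>

/* Open the first soc_id node found under /sys/devices. */
static FILE *open_soc_id(void)
{
	glob_t g;
	FILE *f = NULL;

	if (glob("/sys/devices/soc*/soc_id", 0, NULL, &g) == 0 &&
	    g.gl_pathc > 0)
		f = fopen(g.gl_pathv[0], "r");
	globfree(&g);
	return f;
}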

> +	if (!svr_file) {
> +		DPAA_PMD_ERR("Unable to open SoC device");
> +		return -ENOTSUP; /* Not supported on this infra */
> +	}
> +
> +	ret = fscanf(svr_file, "svr:%x", &svr_ver);
> +	if (ret <= 0) {
> +		DPAA_PMD_ERR("Unable to read SoC device");
> +		return -ENOTSUP; /* Not supported on this infra */
> +	}
> +
> +	ret = snprintf(fw_version, fw_size,
> +		       "svr:%x-fman-v%x",
> +		       svr_ver,
> +		       fman_ip_rev);
> +
> +	ret += 1; /* add the size of '\0' */
> +	if (fw_size < (uint32_t)ret)
> +		return ret;
> +	else
> +		return 0;
> +}

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 41/41] net/dpaa: support for extended statistics
  2017-09-09 11:21       ` [PATCH v4 41/41] net/dpaa: support for extended statistics Shreyansh Jain
@ 2017-09-18 14:57         ` Ferruh Yigit
  2017-09-21 13:26           ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-18 14:57 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> From: Hemant Agrawal <hemant.agrawal@nxp.com>
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>

<...>

> +static int
> +dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
> +		    unsigned int n)
> +{
> +	struct dpaa_if *dpaa_intf = dev->data->dev_private;
> +	unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
> +	uint64_t values[sizeof(struct dpaa_if_stats) / 8];
> +
> +	if (xstats == NULL)
> +		return 0;

This is a little unclear from the API definition, but I guess when xstats
is NULL it should return the number of available stats, "num" in this
case. I believe there are PMDs implementing both; can you please double
check? A sketch of what I mean follows the quoted code below.

> +
> +	if (n < num)
> +		return num;
> +
> +	fman_if_stats_get_all(dpaa_intf->fif, values,
> +			      sizeof(struct dpaa_if_stats) / 8);
> +
> +	for (i = 0; i < num; i++) {
> +		xstats[i].id = i;
> +		xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
> +	}
> +	return i;
> +}
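
For reference, a hedged sketch of the convention I have in mind (the
function name is illustrative; it reuses dpaa_xstats_strings from this
patch):

static int
example_xstats_get(struct rte_eth_dev *dev __rte_unused,
		   struct rte_eth_xstat *xstats, unsigned int n)
{
	unsigned int num = RTE_DIM(dpaa_xstats_strings);

	if (xstats == NULL || n < num)
		return num;	/* report how many stats are available */

	/* ... otherwise fill xstats[0..num-1] as in the patch ... */
	return num;
}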

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 05/41] bus/dpaa: introducing FMan configurations
  2017-09-18 14:50         ` Ferruh Yigit
@ 2017-09-18 16:15           ` Thomas Monjalon
  2017-09-18 17:12             ` Hemant Agrawal
  2017-09-19 13:43           ` Shreyansh Jain
  1 sibling, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-18 16:15 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Shreyansh Jain, dev, hemant.agrawal, techboard

18/09/2017 16:50, Ferruh Yigit:
> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
> > FMan, or Frame Manager, inspects traffic and splits it into queues on ingress.
> > It is also responsible for directing traffic on queues on egress.
> > 
> > This patch introduces FMan configuration interfaces. This layer is
> > used by the Bus driver for configuring the hardware block.
> > 
> > Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
> > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
> > --- /dev/null
> > +++ b/drivers/bus/dpaa/base/fman/fman.c
> > @@ -0,0 +1,611 @@
> > +/*-
> > + * This file is provided under a dual BSD/GPLv2 license. When using or
> > + * redistributing this file, you may do so under either license.
> 
> Another set of dual licensed files.
> 
> Shreyansh, Hemant, Thomas,
> 
> Who should approve/check licensing?

Hemant is currently handling such issues with the Linux Foundation
and the Governing Board.

We already have some dual licensed files.
Some of them are explicitly referenced in the DPDK charter:
	http://dpdk.org/about/charter#ip
I think we must ask the Governing Board to allow dual BSD/GPL
for any file.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 05/41] bus/dpaa: introducing FMan configurations
  2017-09-18 16:15           ` Thomas Monjalon
@ 2017-09-18 17:12             ` Hemant Agrawal
  0 siblings, 0 replies; 367+ messages in thread
From: Hemant Agrawal @ 2017-09-18 17:12 UTC (permalink / raw)
  To: Thomas Monjalon, Ferruh Yigit; +Cc: Shreyansh Jain, dev, techboard

On 9/18/2017 9:45 PM, Thomas Monjalon wrote:
> 18/09/2017 16:50, Ferruh Yigit:
>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>> FMan, or Frame Manager, inspects traffic and splits it into queues on
>>> ingress. It is also responsible for directing traffic on queues on
>>> egress.
>>>
>>> This patch introduces FMan configuration interfaces. This layer is
>>> used by the Bus driver for configuring the hardware block.
>>>
>>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>
>> <...>
>>
>>> --- /dev/null
>>> +++ b/drivers/bus/dpaa/base/fman/fman.c
>>> @@ -0,0 +1,611 @@
>>> +/*-
>>> + * This file is provided under a dual BSD/GPLv2 license. When using or
>>> + * redistributing this file, you may do so under either license.
>>
>> Another set of dual licensed files.
>>
>> Shreyansh, Hemant, Thomas,
>>
>> Who should approve/check licensing?
>
> Hemant is currently handling such issues with the Linux Foundation
> and the Governing Board.
>
> We already have some dual licensed files.
> Some of them are explicitly referenced in the DPDK charter:
> 	http://dpdk.org/about/charter#ip
> I think we must ask the Governing Board to allow dual BSD/GPL
> for any file.

I am working with the GB to seek guidance on non-standard license code in
DPDK.

Standard dual-licensed code (GPL + BSD) is generally legally OK. I am
checking whether these files should be explicitly listed or not.
Many of the DPDK files in user space (which also exist in the kernel) are
dual licensed.

As an outcome of these discussions, we are hoping that the techboard will
publish the Licensing Guidelines.

>
>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 17/41] doc: add NXP DPAA PMD documentation
  2017-09-09 11:21       ` [PATCH v4 17/41] doc: add NXP DPAA PMD documentation Shreyansh Jain
  2017-09-18 14:53         ` Ferruh Yigit
@ 2017-09-18 18:33         ` Mcnamara, John
  1 sibling, 0 replies; 367+ messages in thread
From: Mcnamara, John @ 2017-09-18 18:33 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: Yigit, Ferruh, hemant.agrawal



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shreyansh Jain
> Sent: Saturday, September 9, 2017 12:21 PM
> To: dev@dpdk.org
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; hemant.agrawal@nxp.com
> Subject: [dpdk-dev] [PATCH v4 17/41] doc: add NXP DPAA PMD documentation
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

From a documentation point of view:

Acked-by: John McNamara <john.mcnamara@intel.com>




^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-09-18 14:47         ` Ferruh Yigit
@ 2017-09-19 13:14           ` Shreyansh Jain
  2017-09-19 13:33             ` Ferruh Yigit
  2017-09-25 14:32             ` Shreyansh Jain
  0 siblings, 2 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-19 13:14 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

Hello Ferruh,

On Monday 18 September 2017 08:17 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> 
> <...>
> 
>> diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>> new file mode 100644
>> index 0000000..d97a009
>> --- /dev/null
>> +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>> @@ -0,0 +1,7 @@
>> +DPDK_17.11 {
>> +	global:
>> +
>> +	rte_dpaa_driver_register;
>> +	rte_dpaa_driver_unregister;
> 
> "local *;" ?

Agreed. I will change this.
Currently, the rte_dpaa_driver_* functions are only being used locally
within bus/dpaa.
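
For reference, the "local: *;" addition being suggested would make the
version map look roughly like this (a sketch; the final symbol list may
differ):

	DPDK_17.11 {
		global:

		rte_dpaa_driver_register;
		rte_dpaa_driver_unregister;

		local: *;
	};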

> 
> <...>
> 
>> +struct rte_dpaa_device {
>> +	TAILQ_ENTRY(rte_dpaa_device) next;
>> +	struct rte_device device;
>> +	union {
>> +		struct rte_eth_dev *eth_dev;
>> +		struct rte_cryptodev *crypto_dev;
>> +	};
> 
> Bus struct should be independent of functionality; this has been done in
> PCI, can the same thing be done for the dpaa bus too?

Sorry, I didn't get your point. This is the rte_dpaa_bus structure:

struct rte_dpaa_bus {
         struct rte_bus bus;
         struct rte_dpaa_device_list device_list;
         struct rte_dpaa_driver_list driver_list;
         int device_count;
};

If you are referring to unlinking eth/crypto functionality from 
rte_dpaa_device - that is something which needs investigation. I have 
seen patches on PCI from Gaetan. Can that be an incremental change over 
this?
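
For illustration, the PCI-style decoupling referred to here would drop the
eth/crypto union and let each driver recover its bus device from the
embedded generic rte_device instead; a rough sketch (the macro name is
illustrative, not from the patch):

	/* bus-agnostic device: no ethdev/cryptodev knowledge in the bus */
	struct rte_dpaa_device {
		TAILQ_ENTRY(rte_dpaa_device) next;
		struct rte_device device;	/* generic device, embedded */
		/* ... bus-specific identification only ... */
	};

	/* a PMD recovers its bus device from the generic rte_device */
	#define RTE_DEV_TO_DPAA(ptr) \
		container_of(ptr, struct rte_dpaa_device, device)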

> 
> <...>
> 
>> + * @return
>> + *	0 in case of success, error otherwise
>> + */
>> +int rte_dpaa_portal_init(void *arg);
> 
> Definition is not in this patch.
> 
>> +
>> +/**
>> + * Cleanup a DPAA Portal
>> + */
>> +void dpaa_portal_finish(void *arg);
> 
> Definition is not in this patch.
> 
> <...>
> 

Yes, this is my mistake. I will fix this.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros
  2017-09-18 14:49         ` Ferruh Yigit
@ 2017-09-19 13:18           ` Shreyansh Jain
  2017-09-19 13:40             ` Ferruh Yigit
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-19 13:18 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:19 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>
>> Linked list, bit operations and compatibility macros.
>>
>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> 
> <...>
> 
>> diff --git a/drivers/bus/dpaa/include/compat.h b/drivers/bus/dpaa/include/compat.h
>> new file mode 100644
>> index 0000000..a1fd53e
>> --- /dev/null
>> +++ b/drivers/bus/dpaa/include/compat.h
>> @@ -0,0 +1,389 @@
>> +/*-
>> + * This file is provided under a dual BSD/GPLv2 license. When using or
>> + * redistributing this file, you may do so under either license.
> 
> The content of the file looks like it is for Linux; is the file coming
> from existing GPL-licensed code? If so, is it allowed to add the BSD
> license to it?
> 
>> + *
>> + *   BSD LICENSE
>> + *
>> + * Copyright 2011 Freescale Semiconductor, Inc.
>> + * All rights reserved.
>> + *
>> + * Redistribution and use in source and binary forms, with or without
>> + * modification, are permitted provided that the following conditions are met:
>> + * * Redistributions of source code must retain the above copyright
>> + * notice, this list of conditions and the following disclaimer.
>> + * * Redistributions in binary form must reproduce the above copyright
>> + * notice, this list of conditions and the following disclaimer in the
>> + * documentation and/or other materials provided with the distribution.
>> + * * Neither the name of the above-listed copyright holders nor the
>> + * names of any contributors may be used to endorse or promote products
>> + * derived from this software without specific prior written permission.
>> + *
>> + *   GPL LICENSE SUMMARY
>> + *
>> + * ALTERNATIVELY, this software may be distributed under the terms of the
>> + * GNU General Public License ("GPL") as published by the Free Software
>> + * Foundation, either version 2 of that License or (at your option) any
>> + * later version.
>> + *
>> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
>> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
>> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
>> + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
>> + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
>> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
>> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
>> + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
>> + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
>> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
>> + * POSSIBILITY OF SUCH DAMAGE.
>> + */
> 
> <...>
> 
>> +#ifdef ARRAY_SIZE
>> +#undef ARRAY_SIZE
>> +#endif
>> +#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
> 
> Can re-use RTE_DIM

I can change this. Thanks for highlighting.
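
For reference, RTE_DIM from rte_common.h computes the same element count,
so the local ARRAY_SIZE macro can simply be dropped; a minimal usage
sketch:

	#include <rte_common.h>

	static const int vals[] = {1, 2, 3};
	unsigned int n = RTE_DIM(vals);	/* 3, same as ARRAY_SIZE(vals) */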

> 
> <...>
> 
>> +#define ASSERT(x) do {\
>> +	if (!(x)) \
>> +		rte_panic("DPAA: x"); \
>> +} while (0)
>> +#define DPAA_BUG_ON(x) ASSERT(!(x))
> 
> Can use RTE_ASSERT

I will change this.
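
A sketch of the suggested replacement, assuming RTE_ASSERT from
rte_debug.h (compiled in when CONFIG_RTE_ENABLE_ASSERT is set):

	#include <rte_debug.h>

	/* DPAA_BUG_ON(x) fires when x is true; RTE_ASSERT(exp) fires when
	 * exp is false, so the condition is simply inverted:
	 */
	#define DPAA_BUG_ON(x) RTE_ASSERT(!(x))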

> 
> <...>
> 
>> +
>> +#ifndef __DPAA_LIST_H
>> +#define __DPAA_LIST_H
>> +
>> +/****************/
>> +/* Linked-lists */
>> +/****************/
> 
> Do we need to maintain a linked list implementation? Why not just use
> the sys/queue.h ones, as done in many places in DPDK?
> 
>> +
>> +struct list_head {
>> +	struct list_head *prev;
>> +	struct list_head *next;
>> +};
>> +
> 
> <...>
> 

The underlying DPAA infrastructure code is shared between the kernel and
userspace. That is why changing the internal headers (for example,
using RTE_* queues) is something I want to avoid until absolutely
necessary. The outer layers (drivers/*/dpaa/<here>) are something I am
trying to keep as close as possible to DPDK.
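
For comparison, the sys/queue.h equivalent mentioned above would look
roughly like this (a sketch only; the base code keeps its own list_head
for kernel compatibility, as explained):

	#include <sys/queue.h>

	struct dpaa_node {
		int payload;
		TAILQ_ENTRY(dpaa_node) next;	/* embedded linkage, like list_head */
	};

	TAILQ_HEAD(dpaa_node_list, dpaa_node);
	static struct dpaa_node_list nodes = TAILQ_HEAD_INITIALIZER(nodes);

	/* insertion and traversal mirror list_add()/list_for_each() */
	static void example(struct dpaa_node *n)
	{
		struct dpaa_node *it;

		TAILQ_INSERT_TAIL(&nodes, n, next);
		TAILQ_FOREACH(it, &nodes, next)
			(void)it->payload;	/* visit each node */
	}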

-
Shreyansh

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-09-19 13:14           ` Shreyansh Jain
@ 2017-09-19 13:33             ` Ferruh Yigit
  2017-09-25 14:32             ` Shreyansh Jain
  1 sibling, 0 replies; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-19 13:33 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, hemant.agrawal

On 9/19/2017 2:14 PM, Shreyansh Jain wrote:
> Hello Ferruh,
> 
> On Monday 18 September 2017 08:17 PM, Ferruh Yigit wrote:
>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>

<...>

>>> +struct rte_dpaa_device {
>>> +	TAILQ_ENTRY(rte_dpaa_device) next;
>>> +	struct rte_device device;
>>> +	union {
>>> +		struct rte_eth_dev *eth_dev;
>>> +		struct rte_cryptodev *crypto_dev;
>>> +	};
>>
>> Bus struct should be independent of functionality; this has been done in
>> PCI, can the same thing be done for the dpaa bus too?
> 
> Sorry, I didn't get your point. This is the rte_dpaa_bus structure:
> 
> struct rte_dpaa_bus {
>          struct rte_bus bus;
>          struct rte_dpaa_device_list device_list;
>          struct rte_dpaa_driver_list driver_list;
>          int device_count;
> };
> 
> If you are referring to unlinking eth/crypto functionality from 
> rte_dpaa_device - that is something which needs investigation. I have 
> seen patches on PCI from Gaetan. Can that be an incremental change over 
> this?

Yes, I was referring to this. I am OK with doing this incrementally.

<...>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 04/41] bus/dpaa: add OF parser for device scanning
  2017-09-18 14:49         ` Ferruh Yigit
@ 2017-09-19 13:37           ` Shreyansh Jain
  2017-09-19 14:15             ` Ferruh Yigit
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-19 13:37 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:19 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>> This layer is used by the Bus driver's scan function. Devices are parsed
>> using the OF parser and added to the DPAA device list.
> 
> So this is a device tree parser in DPDK. Do we really want this? As long
> as DPDK targets bare metal, why not get the device information from
> Linux, as done in other cases?
As of now, I prefer not to modify the internal framework any more than
necessary, as this is a stable DPDK DPAA driver.
There is indeed a planned transition from OF to /sys/ parsing, but it is
still in the pipeline.

Do you see a blocking issue if we go incremental here?
It would probably just mean replacing this file with another /sys parser,
without many changes to the DPDK glue code.

> 
>>
>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>> ---
>>   drivers/bus/dpaa/Makefile       |   7 +
>>   drivers/bus/dpaa/base/fman/of.c | 576 ++++++++++++++++++++++++++++++++++++++++
>>   drivers/bus/dpaa/include/of.h   | 190 +++++++++++++
>>   3 files changed, 773 insertions(+)
>>   create mode 100644 drivers/bus/dpaa/base/fman/of.c
>>   create mode 100644 drivers/bus/dpaa/include/of.h
>>
> 
> <...>
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros
  2017-09-19 13:18           ` Shreyansh Jain
@ 2017-09-19 13:40             ` Ferruh Yigit
  2017-09-19 13:57               ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-19 13:40 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, hemant.agrawal

On 9/19/2017 2:18 PM, Shreyansh Jain wrote:
> On Monday 18 September 2017 08:19 PM, Ferruh Yigit wrote:
>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>
>>> Linked list, bit operations and compatibility macros.
>>>
>>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>
>> <...>
>>
>>> diff --git a/drivers/bus/dpaa/include/compat.h b/drivers/bus/dpaa/include/compat.h
>>> new file mode 100644
>>> index 0000000..a1fd53e
>>> --- /dev/null
>>> +++ b/drivers/bus/dpaa/include/compat.h
>>> @@ -0,0 +1,389 @@
>>> +/*-
>>> + * This file is provided under a dual BSD/GPLv2 license. When using or
>>> + * redistributing this file, you may do so under either license.
>>
>> The content of the file looks like it is for Linux; is the file coming
>> from existing GPL-licensed code? If so, is it allowed to add the BSD
>> license to it?
>>
>>> + *
>>> + *   BSD LICENSE
>>> + *
>>> + * Copyright 2011 Freescale Semiconductor, Inc.
>>> + * All rights reserved.
>>> + *
>>> + * Redistribution and use in source and binary forms, with or without
>>> + * modification, are permitted provided that the following conditions are met:
>>> + * * Redistributions of source code must retain the above copyright
>>> + * notice, this list of conditions and the following disclaimer.
>>> + * * Redistributions in binary form must reproduce the above copyright
>>> + * notice, this list of conditions and the following disclaimer in the
>>> + * documentation and/or other materials provided with the distribution.
>>> + * * Neither the name of the above-listed copyright holders nor the
>>> + * names of any contributors may be used to endorse or promote products
>>> + * derived from this software without specific prior written permission.
>>> + *
>>> + *   GPL LICENSE SUMMARY
>>> + *
>>> + * ALTERNATIVELY, this software may be distributed under the terms of the
>>> + * GNU General Public License ("GPL") as published by the Free Software
>>> + * Foundation, either version 2 of that License or (at your option) any
>>> + * later version.
>>> + *
>>> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
>>> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
>>> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
>>> + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
>>> + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
>>> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
>>> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
>>> + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
>>> + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
>>> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
>>> + * POSSIBILITY OF SUCH DAMAGE.
>>> + */
>>

<...>

>>> +
>>> +#ifndef __DPAA_LIST_H
>>> +#define __DPAA_LIST_H
>>> +
>>> +/****************/
>>> +/* Linked-lists */
>>> +/****************/
>>
>> Do we need to maintain a linked list implementation? Why not just use
>> the sys/queue.h ones, as done in many places in DPDK?
>>
>>> +
>>> +struct list_head {
>>> +	struct list_head *prev;
>>> +	struct list_head *next;
>>> +};
>>> +
>>
>> <...>
>>
> 
> The underlying DPAA infrastructure code is shared between the kernel and
> userspace. That is why changing the internal headers (for example,
> using RTE_* queues) is something I want to avoid until absolutely
> necessary. The outer layers (drivers/*/dpaa/<here>) are something I am
> trying to keep as close as possible to DPDK.

I understand you want to avoid maintaining a DPDK-specific copy of common
files; this has been done by many drivers by not changing the "base"
files, and it makes sense.

But in this case, the file is "dpaa_list.h" and, as far as I can see, all
it has is a linked list implementation. This looked easy to exclude, but
if not, you can ignore the comment.

> 
> -
> Shreyansh
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 05/41] bus/dpaa: introducing FMan configurations
  2017-09-18 14:50         ` Ferruh Yigit
  2017-09-18 16:15           ` Thomas Monjalon
@ 2017-09-19 13:43           ` Shreyansh Jain
  1 sibling, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-19 13:43 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Thomas Monjalon, hemant.agrawal

On Monday 18 September 2017 08:20 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>> FMan, or Frame Manager, inspects traffic and splits it into queues on
>> ingress. It is also responsible for directing traffic on queues on
>> egress.
>>
>> This patch introduces FMan configuration interfaces. This layer is
>> used by the Bus driver for configuring the hardware block.
>>
>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 

[...]

> <...>
> 
>> +		if (!char_prop) {
>> +			printf("memac: unknown MII type assuming 1G\n");
> 
> Please prefer logging macros over direct printf.

I will change this and another similar occurrence.
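
A hedged illustration of the change; the wrapper below is hypothetical,
and the driver's actual logging macro may be named and routed differently:

	/* hypothetical wrapper around DPDK's logging facility */
	#define FMAN_LOG(level, fmt, args...) \
		RTE_LOG(level, PMD, "fman: " fmt "\n", ## args)

	/* instead of: printf("memac: unknown MII type assuming 1G\n"); */
	FMAN_LOG(WARNING, "memac: unknown MII type, assuming 1G");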

> 
> <...>
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros
  2017-09-19 13:40             ` Ferruh Yigit
@ 2017-09-19 13:57               ` Shreyansh Jain
  2017-09-26 12:43                 ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-19 13:57 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Tuesday 19 September 2017 07:10 PM, Ferruh Yigit wrote:
> On 9/19/2017 2:18 PM, Shreyansh Jain wrote:
>> On Monday 18 September 2017 08:19 PM, Ferruh Yigit wrote:
>>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>
>>>> Linked list, bit operations and compatibility macros.
>>>>
>>>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>
>>> <...>
>>>
>>>> diff --git a/drivers/bus/dpaa/include/compat.h b/drivers/bus/dpaa/include/compat.h
>>>> new file mode 100644
>>>> index 0000000..a1fd53e
>>>> --- /dev/null
>>>> +++ b/drivers/bus/dpaa/include/compat.h
>>>> @@ -0,0 +1,389 @@
>>>> +/*-
>>>> + * This file is provided under a dual BSD/GPLv2 license. When using or
>>>> + * redistributing this file, you may do so under either license.
>>>
>>> The content of the file looks like it is for Linux; is the file coming
>>> from existing GPL-licensed code? If so, is it allowed to add the BSD
>>> license to it?
>>>
>>>> + *
>>>> + *   BSD LICENSE
>>>> + *
>>>> + * Copyright 2011 Freescale Semiconductor, Inc.
>>>> + * All rights reserved.
>>>> + *
>>>> + * Redistribution and use in source and binary forms, with or without
>>>> + * modification, are permitted provided that the following conditions are met:
>>>> + * * Redistributions of source code must retain the above copyright
>>>> + * notice, this list of conditions and the following disclaimer.
>>>> + * * Redistributions in binary form must reproduce the above copyright
>>>> + * notice, this list of conditions and the following disclaimer in the
>>>> + * documentation and/or other materials provided with the distribution.
>>>> + * * Neither the name of the above-listed copyright holders nor the
>>>> + * names of any contributors may be used to endorse or promote products
>>>> + * derived from this software without specific prior written permission.
>>>> + *
>>>> + *   GPL LICENSE SUMMARY
>>>> + *
>>>> + * ALTERNATIVELY, this software may be distributed under the terms of the
>>>> + * GNU General Public License ("GPL") as published by the Free Software
>>>> + * Foundation, either version 2 of that License or (at your option) any
>>>> + * later version.
>>>> + *
>>>> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
>>>> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
>>>> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
>>>> + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
>>>> + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
>>>> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
>>>> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
>>>> + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
>>>> + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
>>>> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
>>>> + * POSSIBILITY OF SUCH DAMAGE.
>>>> + */
>>>
> 
> <...>
> 
>>>> +
>>>> +#ifndef __DPAA_LIST_H
>>>> +#define __DPAA_LIST_H
>>>> +
>>>> +/****************/
>>>> +/* Linked-lists */
>>>> +/****************/
>>>
>>> Do we need to maintain a linked list implementation? Why not just use
>>> the sys/queue.h ones, as done in many places in DPDK?
>>>
>>>> +
>>>> +struct list_head {
>>>> +	struct list_head *prev;
>>>> +	struct list_head *next;
>>>> +};
>>>> +
>>>
>>> <...>
>>>
>>
>> The underlying DPAA infrastructure code is shared between the kernel and
>> userspace. That is why changing the internal headers (for example,
>> using RTE_* queues) is something I want to avoid until absolutely
>> necessary. The outer layers (drivers/*/dpaa/<here>) are something I am
>> trying to keep as close as possible to DPDK.
> 
> I understand you want to avoid maintaining a DPDK-specific copy of common
> files; this has been done by many drivers by not changing the "base"
> files, and it makes sense.
> 
> But in this case, the file is "dpaa_list.h" and, as far as I can see, all
> it has is a linked list implementation. This looked easy to exclude, but
> if not, you can ignore the comment.

Got your point. I will respin and see how big the impact is.
Thanks for inputs.

> 
>>
>> -
>> Shreyansh
>>
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 04/41] bus/dpaa: add OF parser for device scanning
  2017-09-19 13:37           ` Shreyansh Jain
@ 2017-09-19 14:15             ` Ferruh Yigit
  2017-09-19 20:01               ` Thomas Monjalon
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-19 14:15 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, hemant.agrawal, Thomas Monjalon

On 9/19/2017 2:37 PM, Shreyansh Jain wrote:
> On Monday 18 September 2017 08:19 PM, Ferruh Yigit wrote:
>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>> This layer is used by the Bus driver's scan function. Devices are parsed
>>> using the OF parser and added to the DPAA device list.
>>
>> So this is a device tree parser in DPDK. Do we really want this? As long
>> as DPDK targets bare metal, why not get the device information from
>> Linux, as done in other cases?
> As of now, I prefer not to modify the internal framework any more than
> necessary, as this is a stable DPDK DPAA driver.
> There is indeed a planned transition from OF to /sys/ parsing, but it is
> still in the pipeline.
> 
> Do you see a blocking issue if we go incremental here?
> It would probably just mean replacing this file with another /sys parser,
> without many changes to the DPDK glue code.

An OF parser in DPDK looks weird to me; the OS will do this for us
already.

If replacing this is on the roadmap, I think this is not a showstopper;
added Thomas in case he thinks otherwise.

> 
>>
>>>
>>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>> ---
>>>   drivers/bus/dpaa/Makefile       |   7 +
>>>   drivers/bus/dpaa/base/fman/of.c | 576 ++++++++++++++++++++++++++++++++++++++++
>>>   drivers/bus/dpaa/include/of.h   | 190 +++++++++++++
>>>   3 files changed, 773 insertions(+)
>>>   create mode 100644 drivers/bus/dpaa/base/fman/of.c
>>>   create mode 100644 drivers/bus/dpaa/include/of.h
>>>
>>
>> <...>
>>
>>
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 07/41] bus/dpaa: enable DPAA IOCTL portal driver
  2017-09-18 14:51         ` Ferruh Yigit
@ 2017-09-19 14:17           ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-19 14:17 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:21 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>> Userspace applications interact with DPAA blocks using this IOCTL driver.
>>
>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>> +static int fd = -1;
>> +static pthread_mutex_t fd_init_lock = PTHREAD_MUTEX_INITIALIZER;
>> +
>> +static int check_fd(void)
>> +{
>> +	int ret;
>> +
>> +	if (fd >= 0)
>> +		return 0;
>> +	ret = pthread_mutex_lock(&fd_init_lock);
> 
> Do you need to link against the pthread library for this:
> LDLIBS += -lpthread

We are already doing that in drivers/bus/dpaa/Makefile.
The only issue is that I introduced it two patches after this one.
I will fix this.
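
For context, the quoted check_fd() follows the classic double-checked
initialization pattern; completed roughly from the visible fragment (a
sketch, not the exact patch code):

	#include <assert.h>
	#include <errno.h>
	#include <fcntl.h>
	#include <pthread.h>

	static int fd = -1;
	static pthread_mutex_t fd_init_lock = PTHREAD_MUTEX_INITIALIZER;

	static int check_fd(void)
	{
		int ret;

		if (fd >= 0)			/* fast path, no locking */
			return 0;
		ret = pthread_mutex_lock(&fd_init_lock);
		assert(!ret);
		if (fd < 0)			/* re-check under the lock */
			fd = open(PROCESS_PATH, O_RDWR);
		ret = pthread_mutex_unlock(&fd_init_lock);
		assert(!ret);
		return (fd >= 0) ? 0 : -ENODEV;
	}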

> 
> <...>
> 
>> +/* The process device underlies process-wide user/kernel interactions, such as
>> + * mapping dma_mem memory and providing accompanying ioctl()s. (This isn't used
>> + * for portals, which use one UIO device each.).
>> + */
>> +#define PROCESS_PATH		"/dev/fsl-usdpaa"
> 
> Who is creating this file, and who is responsible for responding to the
> ioctl() calls? There must be a kernel module, right?

This is provided by the Userspace DPAA (usdpaa) drivers in the QorIQ
kernel. This is currently part of the NXP SDK
(https://lsdk.github.io/components.html) for DPAA boards
(https://github.com/qoriq-open-source/linux). We are still in the process
of pushing it upstream.

So, the assumption is that the DPAA DPDK driver will only work with this
SDK until the Linux kernel upstreaming completes. I guess I had documented
this in dpaa.rst, but if not, I will explicitly add it.

> 
> <...>
>

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 11/41] bus/dpaa: add QMan driver core routines
  2017-09-18 14:53         ` Ferruh Yigit
@ 2017-09-19 14:18           ` Shreyansh Jain
  2017-09-28 11:45             ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-19 14:18 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:23 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>> Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
>> +#ifdef RTE_LIBRTE_DPAA_CHECKING
> 
> This is not defined anywhere; it looks like this will come from the
> config file in further patches. The config file update can be moved to
> this patch.

It's more of a debugging macro, and it was introduced in later patches.
Not that I see any reason why it can't be introduced here. I will fix this.

> 
>> +	eqcr->busy = 0;
>> +	eqcr->pmode = pmode;
>> +#endif
> 
> <...>
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 17/41] doc: add NXP DPAA PMD documentation
  2017-09-18 14:53         ` Ferruh Yigit
@ 2017-09-19 14:25           ` Shreyansh Jain
  2017-09-28 11:49             ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-19 14:25 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:23 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>> +Config File Options
>> +~~~~~~~~~~~~~~~~~~~
>> +
>> +The following options can be modified in the ``config`` file.
>> +Please note that enabling debugging options may affect system performance.
>> +
>> +- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``n``)
>> +
>> +  By default it is enabled only for defconfig_arm64-dpaa-* config.
>> +  Toggle compilation of the ``librte_bus_dpaa`` driver.
>> +
>> +- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``n``)
>> +
>> +  By default it is enabled only for defconfig_arm64-dpaa-* config.
>> +  Toggle compilation of the ``librte_pmd_dpaa`` driver.
>> +
>> +- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)
>> +
>> +  Toggle display of generic debugging messages
>> +
>> +- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_INIT`` (default ``n``)
>> +
>> +  Toggle display of initialization related messages.
>> +
>> +- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (default ``dpaa``)
> 
> There are a few new config options missing in the documentation.

Those are some non-documented/hidden toggles which I don't want to
expose, like *CHECKING. They are there only so that some deep debugging
can be done - which would not be useful for someone not familiar with the
base code.

Now that you have highlighted this, I will see if we can entirely remove
those non-documented toggles.

> 
>> +
>> +  This is not a DPAA specific configuration - it is a generic RTE config.
>> +  For optimal performance and hardware utilization, it is expected that DPAA
>> +  Mempool driver is used for mempools. For that, this configuration
>> +  needs to be enabled.
>> +
>> +Environment Variables
>> +~~~~~~~~~~~~~~~~~~~~~
>> +
>> +DPAA drivers use the following environment variables to configure their
>> +state during application initialization:
>> +
>> +- ``DPAA_NUM_RX_QUEUES`` (default 1)
> 
> Why not get this value as a device arg?

We had this discussion during DPAA2 as well. This time, I was not sure
how the device argument patches would turn out after the re-shuffle
being done by Gaetan. So, I kept this as it is.

> 
> <...>
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 04/41] bus/dpaa: add OF parser for device scanning
  2017-09-19 14:15             ` Ferruh Yigit
@ 2017-09-19 20:01               ` Thomas Monjalon
  2017-09-20 20:39                 ` Jan Viktorin
  0 siblings, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-19 20:01 UTC (permalink / raw)
  To: Ferruh Yigit, Shreyansh Jain; +Cc: dev, hemant.agrawal, Jan Viktorin

19/09/2017 16:15, Ferruh Yigit:
> On 9/19/2017 2:37 PM, Shreyansh Jain wrote:
> > On Monday 18 September 2017 08:19 PM, Ferruh Yigit wrote:
> >> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
> >>> This layer is used by the Bus driver's scan function. Devices are parsed
> >>> using the OF parser and added to the DPAA device list.
> >>
> >> So this is a device tree parser in DPDK. Do we really want this? As long
> >> as DPDK targets bare metal, why not get the device information from
> >> Linux, as done in other cases?
> > As of now, I prefer not to modify the internal framework any more than
> > necessary, as this is a stable DPDK DPAA driver.
> > There is indeed a planned transition from OF to /sys/ parsing, but it is
> > still in the pipeline.
> > 
> > Do you see a blocking issue if we go incremental here?
> > It would probably just mean replacing this file with another /sys parser,
> > without many changes to the DPDK glue code.
> 
> An OF parser in DPDK looks weird to me; the OS will do this for us
> already.
> 
> If replacing this is on the roadmap, I think this is not a showstopper;
> added Thomas in case he thinks otherwise.

I agree with Ferruh.

I am interested to know if there are cases where a device tree parser
would be relevant in DPDK.
Cc Jan who already worked on this idea.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 04/41] bus/dpaa: add OF parser for device scanning
  2017-09-19 20:01               ` Thomas Monjalon
@ 2017-09-20 20:39                 ` Jan Viktorin
  0 siblings, 0 replies; 367+ messages in thread
From: Jan Viktorin @ 2017-09-20 20:39 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Ferruh Yigit, Shreyansh Jain, dev, hemant.agrawal

On Tue, 19 Sep 2017 22:01:23 +0200
Thomas Monjalon <thomas@monjalon.net> wrote:

> 19/09/2017 16:15, Ferruh Yigit:
> > On 9/19/2017 2:37 PM, Shreyansh Jain wrote:  
> > > On Monday 18 September 2017 08:19 PM, Ferruh Yigit wrote:  
> > >> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:  
> > >>> This layer is used by the Bus driver's scan function. Devices are parsed
> > >>> using the OF parser and added to the DPAA device list.
> > >>
> > >> So this is a device tree parser in DPDK. Do we really want this? As long
> > >> as DPDK targets bare metal, why not get the device information from
> > >> Linux, as done in other cases?
> > > As of now, I prefer not to modify the internal framework any more than
> > > necessary, as this is a stable DPDK DPAA driver.
> > > There is indeed a planned transition from OF to /sys/ parsing, but it is
> > > still in the pipeline.
> > > 
> > > Do you see a blocking issue if we go incremental here?
> > > It would probably just mean replacing this file with another /sys parser,
> > > without many changes to the DPDK glue code.
> > 
> > An OF parser in DPDK looks weird to me; the OS will do this for us
> > already.
> > 
> > If replacing this is on the roadmap, I think this is not a showstopper;
> > added Thomas in case he thinks otherwise.
> 
> I agree with Ferruh.
> 
> I am interested to know if there are cases where a device tree parser
> would be relevant in DPDK.
> Cc Jan who already worked on this idea.

Hello,

I don't know the details here. In general, I think it is better to
always use /sys. However, there might be information in the device tree
which is not exposed via /sys. This highly depends on the driver used.
I was trying to use a generic driver (uio), which is very limited in
many ways.

I was also dealing with a specific HW configuration for an FPGA where the
NIC was divided into separate DMA and EMAC components. For DPDK, these
would be two separate devices with no information about how they are
connected to each other. Such information was accessible only via the
device tree. Finally, I also needed to control the PHY from DPDK.
Again, information about the PHY is unavailable via /sys.

Regards
Jan

-- 
  Jan Viktorin                E-mail: Viktorin@RehiveTech.com
  System Architect            Web:    www.RehiveTech.com
  RehiveTech
  Brno, Czech Republic

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup
  2017-09-18 14:55         ` Ferruh Yigit
@ 2017-09-21 12:59           ` Shreyansh Jain
  2017-09-28 11:51             ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-21 12:59 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

Hello Ferruh,

Apologies for the delay in responding to these; I am already working on
the next version based on your comments. Meanwhile, some comments inline...

On Monday 18 September 2017 08:25 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>> +	for (loop = 0; loop < num_cores; loop++) {
>> +		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
>> +					 fman_intf);
>> +		if (ret)
>> +			return ret;
>> +		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
>> +	}
>> +	dpaa_intf->nb_tx_queues = num_cores;
> 
> Is the number of tx_queues always the same as the core count?

The number of cores decides the max number of queues that we support. It
is our internal design to use this as the maximum number of available Tx
queues.

With the above variable, we are only limiting what we copy into
dev_info->max_tx_queues, which is reported to the application through
eth_dev_info. The application would still continue to use its own Tx
queue ids.

But yes, we don't yet support limiting the number of queues to a
user-specified count if it is less than the number of cores.

> 
>> +
>> +	DPAA_PMD_DEBUG("All frame queues created");
>> +
>> +	/* reset bpool list, initialize bpool dynamically */
>> +	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
>> +		list_del(&bp->node);
>> +		rte_free(bp);
> 
> Why free them during initialization?

Just in case there is anything still in the list.
This is possible in case where multiple devie

> 
>> +	}
>> +
>> +	/* Populate ethdev structure */
>>   	eth_dev->dev_ops = &dpaa_devops;
>> +	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
>> +	eth_dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
>> +
>> +	/* Allocate memory for storing MAC addresses */
>> +	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
>> +		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
>> +	if (eth_dev->data->mac_addrs == NULL) {
>> +		DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
>> +						"store MAC addresses",
>> +				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
> 
> free dpaa_intf->rx_queues, tx_queues ?

Yes, certainly an issue. I will fix it.

> 
>> +		return -ENOMEM;
>> +	}
>> +
>> +	/* copy the primary mac address */
>> +	memcpy(eth_dev->data->mac_addrs[0].addr_bytes,
>> +		fman_intf->mac_addr.addr_bytes,
>> +		ETHER_ADDR_LEN);
> 
> Can use ether_addr_copy() instead.

:) Yes, I can.
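
The suggested change, roughly (assuming ether_addr_copy() from
rte_ether.h, which copies the source address into the destination):

	#include <rte_ether.h>

	/* instead of the open-coded memcpy of ETHER_ADDR_LEN bytes */
	ether_addr_copy(&fman_intf->mac_addr, &eth_dev->data->mac_addrs[0]);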

> 
> <...>
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup
  2017-09-18 14:55         ` Ferruh Yigit
@ 2017-09-21 13:00           ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-21 13:00 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:25 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>> +
>> +/* Handle all mbufs on an external pool (non-dpaa2) */
> 
> minor typo, but is intention dpaa ?

Yes, this is 'dpaa'.

> 
>> +static inline uint16_t
>> +tx_on_external_pool(struct qman_fq *txq, struct rte_mbuf *mbuf,
>> +		    struct qm_fd *fd_arr)
>> +{
> 
> <...>
> 
>> @@ -185,6 +185,7 @@ endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
>>   
>>   ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
>>   _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
>> +_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_mempool_dpaa
> 
> This should go to patch that introduces mempool.

My patch splits are not effective, it seems. This is the third issue of
code being introduced in the wrong patch that you have pointed out. I
will fix this.

> 
>>   endif
>>   
>>   endif # !CONFIG_RTE_BUILD_SHARED_LIBS
>>
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 28/41] net/dpaa: add support for link status update
  2017-09-18 14:56         ` Ferruh Yigit
@ 2017-09-21 13:09           ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-21 13:09 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:26 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>> +static int dpaa_eth_link_update(struct rte_eth_dev *dev,
>> +				int wait_to_complete __rte_unused)
>> +{
>> +	struct dpaa_if *dpaa_intf = dev->data->dev_private;
>> +	struct rte_eth_link *link = &dev->data->dev_link;
>> +
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	if (dpaa_intf->fif->mac_type == fman_mac_1g)
>> +		link->link_speed = 1000;
>> +	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
>> +		link->link_speed = 10000;
>> +	else
>> +		DPAA_PMD_ERR("invalid link_speed: %s, %d",
>> +			     dpaa_intf->name, dpaa_intf->fif->mac_type);
>> +
>> +	link->link_status = dpaa_intf->valid;
>> +	link->link_duplex = ETH_LINK_FULL_DUPLEX;
>> +	link->link_autoneg = ETH_LINK_AUTONEG;
> 
> Shouldn't this function go and get link information from hardware?

Our current hardware interfaces don't support these operations
explicitly. For "fman_mac_1g" and "fman_mac_10g", these are the
default values which we are exposing.
Over time, we will get more such interfaces exposed from the Linux kernel
to the FMan library and update this code.

> 
>> +	return 0;
>> +}
>> +
>>   static
>>   int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>>   			    uint16_t nb_desc __rte_unused,
>> @@ -216,6 +238,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
>>   	PMD_INIT_FUNC_TRACE();
>>   }
>>   
>> +static int dpaa_link_down(struct rte_eth_dev *dev)
>> +{
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	dpaa_eth_dev_stop(dev);
> 
> Drivers tend to do the reverse: make link down on device stop. Just
> double checking whether stop() is intended for link down.

fman_if_disable_rx is equivalent to "link down" as well as stop (because 
it flushes the queues). That is why these APIs are linked.

> 
>> +	return 0;
>> +}
>> +
>> +static int dpaa_link_up(struct rte_eth_dev *dev)
>> +{
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	dpaa_eth_dev_start(dev);
>> +	return 0;
>> +}
> 
> <...>
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 36/41] net/dpaa: add support for packet type parsing
  2017-09-18 14:56         ` Ferruh Yigit
@ 2017-09-21 13:16           ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-21 13:16 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:26 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>> Add support for parsing the packet type and L2/L3 checksum offload
>> capability information.
>>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>> ---
>>   doc/guides/nics/features/dpaa.ini |   2 +
>>   drivers/net/dpaa/dpaa_ethdev.c    |  27 +++++
>>   drivers/net/dpaa/dpaa_rxtx.c      | 116 +++++++++++++++++++++
>>   drivers/net/dpaa/dpaa_rxtx.h      | 206 ++++++++++++++++++++++++++++++++++++++
>>   4 files changed, 351 insertions(+)
>>
>> diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
>> index 1ba6b11..2ef1b56 100644
>> --- a/doc/guides/nics/features/dpaa.ini
>> +++ b/doc/guides/nics/features/dpaa.ini
>> @@ -11,7 +11,9 @@ MTU update           = Y
>>   Promiscuous mode     = Y
>>   Allmulticast mode    = Y
>>   Unicast MAC filter   = Y
>> +RSS hash             = Y
> 
> Not sure about claiming this support yet. Is the mbuf RSS hash field set
> in the Rx path, or are packets distributed to multiple queues using RSS
> hash functions at this point?

For DPAA, distribution is enabled through configuration prior to running
the DPDK binary. At this point, the code is fetching the current state and
filling in the mbuf support.

So, this is not about enabling RSS only when dev_conf.rxmode.mq_mode is 
set. Does this change the way we look at "RSS hash" support? If that is 
not what this feature is intended for, I will remove this support tag.

> 
> <...>
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 40/41] net/dpaa: support for firmware version get API
  2017-09-18 14:57         ` Ferruh Yigit
@ 2017-09-21 13:18           ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-21 13:18 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:27 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> 
> <...>
> 
>> +static int
>> +dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
>> +		     char *fw_version,
>> +		     size_t fw_size)
>> +{
>> +	int ret;
>> +	FILE *svr_file = NULL;
>> +	unsigned int svr_ver = 0;
>> +
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	svr_file = fopen("/sys/devices/soc0/soc_id", "r");
> 
> Is this sysfs file fixed, or can it be enumerated as soc1 etc. on some
> systems?

The first base SoC slot is assumed to be the one for the DPDK DPAA
driver. That is the reason this path is assumed to be fixed. I can move
this into a macro though, for readability.
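
The readability change could be as simple as this (a sketch; the macro
name is illustrative):

	/* hypothetical name for the assumed-fixed sysfs path */
	#define DPAA_SOC_ID_FILE	"/sys/devices/soc0/soc_id"

	svr_file = fopen(DPAA_SOC_ID_FILE, "r");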

> 
>> +	if (!svr_file) {
>> +		DPAA_PMD_ERR("Unable to open SoC device");
>> +		return -ENOTSUP; /* Not supported on this infra */
>> +	}
>> +
>> +	ret = fscanf(svr_file, "svr:%x", &svr_ver);
>> +	if (ret <= 0) {
>> +		DPAA_PMD_ERR("Unable to read SoC device");
>> +		return -ENOTSUP; /* Not supported on this infra */
>> +	}
>> +
>> +	ret = snprintf(fw_version, fw_size,
>> +		       "svr:%x-fman-v%x",
>> +		       svr_ver,
>> +		       fman_ip_rev);
>> +
>> +	ret += 1; /* add the size of '\0' */
>> +	if (fw_size < (uint32_t)ret)
>> +		return ret;
>> +	else
>> +		return 0;
>> +}
> 
> <...>
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 41/41] net/dpaa: support for extended statistics
  2017-09-18 14:57         ` Ferruh Yigit
@ 2017-09-21 13:26           ` Shreyansh Jain
  2017-09-27  8:26             ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-21 13:26 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:27 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> 
> <...>
> 
>> +static int
>> +dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
>> +		    unsigned int n)
>> +{
>> +	struct dpaa_if *dpaa_intf = dev->data->dev_private;
>> +	unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
>> +	uint64_t values[sizeof(struct dpaa_if_stats) / 8];
>> +
>> +	if (xstats == NULL)
>> +		return 0;
> 
> This is a little unclear from the API definition, but I guess when xstats
> is NULL, it should return the number of available stats, "num" in this
> case. I guess there are PMDs that implement both, can you please double
> check?

Ok. I will check again.

> 
>> +
>> +	if (n < num)
>> +		return num;
>> +
>> +	fman_if_stats_get_all(dpaa_intf->fif, values,
>> +			      sizeof(struct dpaa_if_stats) / 8);
>> +
>> +	for (i = 0; i < num; i++) {
>> +		xstats[i].id = i;
>> +		xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
>> +	}
>> +	return i;
>> +}
> 
> <...>
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v3 20/40] drivers: enable compilation of DPAA Mempool driver
  2017-08-23 14:11     ` [PATCH v3 20/40] drivers: enable compilation of DPAA Mempool driver Shreyansh Jain
@ 2017-09-21 21:55       ` Thomas Monjalon
  2017-09-22  6:35         ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-21 21:55 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

23/08/2017 16:11, Shreyansh Jain:
> +CONFIG_RTE_LIBRTE_DPAA_MEMPOOL_DEBUG=n

Please could you try to remove this kind of option?
We are going to remove them from DPDK.

For the control path, there is no need to remove logs at compilation time.
For the data path, compilation of logs is controlled by
CONFIG_RTE_LOG_DP_LEVEL.
For enabling/disabling logs at runtime in the component,
there are the dynamic log types.
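
A minimal sketch of the dynamic log type approach, assuming the rte_log
registration API of that period (the log type name and macro are
illustrative):

	#include <rte_log.h>

	static int dpaa_mempool_logtype;

	/* runs at load time; registers a log type tunable at runtime */
	__attribute__((constructor))
	static void dpaa_mempool_init_log(void)
	{
		dpaa_mempool_logtype = rte_log_register("mempool.dpaa");
		if (dpaa_mempool_logtype >= 0)
			rte_log_set_level(dpaa_mempool_logtype, RTE_LOG_NOTICE);
	}

	#define DPAA_MEMPOOL_LOG(level, fmt, args...) \
		rte_log(RTE_LOG_ ## level, dpaa_mempool_logtype, \
			"mempool/dpaa: " fmt "\n", ## args)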

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v3 21/40] maintainers: claim ownership of DPAA Mempool driver
  2017-08-23 14:11     ` [PATCH v3 21/40] maintainers: claim ownership " Shreyansh Jain
@ 2017-09-21 21:56       ` Thomas Monjalon
  2017-09-22  6:47         ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-21 21:56 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

23/08/2017 16:11, Shreyansh Jain:
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -409,6 +409,7 @@ NXP dpaa
>  M: Hemant Agrawal <hemant.agrawal@nxp.com>
>  M: Shreyansh Jain <shreyansh.jain@nxp.com>
>  F: drivers/bus/dpaa/
> +F: drivers/mempool/dpaa/
>  F: doc/guides/nics/dpaa.rst
>  F: doc/guides/nics/features/dpaa.ini

This kind of patch can be squashed into the first patch introducing
this new directory.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v3 24/40] config: enable NXP DPAA PMD compilation
  2017-08-23 14:11     ` [PATCH v3 24/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
@ 2017-09-21 22:03       ` Thomas Monjalon
  2017-09-22  6:51         ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-21 22:03 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

23/08/2017 16:11, Shreyansh Jain:
> --- a/config/defconfig_arm64-dpaa-linuxapp-gcc
> +++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
> +#
> +# Compile Environment Abstraction Layer
> +#
> +CONFIG_RTE_MAX_LCORE=4
> +CONFIG_RTE_MAX_NUMA_NODES=1
> +CONFIG_RTE_CACHE_LINE_SIZE=64
> +CONFIG_RTE_PKTMBUF_HEADROOM=128

This should be part of the SoC introduction.

The rest of this patch can be squashed into the PMD skeleton patch.

[...]
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -116,6 +116,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD)      += -lrte_pmd_cxgbe
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_pmd_dpaa
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_pmd_dpaa2
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_E1000_PMD)      += -lrte_pmd_e1000
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_ENA_PMD)        += -lrte_pmd_ena
> @@ -182,6 +183,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_bus_fslmc
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_mempool_dpaa2
>  endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
>  
> +ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
> +endif

It does not make sense. Please read it carefully.
The same config condition is used twice.

And the dependency should be on the same line as the PMD link above.

The same mistake was made for DPAA2. Please fix it separately.
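
In other words, something roughly like this (a sketch of the suggested
fix; library names as in the patch):

	# dependency on the same line as the PMD link; no extra ifeq needed
	_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_pmd_dpaa -lrte_bus_dpaa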

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v3 26/40] net/dpaa: add support for MTU update
  2017-08-23 14:11     ` [PATCH v3 26/40] net/dpaa: add support for MTU update Shreyansh Jain
@ 2017-09-21 22:07       ` Thomas Monjalon
  2017-09-22  6:48         ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-21 22:07 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

23/08/2017 16:11, Shreyansh Jain:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> ---
>  doc/guides/nics/features/dpaa.ini |  1 +
>  drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++

It is very good to update the features matrix and code in the same patch.
History tracking will be easy, thanks.

About the title, every patch starts with "add support for".
It is wasting 7 characters :) You could just say "support".
Example:
	net/dpaa: support MTU update

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (40 preceding siblings ...)
  2017-09-09 11:21       ` [PATCH v4 41/41] net/dpaa: support for extended statistics Shreyansh Jain
@ 2017-09-21 22:09       ` Thomas Monjalon
  2017-09-21 22:10       ` Thomas Monjalon
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
  43 siblings, 0 replies; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-21 22:09 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

09/09/2017 13:20, Shreyansh Jain:
> v4:
>  - Some checkpatch fixes which were reported by checkpatch@dpdk
>  - adding support for extended stats (patch 41)

Sorry, I made some comments on v3 instead of v4.
But I think they apply anyway.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (41 preceding siblings ...)
  2017-09-21 22:09       ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Thomas Monjalon
@ 2017-09-21 22:10       ` Thomas Monjalon
  2017-09-22  6:25         ` Shreyansh Jain
  2017-09-22 13:06         ` Shreyansh Jain
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
  43 siblings, 2 replies; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-21 22:10 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

09/09/2017 13:20, Shreyansh Jain:
> DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
> components designed for high-speed network packet processing. This
> architecture provides the infrastructure to support simplified sharing of
> networking interfaces and accelerators by multiple CPU cores, and the
> accelerators themselves.
> 
> This patchset introduces the following:
> 1. DPAA Bus (drivers/bus/dpaa)
>  The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
>  or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
>  The patches introduce necessary layers to expose the DPAA hardware
>  blocks for interfacing with RTE framework.

I guess these are the same blocks as for DPAA2?
They are in drivers/bus/fslmc/
Why introduce yet another bus driver?
The fslmc one was supposed to cover any Freescale (NXP (Qualcomm)) SoC.

> 2. DPAA Mempool (drivers/mempool/dpaa)
>  BMan, or Buffer Manager, block of DPAA features a hardware offloaded
>  mempool. These patches add support for a driver to manage the BMan
>  block. This driver allows for mempool creation, deletion, buffer
>  acquire and release, as per the RTE APIs.
> 
> 3. DPAA PMD (drivers/net/dpaa)
>  The Poll Mode Driver for DPAA NIC Interfaces.
> 
> Patch Layout
> ============
> 
> 01: Add DPAA SoC build configuration
> 02~16: Add DPAA Bus support and features, incrementally
> 17: Add Documentation
> 18~21: Add DPAA Mempool support
> 22~41: Add PMD and its various features, incrementally

It is a very long series introducing 3 different subsystems.
I think everybody was scared of reviewing it.
Why did you not split it?

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-21 22:10       ` Thomas Monjalon
@ 2017-09-22  6:25         ` Shreyansh Jain
  2017-09-22  6:33           ` Thomas Monjalon
  2017-09-22 13:06         ` Shreyansh Jain
  1 sibling, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-22  6:25 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, hemant.agrawal

Hi Thomas,

Thanks for comments. I will reply soon to those on v3 as well.

On Friday 22 September 2017 03:40 AM, Thomas Monjalon wrote:
> 09/09/2017 13:20, Shreyansh Jain:
>> DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
>> components designed for high-speed network packet processing. This
>> architecture provides the infrastructure to support simplified sharing of
>> networking interfaces and accelerators by multiple CPU cores, and the
>> accelerators themselves.
>>
>> This patchset introduces the following:
>> 1. DPAA Bus (drivers/bus/dpaa)
>>   The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
>>   or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
>>   The patches introduce necessary layers to expose the DPAA hardware
>>   blocks for interfacing with RTE framework.
> 
> I guess these are the same blocks as for DPAA2?
> They are in drivers/bus/fslmc/
> Why introduce yet another bus driver?
> The fslmc one was supposed to cover any Freescale (NXP (Qualcomm)) SoC.
> 
>> 2. DPAA Mempool (drivers/mempool/dpaa)
>>   BMan, or Buffer Manager, block of DPAA features a hardware offloaded
>>   mempool. These patches add support for a driver to manage the BMan
>>   block. This driver allows for mempool creation, deletion, buffer
>>   acquire and release, as per the RTE APIs.
>>
>> 3. DPAA PMD (drivers/net/dpaa)
>>   The Poll Mode Driver for DPAA NIC Interfaces.
>>
>> Patch Layout
>> ============
>>
>> 01: Add DPAA SoC build configuration
>> 02~16: Add DPAA Bus support and features, incrementally
>> 17: Add Documentation
>> 18~21: Add DPAA Mempool support
>> 22~41: Add PMD and its various features, incrementally
> 
> It is a very long series introducing 3 different subsystems.
> > I think everybody was scared of reviewing it.

Well, then Ferruh is quite a brave man - I got loads of comments from 
him. :D

> Why did you not split it?

All the components are serially ordered. So, whether I split it into 
three separate series or clearly separate them within a single series - 
it would be the same thing, wouldn't it?

In fact, having three series, one dependent on the other, looks more 
confusing to me. Personally, it would be difficult for me to review such 
a set of patch series.

Still, if you and Ferruh think this split helps, I am OK. But I don't 
want it to look as if a new request has been made which cannot be 
completed within the 17.11 window.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-22  6:25         ` Shreyansh Jain
@ 2017-09-22  6:33           ` Thomas Monjalon
  0 siblings, 0 replies; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-22  6:33 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

22/09/2017 08:25, Shreyansh Jain:
> > It is a very long series introducing 3 different subsystems.
> > I think everybody was scared of reviewing it.
> 
> Well, then Ferruh is quite a brave man - I got loads of comments from 
> him. :D

He is :)

> > Why did you not split it?
> 
> All the components are serially ordered. So, whether I split it into 
> three separate series or clearly separate them within a single series - 
> it would be the same thing, wouldn't it?
> 
> In fact, having three series, one dependent on the other, looks more 
> confusing to me. Personally, it would be difficult for me to review such 
> a set of patch series.
> 
> Still, if you and Ferruh think this split helps, I am OK. But I don't 
> want it to look as if a new request has been made which cannot be 
> completed within the 17.11 window.

I think it is too late now to split it.
Let's continue with this big series.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v3 20/40] drivers: enable compilation of DPAA Mempool driver
  2017-09-21 21:55       ` Thomas Monjalon
@ 2017-09-22  6:35         ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-22  6:35 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, hemant.agrawal

On Friday 22 September 2017 03:25 AM, Thomas Monjalon wrote:
> 23/08/2017 16:11, Shreyansh Jain:
>> +CONFIG_RTE_LIBRTE_DPAA_MEMPOOL_DEBUG=n
> 
> Please could you try to remove this kind of option?
> We are going to remove them from DPDK.

OK. I will revisit and remove those which do not impact performance.

> 
> For control path, no need of removing logs at compilation time.
> For data path, compilation of logs is controlled by CONFIG_RTE_LOG_DP_LEVEL.
> For enabling/disabling logs at runtime in the component,
> there is the dynamic log types.
> 

I had already introduced dynamic log types for DPAA1. There weren't many 
examples from which to identify a 'best practice'. I preferred this 
toggle so that I could be certain when debugging is truly disabled.

^ permalink raw reply	[flat|nested] 367+ messages in thread
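
As background for the exchange above: with the dynamic log types
introduced around DPDK 17.05, a driver registers a named type at startup
and its level can be changed at runtime, while data-path logs are
compiled out below CONFIG_RTE_LOG_DP_LEVEL. A minimal sketch of the
control-path side, assuming the 17.05+ rte_log API - the type name and
macro here are illustrative, not the actual DPAA code:

  #include <rte_log.h>

  static int dpaa_logtype_example = -1;

  /* Runs at startup; registers a log type that can be toggled at runtime. */
  __attribute__((constructor))
  static void
  dpaa_example_log_init(void)
  {
          dpaa_logtype_example = rte_log_register("pmd.dpaa.example");
          if (dpaa_logtype_example >= 0)
                  rte_log_set_level(dpaa_logtype_example, RTE_LOG_NOTICE);
  }

  /* Control-path logging then needs no build-time option at all. */
  #define DPAA_EXAMPLE_LOG(level, fmt, args...) \
          rte_log(RTE_LOG_ ## level, dpaa_logtype_example, \
                  "dpaa_example: " fmt "\n", ##args)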

* Re: [PATCH v3 21/40] maintainers: claim ownership of DPAA Mempool driver
  2017-09-21 21:56       ` Thomas Monjalon
@ 2017-09-22  6:47         ` Shreyansh Jain
  2017-09-22  6:53           ` Thomas Monjalon
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-22  6:47 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, hemant.agrawal

On Friday 22 September 2017 03:26 AM, Thomas Monjalon wrote:
> 23/08/2017 16:11, Shreyansh Jain:
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -409,6 +409,7 @@ NXP dpaa
>>   M: Hemant Agrawal <hemant.agrawal@nxp.com>
>>   M: Shreyansh Jain <shreyansh.jain@nxp.com>
>>   F: drivers/bus/dpaa/
>> +F: drivers/mempool/dpaa/
>>   F: doc/guides/nics/dpaa.rst
>>   F: doc/guides/nics/features/dpaa.ini
> 
> This kind of patch can be squashed in the first patch introducing
> this new directory.
> 

Then the patch script (devtools/check-git-log.sh) reports an error - I 
think. That is the primary reason I split them across multiple patches.
Are you sure that doesn't matter?

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v3 26/40] net/dpaa: add support for MTU update
  2017-09-21 22:07       ` Thomas Monjalon
@ 2017-09-22  6:48         ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-22  6:48 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, hemant.agrawal

On Friday 22 September 2017 03:37 AM, Thomas Monjalon wrote:
> 23/08/2017 16:11, Shreyansh Jain:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>> ---
>>   doc/guides/nics/features/dpaa.ini |  1 +
>>   drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
> 
> It is very good to update features matrix and code in the same patch.
> History tracking will be easy, thanks.
> 
> About the title, every patch starts with "add support for".
> It is wasting 7 characters :) You could just say "support".
> Example:
> 	net/dpaa: support MTU update
> 

Ok. I will do that.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v3 24/40] config: enable NXP DPAA PMD compilation
  2017-09-21 22:03       ` Thomas Monjalon
@ 2017-09-22  6:51         ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-22  6:51 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, hemant.agrawal

On Friday 22 September 2017 03:33 AM, Thomas Monjalon wrote:
> 23/08/2017 16:11, Shreyansh Jain:
>> --- a/config/defconfig_arm64-dpaa-linuxapp-gcc
>> +++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
>> +#
>> +# Compile Environment Abstraction Layer
>> +#
>> +CONFIG_RTE_MAX_LCORE=4
>> +CONFIG_RTE_MAX_NUMA_NODES=1
>> +CONFIG_RTE_CACHE_LINE_SIZE=64
>> +CONFIG_RTE_PKTMBUF_HEADROOM=128
> 
> This should be part of the SoC introduction.
> 
> The rest of this patch can be squashed with PMD skeleton.

Ok. I will revisit this.

> 
> [...]
>> --- a/mk/rte.app.mk
>> +++ b/mk/rte.app.mk
>> @@ -116,6 +116,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
>>   _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
>>   _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
>>   _LDLIBS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD)      += -lrte_pmd_cxgbe
>> +_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_pmd_dpaa
>>   _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_pmd_dpaa2
>>   _LDLIBS-$(CONFIG_RTE_LIBRTE_E1000_PMD)      += -lrte_pmd_e1000
>>   _LDLIBS-$(CONFIG_RTE_LIBRTE_ENA_PMD)        += -lrte_pmd_ena
>> @@ -182,6 +183,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_bus_fslmc
>>   _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_mempool_dpaa2
>>   endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
>>   
>> +ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
>> +_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
>> +endif
> 
> It does not make sense. Please read it carefully.
> The same config condition is used twice.
> 
> And the dependency should be on the same line as the PMD link above.
> 
> The same mistake was done for DPAA2. Please fix it separately.
> 
> 

This definitely is fishy. I am not sure why I did this - but it seems 
that I was trying to base this linking on whether BUS_DPAA was 
available. Apologies. I will fix this.

^ permalink raw reply	[flat|nested] 367+ messages in thread
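
For reference, the redundancy being pointed out: only the _LDLIBS-y list
is ever linked, so the "_LDLIBS-$(CONFIG_X) +=" form already gates the
addition on CONFIG_X, and wrapping it in "ifeq ($(CONFIG_X),y)" tests the
same option twice. A sketch of the simplified form Thomas seems to be
asking for - whether more DPAA libraries belong on this line is not
settled in this excerpt:

  # No ifeq wrapper needed; the -y/-n suffix already gates the addition,
  # and the bus dependency sits on the same line as the PMD link:
  _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_pmd_dpaa -lrte_bus_dpaa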

* Re: [PATCH v3 21/40] maintainers: claim ownership of DPAA Mempool driver
  2017-09-22  6:47         ` Shreyansh Jain
@ 2017-09-22  6:53           ` Thomas Monjalon
  2017-09-22  7:37             ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-22  6:53 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

22/09/2017 08:47, Shreyansh Jain:
> On Friday 22 September 2017 03:26 AM, Thomas Monjalon wrote:
> > 23/08/2017 16:11, Shreyansh Jain:
> >> --- a/MAINTAINERS
> >> +++ b/MAINTAINERS
> >> @@ -409,6 +409,7 @@ NXP dpaa
> >>   M: Hemant Agrawal <hemant.agrawal@nxp.com>
> >>   M: Shreyansh Jain <shreyansh.jain@nxp.com>
> >>   F: drivers/bus/dpaa/
> >> +F: drivers/mempool/dpaa/
> >>   F: doc/guides/nics/dpaa.rst
> >>   F: doc/guides/nics/features/dpaa.ini
> > 
> > This kind of patch can be squashed in the first patch introducing
> > this new directory.
> > 
> 
> Then the patch script (devtools/check-git-log.sh) reports an error - I 
> think. That is the primary reason I split them across multiple patches.
> Are you sure that doesn't matter?

Which error?

To be clear, I suggest squashing it into patch 19, where
drivers/mempool/dpaa/Makefile is introduced.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v3 21/40] maintainers: claim ownership of DPAA Mempool driver
  2017-09-22  7:37             ` Shreyansh Jain
@ 2017-09-22  7:35               ` Thomas Monjalon
  2017-09-27  8:30                 ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-22  7:35 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

22/09/2017 09:37, Shreyansh Jain:
> On Friday 22 September 2017 12:23 PM, Thomas Monjalon wrote:
> > 22/09/2017 08:47, Shreyansh Jain:
> >> On Friday 22 September 2017 03:26 AM, Thomas Monjalon wrote:
> >>> 23/08/2017 16:11, Shreyansh Jain:
> >>>> --- a/MAINTAINERS
> >>>> +++ b/MAINTAINERS
> >>>> @@ -409,6 +409,7 @@ NXP dpaa
> >>>>    M: Hemant Agrawal <hemant.agrawal@nxp.com>
> >>>>    M: Shreyansh Jain <shreyansh.jain@nxp.com>
> >>>>    F: drivers/bus/dpaa/
> >>>> +F: drivers/mempool/dpaa/
> >>>>    F: doc/guides/nics/dpaa.rst
> >>>>    F: doc/guides/nics/features/dpaa.ini
> >>>
> >>> This kind of patch can be squashed in the first patch introducing
> >>> this new directory.
> >>
> >> Then the patch script (devtools/check-git-log.sh) reports an error - I
> >> think. That is the primary reason I split them across multiple patches.
> >> Are you sure that doesn't matter?
> > 
> > Which error?
> > 
> > To be clear, I suggest squashing it into patch 19, where
> > drivers/mempool/dpaa/Makefile is introduced.
> 
> Yes, I understand that.
> It would report an error that the headline is wrong because I am touching
> different directories - "MAINTAINERS" and "drivers/mempool/*" - with the
> same patch having the headline "mempool/*".

The test you are talking about has this comment:
	# check headline prefix when touching only drivers, e.g. net/<driver name>
If you hit a warning, there is a bug.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v3 21/40] maintainers: claim ownership of DPAA Mempool driver
  2017-09-22  6:53           ` Thomas Monjalon
@ 2017-09-22  7:37             ` Shreyansh Jain
  2017-09-22  7:35               ` Thomas Monjalon
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-22  7:37 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, hemant.agrawal

On Friday 22 September 2017 12:23 PM, Thomas Monjalon wrote:
> 22/09/2017 08:47, Shreyansh Jain:
>> On Friday 22 September 2017 03:26 AM, Thomas Monjalon wrote:
>>> 23/08/2017 16:11, Shreyansh Jain:
>>>> --- a/MAINTAINERS
>>>> +++ b/MAINTAINERS
>>>> @@ -409,6 +409,7 @@ NXP dpaa
>>>>    M: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>    M: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>>    F: drivers/bus/dpaa/
>>>> +F: drivers/mempool/dpaa/
>>>>    F: doc/guides/nics/dpaa.rst
>>>>    F: doc/guides/nics/features/dpaa.ini
>>>
>>> This kind of patch can be squashed in the first patch introducing
>>> this new directory.
>>>
>>
>> Then the patch script (devtools/check-git-log.sh) reports an error - I
>> think. That is the primary reason I split them across multiple patches.
>> Are you sure that doesn't matter?
> 
> Which error?
> 
> To be clear, I suggest squashing it into patch 19, where
> drivers/mempool/dpaa/Makefile is introduced.
> 

Yes, I understand that.
It would report an error that the headline is wrong because I am touching 
different directories - "MAINTAINERS" and "drivers/mempool/*" - with the 
same patch having the headline "mempool/*".

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-21 22:10       ` Thomas Monjalon
  2017-09-22  6:25         ` Shreyansh Jain
@ 2017-09-22 13:06         ` Shreyansh Jain
  2017-09-22 13:13           ` Thomas Monjalon
  1 sibling, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-22 13:06 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, hemant.agrawal

On Friday 22 September 2017 03:40 AM, Thomas Monjalon wrote:
> 09/09/2017 13:20, Shreyansh Jain:
>> DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
>> components designed for high-speed network packet processing. This
>> architecture provides the infrastructure to support simplified sharing of
>> networking interfaces and accelerators by multiple CPU cores, and the
>> accelerators themselves.
>>
>> This patchset introduces the following:
>> 1. DPAA Bus (drivers/bus/dpaa)
>>   The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
>>   or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
>>   The patches introduce necessary layers to expose the DPAA hardware
>>   blocks for interfacing with RTE framework.
> 
> I guess these are the same blocks as for DPAA2?
> They are in drivers/bus/fslmc/
> Why introduce yet another bus driver?
> The fslmc one was supposed to cover any Freescale (NXP (Qualcomm)) SoC.

I forgot to reply to this in my previous email:

No, fslmc is not compatible with DPAA. They are completely different 
architectures.
I am not sure why you have the notion "fslmc one was supposed to cover 
any Freescale (NXP (Qualcomm)) SoC". That is not correct - FSLMC was 
always for supporting DPAA2, which is based on VFIO. DPAA is closer 
to a platform layout.

And I don't think we should have a single "bus/fslmc" just so that it can 
encompass all NXP SoCs. I am assuming you didn't mean this :P.

> 
>> 2. DPAA Mempool (drivers/mempool/dpaa)
>>   BMan, or Buffer Manager, block of DPAA features a hardware offloaded
>>   mempool. These patches add support for a driver to manage the BMan
>>   block. This driver allows for mempool creation, deletion, buffer
>>   acquire and release, as per the RTE APIs.
>>
>> 3. DPAA PMD (drivers/net/dpaa)
>>   The Poll Mode Driver for DPAA NIC Interfaces.
>>
>> Patch Layout
>> ============
>>
>> 01: Add DPAA SoC build configuration
>> 02~16: Add DPAA Bus support and features, incrementally
>> 17: Add Documentation
>> 18~21: Add DPAA Mempool support
>> 22~41: Add PMD and its various features, incrementally
> 
> It is a very long series introducing 3 different subsystems.
> I think everybody was scared of reviewing it.
> Why did you not split it?
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-22 13:06         ` Shreyansh Jain
@ 2017-09-22 13:13           ` Thomas Monjalon
  2017-09-22 14:00             ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-22 13:13 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, hemant.agrawal

22/09/2017 15:06, Shreyansh Jain:
> On Friday 22 September 2017 03:40 AM, Thomas Monjalon wrote:
> > 09/09/2017 13:20, Shreyansh Jain:
> >> DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
> >> components designed for high-speed network packet processing. This
> >> architecture provides the infrastructure to support simplified sharing of
> >> networking interfaces and accelerators by multiple CPU cores, and the
> >> accelerators themselves.
> >>
> >> This patchset introduces the following:
> >> 1. DPAA Bus (drivers/bus/dpaa)
> >>   The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
> >>   or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
> >>   The patches introduce necessary layers to expose the DPAA hardware
> >>   blocks for interfacing with RTE framework.
> > 
> > I guess these are the same blocks as for DPAA2?
> > They are in drivers/bus/fslmc/
> > Why introduce yet another bus driver?
> > The fslmc one was supposed to cover any Freescale (NXP (Qualcomm)) SoC.
> 
> I forgot to reply to this in my previous email:
> 
> No, fslmc is not compatible with DPAA. They are completely different 
> architectures.
> I am not sure why you have the notion "fslmc one was supposed to cover 
> any Freescale (NXP (Qualcomm)) SoC". That is not correct - FSLMC was 
> always for supporting DPAA2, which is based on VFIO. DPAA is closer 
> to a platform layout.
> 
> And I don't think we should have a single "bus/fslmc" just so that it can 
> encompass all NXP SoCs. I am assuming you didn't mean this :P.

At the beginning of the fslmc work, I had understood that every NXP SoC
connected components on the same principle, which we could call the
"Freescale bus".
Then you came with this bus named bus/fslmc, not bus/dpaa2.
Now I am confused. What is the exact scope of fslmc? Is it just DPAA2?

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-22 13:13           ` Thomas Monjalon
@ 2017-09-22 14:00             ` Shreyansh Jain
  2017-09-22 14:19               ` Thomas Monjalon
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-22 14:00 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, Hemant Agrawal

Hello Thomas,

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, September 22, 2017 6:43 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>
> Cc: dev@dpdk.org; ferruh.yigit@intel.com; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Subject: Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
> 
> 22/09/2017 15:06, Shreyansh Jain:
> > On Friday 22 September 2017 03:40 AM, Thomas Monjalon wrote:
> > > 09/09/2017 13:20, Shreyansh Jain:
> > >> DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
> > >> components designed for high-speed network packet processing. This
> > >> architecture provides the infrastructure to support simplified sharing
> of
> > >> networking interfaces and accelerators by multiple CPU cores, and the
> > >> accelerators themselves.
> > >>
> > >> This patchset introduces the following:
> > >> 1. DPAA Bus (drivers/bus/dpaa)
> > >>   The core of DPAA bus is implemented using 3 main hardware blocks:
> QMan,
> > >>   or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
> > >>   The patches introduce necessary layers to expose the DPAA hardware
> > >>   blocks for interfacing with RTE framework.
> > >
> > > I guess these are the same blocks as for DPAA2?
> > > They are in drivers/bus/fslmc/
> > > Why introduce yet another bus driver?
> > > The fslmc one was supposed to cover any Freescale (NXP (Qualcomm)) SoC.
> >
> > I forgot to reply to this in my previous email:
> >
> > No, fslmc is not compatible with DPAA. They are completely different
> > architectures.
> > I am not sure why you have the notion "fslmc one was supposed to cover
> > any Freescale (NXP (Qualcomm)) SoC". That is not correct - FSLMC was
> > always for supporting DPAA2, which is based on VFIO. DPAA is closer
> > to a platform layout.
> >
> > And I don't think we should have a single "bus/fslmc" just so that it can
> > encompass all NXP SoCs. I am assuming you didn't mean this :P.
> 
> At the beginning of the fslmc work, I had understood that every NXP SoC
> connected components on the same principle, which we could call the
> "Freescale bus".
> Then you came with this bus named bus/fslmc, not bus/dpaa2.
> Now I am confused. What is the exact scope of fslmc? Is it just DPAA2?

My memory is poor. I will have to look through the old emails to see what happened - but I recall there was a discussion in the initial phases about the naming. "fslmc" came out as the name because that is the real name of the DPAA2 bus. There was initially some confusion over whether the name of the bus in the Linux kernel should match or not - but we realized that the bus is *not* the device, and the device name is "dpaa2".

As for whether fslmc would cover multiple SoCs - that is still true. There are multiple SoCs within the DPAA2 umbrella - the LS20XX and LS108X series and some more - all of which use the FSLMC bus (DPAA2 architecture, on the FSLMC bus, having 'dpaa2' devices).

There is another architecture, an old one, which is still popular. This is a platform-type bus which is aptly named 'dpaa' - and here the confusion between bus name and device name doesn't arise. (DPAA bus, using the DPAA architecture, exposing 'dpaa' devices).

The exact scope of FSLMC is just DPAA2-architecture-based SoCs. There are many here, with new ones coming up.
The exact scope of the DPAA bus is just DPAA-architecture-based SoCs. There are many here.

Does this clear your doubt to some extent?

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-22 14:00             ` Shreyansh Jain
@ 2017-09-22 14:19               ` Thomas Monjalon
  2017-09-23 10:39                 ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Thomas Monjalon @ 2017-09-22 14:19 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, ferruh.yigit, Hemant Agrawal

22/09/2017 16:00, Shreyansh Jain:
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > At the beginning of the fslmc work, I had understood that every NXP SoC
> > connected components on the same principle, which we could call the
> > "Freescale bus".
> > Then you came with this bus named bus/fslmc, not bus/dpaa2.
> > Now I am confused. What is the exact scope of fslmc? Is it just DPAA2?
> 
> My memory is poor. I will have to look through the old emails to see what happened - but I recall there was a discussion in the initial phases about the naming. "fslmc" came out as the name because that is the real name of the DPAA2 bus. There was initially some confusion over whether the name of the bus in the Linux kernel should match or not - but we realized that the bus is *not* the device, and the device name is "dpaa2".
> 
> As for whether fslmc would cover multiple SoCs - that is still true. There are multiple SoCs within the DPAA2 umbrella - the LS20XX and LS108X series and some more - all of which use the FSLMC bus (DPAA2 architecture, on the FSLMC bus, having 'dpaa2' devices).
> 
> There is another architecture, an old one, which is still popular. This is a platform-type bus which is aptly named 'dpaa' - and here the confusion between bus name and device name doesn't arise. (DPAA bus, using the DPAA architecture, exposing 'dpaa' devices).
> 
> The exact scope of FSLMC is just DPAA2-architecture-based SoCs. There are many here, with new ones coming up.
> The exact scope of the DPAA bus is just DPAA-architecture-based SoCs. There are many here.
> 
> Does this clear your doubt to some extent?

Yes it is a lot clearer! Thanks

Now that I understand better, I think the fslmc bus should have been named
the dpaa2 bus. Is it too late?

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-22 14:19               ` Thomas Monjalon
@ 2017-09-23 10:39                 ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-23 10:39 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, Hemant Agrawal

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, September 22, 2017 7:49 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>
> Cc: dev@dpdk.org; ferruh.yigit@intel.com; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Subject: Re: [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD
> 
> 22/09/2017 16:00, Shreyansh Jain:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > At the beginning of the fslmc work, I had understood that every NXP SoC
> > > connected components on the same principle, which we could call the
> > > "Freescale bus".
> > > Then you came with this bus named bus/fslmc, not bus/dpaa2.
> > > Now I am confused. What is the exact scope of fslmc? Is it just DPAA2?
> >
> > My memory is poor. I will have to look through the old emails to see what
> happened - but I recall there was a discussion in the initial phases about
> the naming. "fslmc" came out as the name because that is the real name of
> the DPAA2 bus. There was initially some confusion over whether the name of
> the bus in the Linux kernel should match or not - but we realized that the
> bus is *not* the device, and the device name is "dpaa2".
> >
> > As for whether fslmc would cover multiple SoCs - that is still true. There
> are multiple SoCs within the DPAA2 umbrella - the LS20XX and LS108X series
> and some more - all of which use the FSLMC bus (DPAA2 architecture, on the
> FSLMC bus, having 'dpaa2' devices).
> >
> > There is another architecture, an old one, which is still popular. This is
> a platform-type bus which is aptly named 'dpaa' - and here the confusion
> between bus name and device name doesn't arise. (DPAA bus, using the DPAA
> architecture, exposing 'dpaa' devices).
> >
> > The exact scope of FSLMC is just DPAA2-architecture-based SoCs. There are
> many here, with new ones coming up.
> > The exact scope of the DPAA bus is just DPAA-architecture-based SoCs. There
> are many here.
> >
> > Does this clear your doubt to some extent?
> 
> Yes it is a lot clearer! Thanks
> 
> Now that I understand better, I think the fslmc bus should have been named
> the dpaa2 bus. Is it too late?

:)

I share your thought that drivers/bus/dpaa2, drivers/mempool/dpaa2, drivers/net/dpaa2, drivers/crypto/dpaa2_sec would have been more uniform. But again, that would have misled a lot of DPAA2 users into thinking the bus name is 'dpaa2', which is not the case.
And anyway, the changes required in the code to reflect this name change are not worthwhile.
I would prefer to go with it as is.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-09-19 13:14           ` Shreyansh Jain
  2017-09-19 13:33             ` Ferruh Yigit
@ 2017-09-25 14:32             ` Shreyansh Jain
  2017-09-25 15:11               ` Ferruh Yigit
  1 sibling, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-25 14:32 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Tuesday 19 September 2017 06:44 PM, Shreyansh Jain wrote:
> Hello Ferruh,
> 
> On Monday 18 September 2017 08:17 PM, Ferruh Yigit wrote:
>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>
>> <...>
>>
>>> diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map 
>>> b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>> new file mode 100644
>>> index 0000000..d97a009
>>> --- /dev/null
>>> +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>> @@ -0,0 +1,7 @@
>>> +DPDK_17.11 {
>>> +    global:
>>> +
>>> +    rte_dpaa_driver_register;
>>> +    rte_dpaa_driver_unregister;
>>
>> "local *;" ?
> 
> Agree. I will change this.
> Currently rte_dpaa_driver_* functions are being used locally within 
> bus/dpaa.
> 

Even though I agreed earlier that I would change this (append 'local: *;' 
to the file), I will probably have to skip it.
Further in the patch series, some symbols are added which are required 
by the mempool and net drivers (and crypto, in future). Shared 
compilation fails for them if I add 'local: *;' here.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-09-25 14:32             ` Shreyansh Jain
@ 2017-09-25 15:11               ` Ferruh Yigit
  2017-09-26 11:26                 ` Shreyansh Jain
  2017-09-27  9:30                 ` Shreyansh Jain
  0 siblings, 2 replies; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-25 15:11 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, hemant.agrawal

On 9/25/2017 3:32 PM, Shreyansh Jain wrote:
> On Tuesday 19 September 2017 06:44 PM, Shreyansh Jain wrote:
>> Hello Ferruh,
>>
>> On Monday 18 September 2017 08:17 PM, Ferruh Yigit wrote:
>>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>
>>> <...>
>>>
>>>> diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map 
>>>> b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>>> new file mode 100644
>>>> index 0000000..d97a009
>>>> --- /dev/null
>>>> +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>>> @@ -0,0 +1,7 @@
>>>> +DPDK_17.11 {
>>>> +    global:
>>>> +
>>>> +    rte_dpaa_driver_register;
>>>> +    rte_dpaa_driver_unregister;
>>>
>>> "local *;" ?
>>
>> Agree. I will change this.
>> Currently rte_dpaa_driver_* functions are being used locally within 
>> bus/dpaa.
>>
> 
> Even though I agreed earlier that I would change this (append 'local: *;' 
> to the file), I will probably have to skip it.
> Further in the patch series, some symbols are added which are required 
> by the mempool and net drivers (and crypto, in future). Shared 
> compilation fails for them if I add 'local: *;' here.

It should be OK if this is the last item in the first group.

Technically I believe it would be OK to remove that line, but I am not quite sure.

Let's be consistent with existing usage and keep it; there are many sample
map files.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-09-25 15:11               ` Ferruh Yigit
@ 2017-09-26 11:26                 ` Shreyansh Jain
  2017-09-27  9:30                 ` Shreyansh Jain
  1 sibling, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-26 11:26 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 25 September 2017 08:41 PM, Ferruh Yigit wrote:
> On 9/25/2017 3:32 PM, Shreyansh Jain wrote:
>> On Tuesday 19 September 2017 06:44 PM, Shreyansh Jain wrote:
>>> Hello Ferruh,
>>>
>>> On Monday 18 September 2017 08:17 PM, Ferruh Yigit wrote:
>>>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>
>>>> <...>
>>>>
>>>>> diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>>>> b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>>>> new file mode 100644
>>>>> index 0000000..d97a009
>>>>> --- /dev/null
>>>>> +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>>>> @@ -0,0 +1,7 @@
>>>>> +DPDK_17.11 {
>>>>> +    global:
>>>>> +
>>>>> +    rte_dpaa_driver_register;
>>>>> +    rte_dpaa_driver_unregister;
>>>>
>>>> "local *;" ?
>>>
>>> Agree. I will change this.
>>> Currently rte_dpaa_driver_* functions are being used locally within
>>> bus/dpaa.
>>>
>>
>> Even though I agreed earlier that I would change this (append 'local: *;'
>> to the file), I will probably have to skip it.
>> Further in the patch series, some symbols are added which are required
>> by the mempool and net drivers (and crypto, in future). Shared
>> compilation fails for them if I add 'local: *;' here.
> 
> It should be OK if this is the last item in the first group.
> 
> Technically I believe it would be OK to remove that line, but I am not quite sure.
> 
> Let's be consistent with existing usage and keep it; there are many sample
> map files.

I understand your point. Let me try - it is possible that I am doing 
something incorrect.
Thanks.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros
  2017-09-19 13:57               ` Shreyansh Jain
@ 2017-09-26 12:43                 ` Shreyansh Jain
  2017-09-27 23:09                   ` Ferruh Yigit
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-26 12:43 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Tuesday 19 September 2017 07:27 PM, Shreyansh Jain wrote:
> On Tuesday 19 September 2017 07:10 PM, Ferruh Yigit wrote:
>> On 9/19/2017 2:18 PM, Shreyansh Jain wrote:
>>> On Monday 18 September 2017 08:19 PM, Ferruh Yigit wrote:
>>>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>
>>>>> Linked list, bit operations and compatibility macros.
>>>>>
>>>>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>

[...]

>>>>> + */
>>>>
>>
>> <...>
>>
>>>>> +
>>>>> +#ifndef __DPAA_LIST_H
>>>>> +#define __DPAA_LIST_H
>>>>> +
>>>>> +/****************/
>>>>> +/* Linked-lists */
>>>>> +/****************/
>>>>
>>>> Do we need to maintain a linked list implementation, why no just use
>>>> sys/queue.h ones as done many places in DPDK?
>>>>
>>>>> +
>>>>> +struct list_head {
>>>>> +    struct list_head *prev;
>>>>> +    struct list_head *next;
>>>>> +};
>>>>> +
>>>>
>>>> <...>
>>>>
>>>
>>> The underlying DPAA infrastructure code is shared between kernel and
>>> userspace. That is why changing the internal headers (for example,
>>> using RTE_* queues) is something I want to avoid until absolutely
>>> necessary. The outer layers (drivers/*/dpaa/<here>) are something I am
>>> trying to keep as close as possible to DPDK.
>>
>> I understand you want to escape from maintaining a copy of common files
>> for DPDK; this has been done by many drivers by not changing "base"
>> files, and it makes sense.
>>
>> But in this case the file is "dpaa_list.h", and as far as I can see all
>> it has is a linked list implementation, which looked easy to exclude -
>> but if not, you can ignore the comment.
> 
> Got your point. I will respin and see how much is the impact.
> Thanks for inputs.

I tried to work around the dpaa_list.h use in the DPAA code - but the 
changes are subtle and large in number, though restricted only to the 
base framework.
I would prefer to skip this for a while as the driver is stable now. I 
would probably do this change in an incremental manner to keep it traceable.

Ferruh, Is that OK with you?

^ permalink raw reply	[flat|nested] 367+ messages in thread
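
For readers weighing the alternative discussed above: sys/queue.h
provides the doubly linked lists DPDK normally uses, so the
"struct list_head" shown in the patch maps onto a TAILQ roughly as
follows (an illustrative sketch, not the DPAA code; the names are made
up):

  #include <sys/queue.h>

  struct example_node {
          int payload;
          TAILQ_ENTRY(example_node) entries;   /* plays the role of list_head */
  };

  /* Head type plus static initializer, analogous to a list_head anchor. */
  TAILQ_HEAD(example_list, example_node);
  static struct example_list nodes = TAILQ_HEAD_INITIALIZER(nodes);

  static void
  example_usage(struct example_node *n)
  {
          struct example_node *it;

          TAILQ_INSERT_TAIL(&nodes, n, entries);   /* ~ list_add_tail() */
          TAILQ_FOREACH(it, &nodes, entries) {     /* ~ list_for_each_entry() */
                  /* ... visit it ... */
          }
          TAILQ_REMOVE(&nodes, n, entries);        /* ~ list_del() */
  }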

* Re: [PATCH v4 41/41] net/dpaa: support for extended statistics
  2017-09-21 13:26           ` Shreyansh Jain
@ 2017-09-27  8:26             ` Shreyansh Jain
  2017-09-27 23:37               ` Ferruh Yigit
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-27  8:26 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Thursday 21 September 2017 06:56 PM, Shreyansh Jain wrote:
> On Monday 18 September 2017 08:27 PM, Ferruh Yigit wrote:
>> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>
>> <...>
>>
>>> +static int
>>> +dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat 
>>> *xstats,
>>> +            unsigned int n)
>>> +{
>>> +    struct dpaa_if *dpaa_intf = dev->data->dev_private;
>>> +    unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
>>> +    uint64_t values[sizeof(struct dpaa_if_stats) / 8];
>>> +
>>> +    if (xstats == NULL)
>>> +        return 0;
>>
>> This is a little not clear from API definition, but I guess when xstats
>> is NULL, it should return num of available stats, "num" for this case. I
>> guess there are PMDs implements both, can you please double check?
> 
> Ok. I will check again.

I checked a number of other ethdev implementations. Some, like 
i40e/e1000, also return 0 when xstats is NULL. Others, like bnx2x and 
qede, don't handle this situation.
All return "num" when the passed argument is larger than the number of 
elements in the table.

Though, I think the logic that get_xstats should return its size (num) 
when passed NULL looks good to me.
How does one standardize such semantics for existing APIs?

(I can add this info to the API document that you created - but only 
once we know if others will agree to change)

> 
>>
>>> +
>>> +    if (n < num)
>>> +        return num;
>>> +
>>> +    fman_if_stats_get_all(dpaa_intf->fif, values,
>>> +                  sizeof(struct dpaa_if_stats) / 8);
>>> +
>>> +    for (i = 0; i < num; i++) {
>>> +        xstats[i].id = i;
>>> +        xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
>>> +    }
>>> +    return i;
>>> +}
>>
>> <...>
>>
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v3 21/40] maintainers: claim ownership of DPAA Mempool driver
  2017-09-22  7:35               ` Thomas Monjalon
@ 2017-09-27  8:30                 ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-27  8:30 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, hemant.agrawal

On Friday 22 September 2017 01:05 PM, Thomas Monjalon wrote:
> 22/09/2017 09:37, Shreyansh Jain:
>> On Friday 22 September 2017 12:23 PM, Thomas Monjalon wrote:
>>> 22/09/2017 08:47, Shreyansh Jain:
>>>> On Friday 22 September 2017 03:26 AM, Thomas Monjalon wrote:
>>>>> 23/08/2017 16:11, Shreyansh Jain:
>>>>>> --- a/MAINTAINERS
>>>>>> +++ b/MAINTAINERS
>>>>>> @@ -409,6 +409,7 @@ NXP dpaa
>>>>>>     M: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>>     M: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>>>>     F: drivers/bus/dpaa/
>>>>>> +F: drivers/mempool/dpaa/
>>>>>>     F: doc/guides/nics/dpaa.rst
>>>>>>     F: doc/guides/nics/features/dpaa.ini
>>>>>
>>>>> This kind of patch can be squashed in the first patch introducing
>>>>> this new directory.
>>>>
>>>> Then the patch script (devtools/check-git-log.sh) reports an error - I
>>>> think. That is the primary reason I split them across multiple patches.
>>>> Are you sure that doesn't matter?
>>>
>>> Which error?
>>>
>>> To be clear, I suggest squashing it into patch 19, where
>>> drivers/mempool/dpaa/Makefile is introduced.
>>
>> Yes, I understand that.
>> It would report an error that the headline is wrong because I am touching
>> different directories - "MAINTAINERS" and "drivers/mempool/*" - with the
>> same patch having the headline "mempool/*".
> 
> The test you are talking about has this comment:
> 	# check headline prefix when touching only drivers, e.g. net/<driver name>
> If you hit a warning, there is a bug.

Somehow I had the impression it throws an error in such cases. I changed 
it as suggested and the check-git-log.sh script didn't throw any error - 
I was wrong. Thanks for correcting me.

-
Shreyansh

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-09-25 15:11               ` Ferruh Yigit
  2017-09-26 11:26                 ` Shreyansh Jain
@ 2017-09-27  9:30                 ` Shreyansh Jain
  1 sibling, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-27  9:30 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 25 September 2017 08:41 PM, Ferruh Yigit wrote:
> On 9/25/2017 3:32 PM, Shreyansh Jain wrote:
>> On Tuesday 19 September 2017 06:44 PM, Shreyansh Jain wrote:
>>> Hello Ferruh,
>>>
>>> On Monday 18 September 2017 08:17 PM, Ferruh Yigit wrote:
>>>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>
>>>> <...>
>>>>
>>>>> diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>>>> b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>>>> new file mode 100644
>>>>> index 0000000..d97a009
>>>>> --- /dev/null
>>>>> +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>>>> @@ -0,0 +1,7 @@
>>>>> +DPDK_17.11 {
>>>>> +    global:
>>>>> +
>>>>> +    rte_dpaa_driver_register;
>>>>> +    rte_dpaa_driver_unregister;
>>>>
>>>> "local *;" ?
>>>
>>> Agree. I will change this.
>>> Currently rte_dpaa_driver_* functions are being used locally within
>>> bus/dpaa.
>>>
>>
>> Even though I agreed earlier that I would change this (append 'local: *;'
>> to the file), I will probably have to skip it.
>> Further in the patch series, some symbols are added which are required
>> by the mempool and net drivers (and crypto, in future). Shared
>> compilation fails for them if I add 'local: *;' here.
> 
> It should be OK if this is the last item in the first group.
> 
> Technically I believe it would be OK to remove that line, but I am not quite sure.
> 
> Let's be consistent with existing usage and keep it; there are many sample
> map files.
> 

I had a look at the various map files in the code. There is mixed usage.
Most don't have a 'local' tag in the last block which exposes symbols. 
Some, like octeonx and bnxt, have both.

I am not very sure how this changes the scope of the symbols. So, as of 
now, I have made DPAA have both - global and local - in its 17.11 symbol 
block.

^ permalink raw reply	[flat|nested] 367+ messages in thread
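
The shape being converged on is a standard GNU ld version script; built
from just the two symbols shown above (the committed file may export
more), it would read:

  DPDK_17.11 {
          global:

          rte_dpaa_driver_register;
          rte_dpaa_driver_unregister;

          local: *;
  };

With "local: *;" as the last entry of the block, every symbol not listed
under "global:" is hidden from the shared object - which is also why the
later patches that export bus symbols to the mempool and net drivers must
list each of them explicitly.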

* Re: [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros
  2017-09-26 12:43                 ` Shreyansh Jain
@ 2017-09-27 23:09                   ` Ferruh Yigit
  0 siblings, 0 replies; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-27 23:09 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, hemant.agrawal

On 9/26/2017 1:43 PM, Shreyansh Jain wrote:
> On Tuesday 19 September 2017 07:27 PM, Shreyansh Jain wrote:
>> On Tuesday 19 September 2017 07:10 PM, Ferruh Yigit wrote:
>>> On 9/19/2017 2:18 PM, Shreyansh Jain wrote:
>>>> On Monday 18 September 2017 08:19 PM, Ferruh Yigit wrote:
>>>>> On 9/9/2017 12:20 PM, Shreyansh Jain wrote:
>>>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>>
>>>>>> Linked list, bit operations and compatibility macros.
>>>>>>
>>>>>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>>>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>
> 
> [...]
> 
>>>>>> + */
>>>>>
>>>
>>> <...>
>>>
>>>>>> +
>>>>>> +#ifndef __DPAA_LIST_H
>>>>>> +#define __DPAA_LIST_H
>>>>>> +
>>>>>> +/****************/
>>>>>> +/* Linked-lists */
>>>>>> +/****************/
>>>>>
>>>>> Do we need to maintain a linked list implementation, why no just use
>>>>> sys/queue.h ones as done many places in DPDK?
>>>>>
>>>>>> +
>>>>>> +struct list_head {
>>>>>> +    struct list_head *prev;
>>>>>> +    struct list_head *next;
>>>>>> +};
>>>>>> +
>>>>>
>>>>> <...>
>>>>>
>>>>
>>>> The underlying DPAA infrastructure code is shared between kernel and
>>>> userspace. That is why changing the internal headers (for example,
>>>> using RTE_* queues) is something I want to avoid until absolutely
>>>> necessary. The outer layers (drivers/*/dpaa/<here>) are something I am
>>>> trying to keep as close as possible to DPDK.
>>>
>>> I understand you want to escape from maintaining a copy of common files
>>> for DPDK; this has been done by many drivers by not changing "base"
>>> files, and it makes sense.
>>>
>>> But in this case the file is "dpaa_list.h", and as far as I can see all
>>> it has is a linked list implementation, which looked easy to exclude -
>>> but if not, you can ignore the comment.
>>
>> Got your point. I will respin and see how much is the impact.
>> Thanks for inputs.
> 
> I tried to work around the dpaa_list.h use in the DPAA code - but the 
> changes are subtle and large in number, though restricted only to the 
> base framework.
> I would prefer to skip this for a while as the driver is stable now. I 
> would probably do this change in an incremental manner to keep it traceable.
> 
> Ferruh, Is that OK with you?

That is OK, if it is not easy to escape from it.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 41/41] net/dpaa: support for extended statistics
  2017-09-27  8:26             ` Shreyansh Jain
@ 2017-09-27 23:37               ` Ferruh Yigit
  2017-09-28  2:30                 ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-27 23:37 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, hemant.agrawal

On 9/27/2017 9:26 AM, Shreyansh Jain wrote:
> On Thursday 21 September 2017 06:56 PM, Shreyansh Jain wrote:
>> On Monday 18 September 2017 08:27 PM, Ferruh Yigit wrote:
>>> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>
>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>
>>> <...>
>>>
>>>> +static int
>>>> +dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat 
>>>> *xstats,
>>>> +            unsigned int n)
>>>> +{
>>>> +    struct dpaa_if *dpaa_intf = dev->data->dev_private;
>>>> +    unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
>>>> +    uint64_t values[sizeof(struct dpaa_if_stats) / 8];
>>>> +
>>>> +    if (xstats == NULL)
>>>> +        return 0;
>>>
>>> This is a little not clear from API definition, but I guess when xstats
>>> is NULL, it should return num of available stats, "num" for this case. I
>>> guess there are PMDs implements both, can you please double check?
>>
>> Ok. I will check again.
> 
> I checked a number of other ethdev implementations. Some, like 
> i40e/e1000, also return 0 when xstats is NULL. Others, like bnx2x and 
> qede, don't handle this situation.
> All return "num" when the passed argument is larger than the number of 
> elements in the table.
> 
> Though, I think the logic that get_xstats should return its size (num) 
> when passed NULL looks good to me.
> How does one standardize such semantics for existing APIs?

Thanks for checking, I guess first we should clarify the API and the
expected behavior [1] and later update required PMDs.

So for now I think PMD is OK as it is.


[1]
I double checked the rte_eth_xstats_get(). It does:

If xstats == NULL
	xcount = dev_ops->xstats_get(dev, NULL, 0);
	return count + xcount;

The intention looks to be returning the number of available stats; otherwise
returning "count + 0" would be useless.

So it looks like the expectation from eth_xstats_get_t for that case is to
return the xstats size, but this is not clear and not documented in the API
comment.

> 
> (I can add this info to the API document that you created - but only 
> once we know if others will agree to change)
> 
>>
>>>
>>>> +
>>>> +    if (n < num)
>>>> +        return num;
>>>> +
>>>> +    fman_if_stats_get_all(dpaa_intf->fif, values,
>>>> +                  sizeof(struct dpaa_if_stats) / 8);
>>>> +
>>>> +    for (i = 0; i < num; i++) {
>>>> +        xstats[i].id = i;
>>>> +        xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
>>>> +    }
>>>> +    return i;
>>>> +}
>>>
>>> <...>
>>>
>>
>>
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread
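
A PMD following the convention derived above - report the table size for
a NULL or too-small request, otherwise fill the table - would look
roughly like this. This is a sketch against the 17.11-era
eth_xstats_get_t prototype; example_xstats_strings and
example_read_counter are hypothetical placeholders, not real DPDK
symbols:

  static int
  example_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
                     unsigned int n)
  {
          unsigned int i, num = RTE_DIM(example_xstats_strings);

          /* Sizing query: return the table size instead of 0. */
          if (xstats == NULL || n < num)
                  return num;

          for (i = 0; i < num; i++) {
                  xstats[i].id = i;
                  xstats[i].value = example_read_counter(dev, i);
          }
          return num;
  }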

* Re: [PATCH v4 41/41] net/dpaa: support for extended statistics
  2017-09-27 23:37               ` Ferruh Yigit
@ 2017-09-28  2:30                 ` Shreyansh Jain
  2017-09-28  2:52                   ` Shreyansh Jain
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28  2:30 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Hemant Agrawal

Hi Ferruh,

> -----Original Message-----
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Thursday, September 28, 2017 5:07 AM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>
> Cc: dev@dpdk.org; Hemant Agrawal <hemant.agrawal@nxp.com>
> Subject: Re: [PATCH v4 41/41] net/dpaa: support for extended statistics
> 
> On 9/27/2017 9:26 AM, Shreyansh Jain wrote:
> > On Thursday 21 September 2017 06:56 PM, Shreyansh Jain wrote:
> >> On Monday 18 September 2017 08:27 PM, Ferruh Yigit wrote:
> >>> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> >>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
> >>>>
> >>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> >>>
> >>> <...>
> >>>
> >>>> +static int
> >>>> +dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat
> >>>> *xstats,
> >>>> +            unsigned int n)
> >>>> +{
> >>>> +    struct dpaa_if *dpaa_intf = dev->data->dev_private;
> >>>> +    unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
> >>>> +    uint64_t values[sizeof(struct dpaa_if_stats) / 8];
> >>>> +
> >>>> +    if (xstats == NULL)
> >>>> +        return 0;
> >>>
> >>> This is a little not clear from API definition, but I guess when xstats
> >>> is NULL, it should return num of available stats, "num" for this case. I
> >>> guess there are PMDs implements both, can you please double check?
> >>
> >> Ok. I will check again.
> >
> > I checked a number of other ethdev implementations. Some, like
> > i40e/e1000, also return 0 when xstats is NULL. Others, like bnx2x and
> > qede, don't handle this situation.
> > All return "num" when the passed argument is larger than the number of
> > elements in the table.
> >
> > Though, I think the logic that get_xstats should return its size (num)
> > when passed NULL looks good to me.
> > How does one standardize such semantics for existing APIs?
> 
> Thanks for checking, I guess first we should clarify the API and the
> expected behavior [1] and later update required PMDs.
> 
> So for now I think PMD is OK as it is.
> 
> 
> [1]
> I double checked the rte_eth_xstats_get(). It does:
> 
> If xstats == NULL
> 	xcount = dev_ops->xstats_get(dev, NULL, 0);
> 	return count + xcount;
> 
> The intention looks to be returning the number of available stats; otherwise
> returning "count + 0" would be useless.
 
Makes sense. I missed this and kept looking only at the implementations.
I will at least fix the dpaa code.

> 
> So it looks like the expectation from eth_xstats_get_t for that case is to
> return the xstats size, but this is not clear and not documented in the API
> comment.
> 
> >
> > (I can add this info to the API document that you created - but only
> > once we know if others will agree to change)
 
Probably this info should be in the Doxygen API documentation [2].

[2] http://dpdk.org/doc/api/rte__ethdev_8h.html#adad5c65f659487db1fefba7d7d902973

> >
> >>
> >>>
> >>>> +
> >>>> +    if (n < num)
> >>>> +        return num;
> >>>> +
> >>>> +    fman_if_stats_get_all(dpaa_intf->fif, values,
> >>>> +                  sizeof(struct dpaa_if_stats) / 8);
> >>>> +
> >>>> +    for (i = 0; i < num; i++) {
> >>>> +        xstats[i].id = i;
> >>>> +        xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
> >>>> +    }
> >>>> +    return i;
> >>>> +}
> >>>
> >>> <...>
> >>>
> >>
> >>
> >


^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 41/41] net/dpaa: support for extended statistics
  2017-09-28  2:30                 ` Shreyansh Jain
@ 2017-09-28  2:52                   ` Shreyansh Jain
  2017-09-28  9:26                     ` Ferruh Yigit
  0 siblings, 1 reply; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28  2:52 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Hemant Agrawal

> -----Original Message-----
> From: Shreyansh Jain
> Sent: Thursday, September 28, 2017 7:59 AM
> To: 'Ferruh Yigit' <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Hemant Agrawal <hemant.agrawal@nxp.com>
> Subject: RE: [PATCH v4 41/41] net/dpaa: support for extended statistics
> 
> Hi Ferruh,
> 
> > -----Original Message-----
> > From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> > Sent: Thursday, September 28, 2017 5:07 AM
> > To: Shreyansh Jain <shreyansh.jain@nxp.com>
> > Cc: dev@dpdk.org; Hemant Agrawal <hemant.agrawal@nxp.com>
> > Subject: Re: [PATCH v4 41/41] net/dpaa: support for extended statistics
> >
> > On 9/27/2017 9:26 AM, Shreyansh Jain wrote:
> > > On Thursday 21 September 2017 06:56 PM, Shreyansh Jain wrote:
> > >> On Monday 18 September 2017 08:27 PM, Ferruh Yigit wrote:
> > >>> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
> > >>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
> > >>>>
> > >>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > >>>
> > >>> <...>
> > >>>
> > >>>> +static int
> > >>>> +dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat
> > >>>> *xstats,
> > >>>> +            unsigned int n)
> > >>>> +{
> > >>>> +    struct dpaa_if *dpaa_intf = dev->data->dev_private;
> > >>>> +    unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
> > >>>> +    uint64_t values[sizeof(struct dpaa_if_stats) / 8];
> > >>>> +
> > >>>> +    if (xstats == NULL)
> > >>>> +        return 0;
> > >>>
> > >>> This is a little not clear from API definition, but I guess when xstats
> > >>> is NULL, it should return num of available stats, "num" for this case.
> I
> > >>> guess there are PMDs implements both, can you please double check?
> > >>
> > >> Ok. I will check again.
> > >
> > > I checked a number of other ethdev implementations. Some, like
> > > i40e/e1000, also return 0 when xstats is NULL. Others, like bnx2x and
> > > qede, don't handle this situation.
> > > All return "num" when the passed argument is larger than the number of
> > > elements in the table.
> > >
> > > Though, I think the logic that get_xstats should return its size (num)
> > > when passed NULL looks good to me.
> > > How does one standardize such semantics for existing APIs?
> >
> > Thanks for checking, I guess first we should clarify the API and the
> > expected behavior [1] and later update required PMDs.
> >
> > So for now I think PMD is OK as it is.
> >
> >
> > [1]
> > I double checked the rte_eth_xstats_get(). It does:
> >
> > If xstats == NULL
> > 	xcount = dev_ops->xstats_get(dev, NULL, 0);
> > 	return count + xcount;
> >
> > The intention looks to be returning the number of available stats; otherwise
> > returning "count + 0" would be useless.
> 
> Makes sense. I missed this and kept looking only at the implementations.
> I will at least fix the dpaa code.
 
On second thought: there might be another issue.
An application calls rte_eth_xstats_get_names and finds that 'N' xstats exist.
Thereafter, in a call to rte_eth_xstats_get with xstats==NULL but n=N, the API would return:

if (n < count + xcount || xstats == NULL)                               
        return count + xcount;

'count' is the size of the generic stats. If the driver's xstats_get were to return xcount='N', the application would think that it has got a positive response.
See the doxygen page [3] - it states:

--
Returns:
    * A positive value lower or equal to size: success.
      The return value is the number of entries filled in the
      stats table
--

There might be a case where the generic stats count is exactly equal to the xstats size, and the application would attempt to access the array.

I am not even sure why xstats_get is returning (count + xcount) when the API definition doesn't say that generic+xstats is returned.
Am I missing something?

[3] http://dpdk.org/doc/api/rte__ethdev_8h.html#adad5c65f659487db1fefba7d7d902973

> 
> >
> > So it looks like the expectation from eth_xstats_get_t for that case is to
> > return the xstats size, but this is not clear and not documented in the
> API comment.
> >
> > >
> > > (I can add this info to the API document that you created - but only
> > > once we know if others will agree to change)
> 
> Probably this info should be in the Doxygen API documentation [2].
> 
> [2]
> http://dpdk.org/doc/api/rte__ethdev_8h.html#adad5c65f659487db1fefba7d7d902973
> 


^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 41/41] net/dpaa: support for extended statistics
  2017-09-28  2:52                   ` Shreyansh Jain
@ 2017-09-28  9:26                     ` Ferruh Yigit
  0 siblings, 0 replies; 367+ messages in thread
From: Ferruh Yigit @ 2017-09-28  9:26 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev, Hemant Agrawal

On 9/28/2017 3:52 AM, Shreyansh Jain wrote:
>> -----Original Message-----
>> From: Shreyansh Jain
>> Sent: Thursday, September 28, 2017 7:59 AM
>> To: 'Ferruh Yigit' <ferruh.yigit@intel.com>
>> Cc: dev@dpdk.org; Hemant Agrawal <hemant.agrawal@nxp.com>
>> Subject: RE: [PATCH v4 41/41] net/dpaa: support for extended statistics
>>
>> Hi Ferruh,
>>
>>> -----Original Message-----
>>> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
>>> Sent: Thursday, September 28, 2017 5:07 AM
>>> To: Shreyansh Jain <shreyansh.jain@nxp.com>
>>> Cc: dev@dpdk.org; Hemant Agrawal <hemant.agrawal@nxp.com>
>>> Subject: Re: [PATCH v4 41/41] net/dpaa: support for extended statistics
>>>
>>> On 9/27/2017 9:26 AM, Shreyansh Jain wrote:
>>>> On Thursday 21 September 2017 06:56 PM, Shreyansh Jain wrote:
>>>>> On Monday 18 September 2017 08:27 PM, Ferruh Yigit wrote:
>>>>>> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>>>>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>>>
>>>>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>>>>
>>>>>> <...>
>>>>>>
>>>>>>> +static int
>>>>>>> +dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat
>>>>>>> *xstats,
>>>>>>> +            unsigned int n)
>>>>>>> +{
>>>>>>> +    struct dpaa_if *dpaa_intf = dev->data->dev_private;
>>>>>>> +    unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
>>>>>>> +    uint64_t values[sizeof(struct dpaa_if_stats) / 8];
>>>>>>> +
>>>>>>> +    if (xstats == NULL)
>>>>>>> +        return 0;
>>>>>>
>>>>>> This is a little unclear from the API definition, but I guess when xstats
>>>>>> is NULL, it should return the number of available stats, "num" for this
>>>>>> case. I guess there are PMDs implementing both; can you please double check?
>>>>>
>>>>> Ok. I will check again.
>>>>
>>>> I checked a number of other ethdev implementations. Some, like i40e/e1000,
>>>> also return 0 when xstats is NULL. Others, like bnx2x and qede, don't
>>>> handle this situation.
>>>> All return "num" when the passed argument is larger than the number of
>>>> elements in the table.
>>>>
>>>> Though, the logic that get_xstats should return its size (num) when
>>>> passed NULL looks good to me.
>>>> How does one standardize such semantics for existing APIs?
>>>
>>> Thanks for checking, I guess first we should clarify the API and the
>>> expected behavior [1] and later update required PMDs.
>>>
>>> So for now I think PMD is OK as it is.
>>>
>>>
>>> [1]
>>> I double checked the rte_eth_xstats_get(). It does:
>>>
>>> If xstats == NULL
>>> 	xcount = dev_ops->xstats_get(dev, NULL, 0);
>>> 	return count + xcount;
>>>
>>> The intention looks to be returning the number of available stats;
>>> otherwise, returning "count + 0" would be useless.
>>
>> Makes sense. I missed this and kept looking for implementations.
>> I will at least fix dpaa code.
>  
> On second thought: there might be another issue.
> The application calls rte_eth_xstats_get_names and finds that 'N' xstats exist.
> Thereafter, in a call to rte_eth_xstats_get with xstats==NULL but n=N, the API would return:
> 
> if (n < count + xcount || xstats == NULL)                               
>         return count + xcount;
> 
> 'count' is the size of the generic stats. If dev_ops->xstats_get were to return xcount='N', the application would think it has received a positive response.
> See the doxygen page [3] - it states:
> 
> --
> Returns:
>     * A positive value lower or equal to size: success.
>       The return value is the number of entries filled in the
>       stats table
> --
> 
> There might be a case where count + xcount is exactly equal to n; the return value would then indicate success, and the application would attempt to access the (unfilled) array.
> 
> I am not even sure why rte_eth_xstats_get is returning (count + xcount) when the API definition doesn't say that generic + xstats are returned.
> Am I missing something?

Even for rte_eth_xstats_get_names(), the returned N is generic + xstats.

dev_ops->xstats_get() manages xstats only; rte_eth_xstats_xxx() on top of
it manages generic + xstats. This seems to be how it is designed.

For rte_eth_xstats_get(), I guess there is an assumption that when the app
provides xstats == NULL, n should also be 0. Perhaps this check should be
implemented in the API.
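
A simplified model of that layering (illustrative only, not the actual
ethdev code; filling of the entries and error handling are elided):

#include <rte_ethdev.h>

/* Sketch of how rte_eth_xstats_get() composes its return value:
 * generic stats first, then the PMD xstats appended after them.
 */
static int
xstats_get_model(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
                 unsigned int n, unsigned int count /* nb generic stats */)
{
        unsigned int xcount = 0;

        if (dev->dev_ops->xstats_get != NULL)
                xcount = (*dev->dev_ops->xstats_get)(dev, NULL, 0);

        /* Size probe, or table too small: report the total required. */
        if (n < count + xcount || xstats == NULL)
                return count + xcount;

        /* ... fill [0, count) with generic stats here, then let the
         * PMD fill [count, count + xcount) ...
         */
        return count + xcount;
}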

> 
> [3] http://dpdk.org/doc/api/rte__ethdev_8h.html#adad5c65f659487db1fefba7d7d902973
> 
>>
>>>
>>> So it looks like the expectation from eth_xstats_get_t for that case is
>>> to return the xstats size, but this is not clear and not documented in
>>> the API comment.
>>>
>>>>
>>>> (I can add this info to the API document that you created - but only
>>>> once we know if others will agree to change)
>>
>> Probably this info should be in the Doxygen API documentation [2].
>>
>> [2]
>> http://dpdk.org/doc/api/rte__ethdev_8h.html#adad5c65f659487db1fefba7d7d902973
>>
> 


* [PATCH v5 00/40] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                         ` (42 preceding siblings ...)
  2017-09-21 22:10       ` Thomas Monjalon
@ 2017-09-28 11:33       ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
                           ` (40 more replies)
  43 siblings, 41 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Change Log:
============

v5:
 - rebased over net-next/master (9d660ac)
 - restructured debugging macros: removed a few and combined
   others; DPAA now uses dynamic logging with segregated
   DP logging
 - updated documentation for missing configuration option
 - fixed map file; shared build was broken earlier
 - other minor fixes from review comments

v4:
 - Some checkpatch fixes which were reported by checkpatch@dpdk
 - adding extra stats feature patch (patch 41)

v3:
 - Rebasing over 17.11-rc0 (85238f50)
 - Checkpatch fixes
   (There are still 2 errors which I think are false positives)
 - Implement rte_bus.find_device() interface
 - Various other minor updates/cleanups

v2:
 - Addressed various comments from Ferruh; broadly:
  -) Logging has been changed to use rte_log_register
  -) Logs across Bus, Mempool and PMD updated
  -) fixed incorrect feature claimed in dpaa.ini
 - Removed 24/40/48 bit swapping macro from EAL.
   These are defined in dpaa/bus now (compat.h)
 - Added missing memory cleanup operation
 - Updated documentation with some missing information

Introduction
============

RFC was posted here -> [R3]
V4 was posted here  -> [R7]

This patch series adds NXP's QorIQ-Layerscape DPAA Architecture based
bus driver, mempool driver and PMD. This version of driver supports NXP
LS1043A/LS1023A, LS1046A/LS1026A family of network SoCs. [R1]

DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
components designed for high-speed network packet processing. This
architecture provides the infrastructure to support simplified sharing of
networking interfaces and accelerators by multiple CPU cores, and the
accelerators themselves.

This patchset introduces the following:
1. DPAA Bus (drivers/bus/dpaa)
 The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
 or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
 The patches introduce necessary layers to expose the DPAA hardware
 blocks for interfacing with RTE framework.

2. DPAA Mempool (drivers/mempool/dpaa)
 BMan, or Buffer Manager, block of DPAA features a hardware offloaded
 mempool. These patches add support for a driver to manage the BMan
 block. This driver allows for mempool creation, deletion, buffer
 acquire and release, as per the RTE APIs (see the usage sketch after
 this list).

3. DPAA PMD (drivers/net/dpaa)
 The Poll Mode Driver for DPAA NIC Interfaces.
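
As an aside, applications reach this mempool through the standard DPDK
APIs; a minimal sketch (generic DPDK calls only, assuming a DPAA platform
build where the DPAA mempool ops are the platform default):

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Create a packet mbuf pool; on such a build, the pool is backed by
 * hardware (BMan) buffer management.
 */
static struct rte_mempool *
create_pktmbuf_pool(void)
{
        return rte_pktmbuf_pool_create("dpaa_pktmbuf_pool",
                                       2048, /* number of mbufs */
                                       256,  /* per-lcore cache size */
                                       0,    /* private data size */
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
}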

Patch Layout
============

01: Add DPAA SoC build configuration
02~16: Add DPAA Bus support and features, incrementally
17: Add Documentation
18~21: Add DPAA Mempool support
22~40: Add PMD and its various features, incrementally

References
==========

[R1] http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-layerscape-arm-processors:QORIQ-ARM
[R2] http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf
[R3] RFC: http://dpdk.org/ml/archives/dev/2017-May/066675.html
[R4] v1: http://dpdk.org/ml/archives/dev/2017-June/068020.html
[R5] v2: http://dpdk.org/ml/archives/dev/2017-July/070113.html
[R6] v3: http://dpdk.org/ml/archives/dev/2017-August/073269.html
[R7] v4: http://dpdk.org/ml/archives/dev/2017-September/074936.html

Hemant Agrawal (3):
  bus/dpaa: add compatibility and helper macros
  net/dpaa: support firmware version get API
  net/dpaa: support extended statistics

Shreyansh Jain (37):
  config: add NXP DPAA SoC build configuration
  bus/dpaa: introduce NXP DPAA Bus driver skeleton
  bus/dpaa: add OF parser for device scanning
  bus/dpaa: introducing FMan configurations
  bus/dpaa: add FMan hardware operations
  bus/dpaa: enable DPAA IOCTL portal driver
  bus/dpaa: add layer for interrupt emulation using pthread
  bus/dpaa: add routines for managing a RB tree
  bus/dpaa: add QMAN interface driver
  bus/dpaa: add QMan driver core routines
  bus/dpaa: add BMAN driver core
  bus/dpaa: support FMAN frame queue lookup
  bus/dpaa: add BMan hardware interfaces
  bus/dpaa: add fman flow control threshold setting
  bus/dpaa: integrate DPAA Bus with hardware blocks
  doc: add NXP DPAA PMD documentation
  bus/dpaa: add DPAA mempool logging macros
  mempool/dpaa: support NXP DPAA Mempool
  config: enable compilation of DPAA Mempool driver
  bus/dpaa: add DPAA PMD logging macros
  net/dpaa: add NXP DPAA PMD driver skeleton
  config: enable NXP DPAA PMD compilation
  net/dpaa: support Tx and Rx queue setup
  net/dpaa: support MTU update
  net/dpaa: support jumbo frames
  net/dpaa: support link status update
  net/dpaa: support device info and speed capability
  net/dpaa: support promiscuous toggle
  net/dpaa: support multicast toggle
  net/dpaa: support MAC address update
  net/dpaa: support basic stats
  net/dpaa: support flow control
  net/dpaa: support hashed RSS
  net/dpaa: support packet type parsing
  net/dpaa: support checksum offload
  net/dpaa: support Scattered Rx
  net/dpaa: add packet dump for debugging

 MAINTAINERS                                       |    9 +
 config/common_base                                |    5 +
 config/defconfig_arm64-dpaa-linuxapp-gcc          |   59 +
 doc/guides/nics/dpaa.rst                          |  377 ++++
 doc/guides/nics/features/dpaa.ini                 |   24 +
 doc/guides/nics/index.rst                         |    1 +
 drivers/bus/Makefile                              |    3 +
 drivers/bus/dpaa/Makefile                         |   76 +
 drivers/bus/dpaa/base/fman/fman.c                 |  611 +++++
 drivers/bus/dpaa/base/fman/fman_hw.c              |  590 +++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c         |  214 ++
 drivers/bus/dpaa/base/fman/of.c                   |  576 +++++
 drivers/bus/dpaa/base/qbman/bman.c                |  394 ++++
 drivers/bus/dpaa/base/qbman/bman.h                |  550 +++++
 drivers/bus/dpaa/base/qbman/bman_driver.c         |  323 +++
 drivers/bus/dpaa/base/qbman/bman_priv.h           |  125 ++
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c          |  104 +
 drivers/bus/dpaa/base/qbman/dpaa_sys.c            |  136 ++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h            |   61 +
 drivers/bus/dpaa/base/qbman/process.c             |  331 +++
 drivers/bus/dpaa/base/qbman/qman.c                | 2497 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h                |  888 ++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c         |  288 +++
 drivers/bus/dpaa/base/qbman/qman_priv.h           |  310 +++
 drivers/bus/dpaa/dpaa_bus.c                       |  465 ++++
 drivers/bus/dpaa/include/compat.h                 |  385 ++++
 drivers/bus/dpaa/include/dpaa_bits.h              |   65 +
 drivers/bus/dpaa/include/dpaa_list.h              |  101 +
 drivers/bus/dpaa/include/dpaa_rbtree.h            |  143 ++
 drivers/bus/dpaa/include/fman.h                   |  458 ++++
 drivers/bus/dpaa/include/fsl_bman.h               |  375 ++++
 drivers/bus/dpaa/include/fsl_fman.h               |  181 ++
 drivers/bus/dpaa/include/fsl_fman_crc64.h         |  263 +++
 drivers/bus/dpaa/include/fsl_qman.h               | 2021 +++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h                |  107 +
 drivers/bus/dpaa/include/netcfg.h                 |   96 +
 drivers/bus/dpaa/include/of.h                     |  190 ++
 drivers/bus/dpaa/include/process.h                |  107 +
 drivers/bus/dpaa/rte_bus_dpaa_version.map         |   57 +
 drivers/bus/dpaa/rte_dpaa_bus.h                   |  173 ++
 drivers/bus/dpaa/rte_dpaa_logs.h                  |  107 +
 drivers/mempool/Makefile                          |    2 +
 drivers/mempool/dpaa/Makefile                     |   58 +
 drivers/mempool/dpaa/dpaa_mempool.c               |  286 +++
 drivers/mempool/dpaa/dpaa_mempool.h               |   77 +
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |    8 +
 drivers/net/Makefile                              |    2 +
 drivers/net/dpaa/Makefile                         |   61 +
 drivers/net/dpaa/dpaa_ethdev.c                    | 1112 +++++++++
 drivers/net/dpaa/dpaa_ethdev.h                    |  182 ++
 drivers/net/dpaa/dpaa_rxtx.c                      |  760 +++++++
 drivers/net/dpaa/dpaa_rxtx.h                      |  297 +++
 drivers/net/dpaa/rte_pmd_dpaa_version.map         |    4 +
 mk/machine/dpaa/rte.vars.mk                       |   61 +
 mk/rte.app.mk                                     |    6 +
 55 files changed, 16762 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h
 create mode 100644 drivers/bus/dpaa/include/of.h
 create mode 100644 drivers/bus/dpaa/include/process.h
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map
 create mode 100644 mk/machine/dpaa/rte.vars.mk

-- 
2.9.3


* [PATCH v5 01/40] config: add NXP DPAA SoC build configuration
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
                           ` (39 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch adds the skeleton build configuration for the DPAA platform.

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc | 47 ++++++++++++++++++++++++
 mk/machine/dpaa/rte.vars.mk              | 61 ++++++++++++++++++++++++++++++++
 2 files changed, 108 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 mk/machine/dpaa/rte.vars.mk

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
new file mode 100644
index 0000000..5bca012
--- /dev/null
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -0,0 +1,47 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 Freescale Semiconductor, Inc.
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+#include "defconfig_arm64-armv8a-linuxapp-gcc"
+
+# NXP (Freescale) - SoC Architecture with FMAN, QMAN & BMAN support
+CONFIG_RTE_MACHINE="dpaa"
+CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
+CONFIG_RTE_LIBRTE_VHOST_NUMA=n
+CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
+
+#
+# Compile Environment Abstraction Layer
+#
+CONFIG_RTE_MAX_LCORE=4
+CONFIG_RTE_MAX_NUMA_NODES=1
+CONFIG_RTE_CACHE_LINE_SIZE=64
+CONFIG_RTE_PKTMBUF_HEADROOM=128
diff --git a/mk/machine/dpaa/rte.vars.mk b/mk/machine/dpaa/rte.vars.mk
new file mode 100644
index 0000000..356a6af
--- /dev/null
+++ b/mk/machine/dpaa/rte.vars.mk
@@ -0,0 +1,61 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+#   - can define ARCH variable (overridden by cmdline value)
+#   - can define CROSS variable (overridden by cmdline value)
+#   - define MACHINE_CFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+#   - can define CPU_CFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+MACHINE_CFLAGS += -march=armv8-a+crc
+
+ifdef CONFIG_RTE_ARCH_ARM_TUNE
+MACHINE_CFLAGS += -mtune=$(CONFIG_RTE_ARCH_ARM_TUNE:"%"=%)
+endif
-- 
2.9.3


* [PATCH v5 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
                           ` (38 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
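
For context, a driver registers itself with this bus roughly as follows
(an illustrative sketch with a hypothetical PMD name; the real
registrations come in later patches of this series):

#include <rte_common.h>
#include <rte_dpaa_bus.h>

static int
dummy_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
                 struct rte_dpaa_device *dpaa_dev __rte_unused)
{
        /* A real probe would allocate and initialize an ethdev (or
         * cryptodev) for dpaa_dev here.
         */
        return 0;
}

static struct rte_dpaa_driver rte_dummy_dpaa_pmd = {
        .drv_type = FSL_DPAA_ETH,
        .probe = dummy_dpaa_probe,
};

RTE_PMD_REGISTER_DPAA(net_dummy_dpaa, rte_dummy_dpaa_pmd);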
---
 MAINTAINERS                               |   5 +
 config/common_base                        |   3 +
 config/defconfig_arm64-dpaa-linuxapp-gcc  |   4 +
 drivers/bus/Makefile                      |   3 +
 drivers/bus/dpaa/Makefile                 |  58 +++++++++
 drivers/bus/dpaa/dpaa_bus.c               | 207 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |   8 ++
 drivers/bus/dpaa/rte_dpaa_bus.h           | 148 +++++++++++++++++++++
 drivers/bus/dpaa/rte_dpaa_logs.h          |  65 ++++++++++
 9 files changed, 501 insertions(+)
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 8df2a7f..c566962 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -408,6 +408,11 @@ F: drivers/net/nfp/
 F: doc/guides/nics/nfp.rst
 F: doc/guides/nics/features/nfp.ini
 
+NXP dpaa
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+M: Shreyansh Jain <shreyansh.jain@nxp.com>
+F: drivers/bus/dpaa/
+
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/config/common_base b/config/common_base
index 439f3cc..fc1cdca 100644
--- a/config/common_base
+++ b/config/common_base
@@ -301,6 +301,9 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_MBOX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=n
+
 #
 # Compile NXP DPAA2 FSL-MC Bus
 #
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 5bca012..8316fc9 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -45,3 +45,7 @@ CONFIG_RTE_MAX_LCORE=4
 CONFIG_RTE_MAX_NUMA_NODES=1
 CONFIG_RTE_CACHE_LINE_SIZE=64
 CONFIG_RTE_PKTMBUF_HEADROOM=128
+
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=y
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 0224214..6cb6466 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -32,6 +32,9 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mbuf librte_mempool librte_ring librte_ether
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += dpaa
+DEPDIRS-dpaa = $(core-libs)
+
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 DEPDIRS-fslmc = $(core-libs)
 
diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
new file mode 100644
index 0000000..28694c0
--- /dev/null
+++ b/drivers/bus/dpaa/Makefile
@@ -0,0 +1,58 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_BUS_DPAA=$(RTE_SDK)/drivers/bus/dpaa
+
+#
+# library name
+#
+LIB = librte_bus_dpaa.a
+
+CFLAGS := -I$(SRCDIR) $(CFLAGS)
+CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+
+# versioning export map
+EXPORT_MAP := rte_bus_dpaa_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	dpaa_bus.c
+
+# Link Pthread
+LDLIBS += -lpthread
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
new file mode 100644
index 0000000..cc343b3
--- /dev/null
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -0,0 +1,207 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_bus.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+int dpaa_logtype_bus;
+
+struct rte_dpaa_bus rte_dpaa_bus;
+
+static inline void
+dpaa_add_to_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static inline void
+dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static int
+rte_dpaa_bus_scan(void)
+{
+	BUS_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/* register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
+{
+	RTE_VERIFY(driver);
+
+	BUS_INIT_FUNC_TRACE();
+
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = &rte_dpaa_bus;
+}
+
+/* un-register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
+{
+	struct rte_dpaa_bus *dpaa_bus;
+
+	BUS_INIT_FUNC_TRACE();
+
+	dpaa_bus = driver->dpaa_bus;
+
+	TAILQ_REMOVE(&dpaa_bus->driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = NULL;
+}
+
+static int
+rte_dpaa_device_match(struct rte_dpaa_driver *drv,
+		      struct rte_dpaa_device *dev)
+{
+	int ret = -1;
+
+	BUS_INIT_FUNC_TRACE();
+
+	if (!drv || !dev) {
+		DPAA_BUS_DEBUG("Invalid drv or dev received.");
+		return ret;
+	}
+
+	if (drv->drv_type == dev->device_type) {
+		DPAA_BUS_INFO("Device: %s matches for driver: %s",
+			      dev->name, drv->driver.name);
+		ret = 0; /* Found a match */
+	}
+
+	return ret;
+}
+
+static int
+rte_dpaa_bus_probe(void)
+{
+	int ret = -1;
+	struct rte_dpaa_device *dev;
+	struct rte_dpaa_driver *drv;
+
+	BUS_INIT_FUNC_TRACE();
+
+	/* For each registered driver, and device, call the driver->probe */
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		TAILQ_FOREACH(drv, &rte_dpaa_bus.driver_list, next) {
+			ret = rte_dpaa_device_match(drv, dev);
+			if (ret)
+				continue;
+
+			if (!drv->probe)
+				continue;
+
+			ret = drv->probe(drv, dev);
+			if (ret)
+				DPAA_BUS_ERR("Unable to probe device");
+			break;
+		}
+	}
+	return 0;
+}
+
+static struct rte_device *
+rte_dpaa_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
+		     const void *data)
+{
+	struct rte_dpaa_device *dev;
+
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		if (start && &dev->device == start) {
+			start = NULL;  /* starting point found */
+			continue;
+		}
+
+		if (cmp(&dev->device, data) == 0)
+			return &dev->device;
+	}
+
+	return NULL;
+}
+
+struct rte_dpaa_bus rte_dpaa_bus = {
+	.bus = {
+		.scan = rte_dpaa_bus_scan,
+		.probe = rte_dpaa_bus_probe,
+		.find_device = rte_dpaa_find_device,
+	},
+	.device_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.device_list),
+	.driver_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.driver_list),
+	.device_count = 0,
+};
+
+RTE_REGISTER_BUS(FSL_DPAA_BUS_NAME, rte_dpaa_bus.bus);
+
+RTE_INIT(dpaa_init_log);
+static void
+dpaa_init_log(void)
+{
+	dpaa_logtype_bus = rte_log_register("bus.dpaa");
+	if (dpaa_logtype_bus >= 0)
+		rte_log_set_level(dpaa_logtype_bus, RTE_LOG_NOTICE);
+}
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
new file mode 100644
index 0000000..9f41c77
--- /dev/null
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -0,0 +1,8 @@
+DPDK_17.11 {
+	global:
+
+	rte_dpaa_driver_register;
+	rte_dpaa_driver_unregister;
+
+	local: *;
+};
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
new file mode 100644
index 0000000..789882e
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -0,0 +1,148 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __RTE_DPAA_BUS_H__
+#define __RTE_DPAA_BUS_H__
+
+#include <rte_bus.h>
+#include <rte_mempool.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
+
+#define DEV_TO_DPAA_DEVICE(ptr)	\
+		container_of(ptr, struct rte_dpaa_device, device)
+
+struct rte_dpaa_device;
+struct rte_dpaa_driver;
+
+/* DPAA Device and Driver lists for DPAA bus */
+TAILQ_HEAD(rte_dpaa_device_list, rte_dpaa_device);
+TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
+
+enum rte_dpaa_type {
+	FSL_DPAA_ETH = 1,
+	FSL_DPAA_CRYPTO,
+};
+
+struct rte_dpaa_bus {
+	struct rte_bus bus;
+	struct rte_dpaa_device_list device_list;
+	struct rte_dpaa_driver_list driver_list;
+	int device_count;
+};
+
+struct dpaa_device_id {
+	uint8_t fman_id; /**< Fman interface ID, for ETH type device */
+	uint8_t mac_id; /**< Fman MAC interface ID, for ETH type device */
+	uint16_t dev_id; /**< Device Identifier from DPDK */
+};
+
+struct rte_dpaa_device {
+	TAILQ_ENTRY(rte_dpaa_device) next;
+	struct rte_device device;
+	union {
+		struct rte_eth_dev *eth_dev;
+		struct rte_cryptodev *crypto_dev;
+	};
+	struct rte_dpaa_driver *driver;
+	struct dpaa_device_id id;
+	enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
+	char name[RTE_ETH_NAME_MAX_LEN];
+};
+
+typedef int (*rte_dpaa_probe_t)(struct rte_dpaa_driver *dpaa_drv,
+				struct rte_dpaa_device *dpaa_dev);
+typedef int (*rte_dpaa_remove_t)(struct rte_dpaa_device *dpaa_dev);
+
+struct rte_dpaa_driver {
+	TAILQ_ENTRY(rte_dpaa_driver) next;
+	struct rte_driver driver;
+	struct rte_dpaa_bus *dpaa_bus;
+	enum rte_dpaa_type drv_type;
+	rte_dpaa_probe_t probe;
+	rte_dpaa_remove_t remove;
+};
+
+struct dpaa_portal {
+	uint32_t bman_idx; /**< BMAN Portal ID*/
+	uint32_t qman_idx; /**< QMAN Portal ID*/
+	uint64_t tid;/**< Parent Thread id for this portal */
+};
+
+/* TODO - this is costly, need to write a fast conversion routine */
+static inline void *rte_dpaa_mem_ptov(phys_addr_t paddr)
+{
+	const struct rte_memseg *memseg = rte_eal_get_physmem_layout();
+	int i;
+
+	for (i = 0; i < RTE_MAX_MEMSEG && memseg[i].addr != NULL; i++) {
+		if (paddr >= memseg[i].phys_addr && paddr <
+			memseg[i].phys_addr + memseg[i].len)
+			return (uint8_t *)(memseg[i].addr) +
+			       (paddr - memseg[i].phys_addr);
+	}
+
+	return NULL;
+}
+
+/**
+ * Register a DPAA driver.
+ *
+ * @param driver
+ *   A pointer to a rte_dpaa_driver structure describing the driver
+ *   to be registered.
+ */
+void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
+
+/**
+ * Unregister a DPAA driver.
+ *
+ * @param driver
+ *	A pointer to a rte_dpaa_driver structure describing the driver
+ *	to be unregistered.
+ */
+void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
+
+/** Helper for DPAA device registration from driver (eth, crypto) instance */
+#define RTE_PMD_REGISTER_DPAA(nm, dpaa_drv) \
+RTE_INIT(dpaainitfn_ ##nm); \
+static void dpaainitfn_ ##nm(void) \
+{\
+	(dpaa_drv).driver.name = RTE_STR(nm);\
+	rte_dpaa_driver_register(&dpaa_drv); \
+} \
+RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_DPAA_BUS_H__ */
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
new file mode 100644
index 0000000..cc10937
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -0,0 +1,65 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA_LOGS_H_
+#define _DPAA_LOGS_H_
+
+#include <rte_log.h>
+
+extern int dpaa_logtype_bus;
+
+#define DPAA_BUS_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define BUS_INIT_FUNC_TRACE() DPAA_BUS_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_BUS
+#define DPAA_BUS_HWWARN(cond, fmt, args...) \
+	do {\
+		if (cond) \
+			DPAA_BUS_LOG(DEBUG, "WARN: " fmt, ##args); \
+	} while (0)
+#else
+#define DPAA_BUS_HWWARN(cond, fmt, args...) do { } while (0)
+#endif
+
+#define DPAA_BUS_DEBUG(fmt, args...) \
+	DPAA_BUS_LOG(DEBUG, fmt, ## args)
+#define DPAA_BUS_INFO(fmt, args...) \
+	DPAA_BUS_LOG(INFO, fmt, ## args)
+#define DPAA_BUS_ERR(fmt, args...) \
+	DPAA_BUS_LOG(ERR, fmt, ## args)
+#define DPAA_BUS_WARN(fmt, args...) \
+	DPAA_BUS_LOG(WARNING, fmt, ## args)
+
+#endif /* _DPAA_LOGS_H_ */
-- 
2.9.3


* [PATCH v5 03/40] bus/dpaa: add compatibility and helper macros
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
                           ` (37 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Linked list, bit operations and compatibility macros.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
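
For context, these helpers mirror the kernel-style APIs they replace; the
list macros, for instance, are used along these lines (an illustrative
sketch, assuming the bus-internal compat.h is on the include path):

#include <compat.h>

struct my_obj {
        int id;
        struct list_head node;
};

static COMPAT_LIST_HEAD(my_list);

static void
add_and_dump(struct my_obj *obj)
{
        struct my_obj *i;

        /* Append at the tail, then walk every entry on the list. */
        list_add_tail(&obj->node, &my_list);
        list_for_each_entry(i, &my_list, node)
                pr_info("obj %d\n", i->id);
}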
---
 drivers/bus/dpaa/include/compat.h    | 385 +++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/dpaa_bits.h |  65 ++++++
 drivers/bus/dpaa/include/dpaa_list.h | 101 +++++++++
 3 files changed, 551 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h

diff --git a/drivers/bus/dpaa/include/compat.h b/drivers/bus/dpaa/include/compat.h
new file mode 100644
index 0000000..42733ae
--- /dev/null
+++ b/drivers/bus/dpaa/include/compat.h
@@ -0,0 +1,385 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __COMPAT_H
+#define __COMPAT_H
+
+#include <sched.h>
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+#include <stdint.h>
+#include <stdlib.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <pthread.h>
+#include <linux/types.h>
+#include <stdbool.h>
+#include <ctype.h>
+#include <malloc.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <limits.h>
+#include <assert.h>
+#include <dirent.h>
+#include <inttypes.h>
+#include <error.h>
+#include <rte_byteorder.h>
+#include <rte_atomic.h>
+#include <rte_spinlock.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+
+/* The following definitions are primarily to allow the single-source driver
+ * interfaces to be included by arbitrary program code, i.e. for interfaces that
+ * are also available in kernel-space, these definitions provide compatibility
+ * with certain attributes and types used in those interfaces.
+ */
+
+/* Required compiler attributes */
+#define __maybe_unused	__rte_unused
+#define __always_unused	__rte_unused
+#define __packed	__rte_packed
+#define noinline	__attribute__((noinline))
+
+#define L1_CACHE_BYTES 64
+#define ____cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
+#define __stringify_1(x) #x
+#define __stringify(x)	__stringify_1(x)
+
+#ifdef ARRAY_SIZE
+#undef ARRAY_SIZE
+#endif
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
+
+/* Debugging */
+#define prflush(fmt, args...) \
+	do { \
+		printf(fmt, ##args); \
+		fflush(stdout); \
+	} while (0)
+
+#define pr_crit(fmt, args...)	 prflush("CRIT:" fmt, ##args)
+#define pr_err(fmt, args...)	 prflush("ERR:" fmt, ##args)
+#define pr_warn(fmt, args...)	 prflush("WARN:" fmt, ##args)
+#define pr_info(fmt, args...)	 prflush(fmt, ##args)
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_BUS
+#ifdef pr_debug
+#undef pr_debug
+#endif
+#define pr_debug(fmt, args...)	printf(fmt, ##args)
+#else
+#define pr_debug(fmt, args...) {}
+#endif
+
+#define DPAA_BUG_ON(x) RTE_ASSERT(x)
+
+/* Required types */
+typedef uint8_t		u8;
+typedef uint16_t	u16;
+typedef uint32_t	u32;
+typedef uint64_t	u64;
+typedef uint64_t	dma_addr_t;
+typedef cpu_set_t	cpumask_t;
+typedef uint32_t	phandle;
+typedef uint32_t	gfp_t;
+typedef uint32_t	irqreturn_t;
+
+#define IRQ_HANDLED	0
+#define request_irq	qbman_request_irq
+#define free_irq	qbman_free_irq
+
+#define __iomem
+#define GFP_KERNEL	0
+#define __raw_readb(p)	(*(const volatile unsigned char *)(p))
+#define __raw_readl(p)	(*(const volatile unsigned int *)(p))
+#define __raw_writel(v, p) {*(volatile unsigned int *)(p) = (v); }
+
+/* to be used as an upper-limit only */
+#define NR_CPUS			64
+
+/* Waitqueue stuff */
+typedef struct { }		wait_queue_head_t;
+#define DECLARE_WAIT_QUEUE_HEAD(x) int dummy_##x __always_unused
+#define wake_up(x)		do { } while (0)
+
+/* I/O operations */
+static inline u32 in_be32(volatile void *__p)
+{
+	volatile u32 *p = __p;
+	return rte_be_to_cpu_32(*p);
+}
+
+static inline void out_be32(volatile void *__p, u32 val)
+{
+	volatile u32 *p = __p;
+	*p = rte_cpu_to_be_32(val);
+}
+
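+/* Prefetch (dcbt_*) and ARMv8 cache maintenance (dcbz/dcbf/dccivac) helpers */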
+#define dcbt_ro(p) __builtin_prefetch(p, 0)
+#define dcbt_rw(p) __builtin_prefetch(p, 1)
+
+#define dcbz(p) { asm volatile("dc zva, %0" : : "r" (p) : "memory"); }
+#define dcbz_64(p) dcbz(p)
+#define hwsync() rte_rmb()
+#define lwsync() rte_wmb()
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+#define dccivac(p) { asm volatile("dc civac, %0" : : "r"(p) : "memory"); }
+
+#define dcbit_ro(p) \
+	do { \
+		dccivac(p);						\
+		asm volatile("prfm pldl1keep, [%0, #64]" : : "r" (p));	\
+	} while (0)
+
+#define barrier() { asm volatile ("" : : : "memory"); }
+#define cpu_relax barrier
+
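+/* Read the ARMv8 virtual counter (CNTVCT_EL0), re-reading until two
+ * consecutive reads agree; the result is scaled by 64.
+ */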
+static inline uint64_t mfatb(void)
+{
+	uint64_t ret, ret_new, timeout = 200;
+
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret));
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	while (ret != ret_new && timeout--) {
+		ret = ret_new;
+		asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	}
+	DPAA_BUG_ON(!timeout && (ret != ret_new));
+	return ret * 64;
+}
+
+/* Spin for a few cycles without bothering the bus */
+static inline void cpu_spin(int cycles)
+{
+	uint64_t now = mfatb();
+
+	while (mfatb() < (now + cycles))
+		;
+}
+
+/* Qman/Bman API inlines and macros; */
+#ifdef lower_32_bits
+#undef lower_32_bits
+#endif
+#define lower_32_bits(x) ((u32)(x))
+
+#ifdef upper_32_bits
+#undef upper_32_bits
+#endif
+#define upper_32_bits(x) ((u32)(((x) >> 16) >> 16))
+
+/*
+ * Swap bytes of a 48-bit value.
+ */
+static inline uint64_t
+__bswap_48(uint64_t x)
+{
+	return  ((x & 0x0000000000ffULL) << 40) |
+		((x & 0x00000000ff00ULL) << 24) |
+		((x & 0x000000ff0000ULL) <<  8) |
+		((x & 0x0000ff000000ULL) >>  8) |
+		((x & 0x00ff00000000ULL) >> 24) |
+		((x & 0xff0000000000ULL) >> 40);
+}
+
+/*
+ * Swap bytes of a 40-bit value.
+ */
+static inline uint64_t
+__bswap_40(uint64_t x)
+{
+	return  ((x & 0x00000000ffULL) << 32) |
+		((x & 0x000000ff00ULL) << 16) |
+		((x & 0x0000ff0000ULL)) |
+		((x & 0x00ff000000ULL) >> 16) |
+		((x & 0xff00000000ULL) >> 32);
+}
+
+/*
+ * Swap bytes of a 24-bit value.
+ */
+static inline uint32_t
+__bswap_24(uint32_t x)
+{
+	return  ((x & 0x0000ffULL) << 16) |
+		((x & 0x00ff00ULL)) |
+		((x & 0xff0000ULL) >> 16);
+}
+
+#define be64_to_cpu(x) rte_be_to_cpu_64(x)
+#define be32_to_cpu(x) rte_be_to_cpu_32(x)
+#define be16_to_cpu(x) rte_be_to_cpu_16(x)
+
+#define cpu_to_be64(x) rte_cpu_to_be_64(x)
+#define cpu_to_be32(x) rte_cpu_to_be_32(x)
+#define cpu_to_be16(x) rte_cpu_to_be_16(x)
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+
+#define cpu_to_be48(x) __bswap_48(x)
+#define be48_to_cpu(x) __bswap_48(x)
+
+#define cpu_to_be40(x) __bswap_40(x)
+#define be40_to_cpu(x) __bswap_40(x)
+
+#define cpu_to_be24(x) __bswap_24(x)
+#define be24_to_cpu(x) __bswap_24(x)
+
+#else /* RTE_BIG_ENDIAN */
+
+#define cpu_to_be48(x) (x)
+#define be48_to_cpu(x) (x)
+
+#define cpu_to_be40(x) (x)
+#define be40_to_cpu(x) (x)
+
+#define cpu_to_be24(x) (x)
+#define be24_to_cpu(x) (x)
+
+#endif /* RTE_BIG_ENDIAN */
+
+/* When copying aligned words or shorts, try to avoid memcpy() */
+/* memcpy() stuff - when you know alignments in advance */
+#define CONFIG_TRY_BETTER_MEMCPY
+
+#ifdef CONFIG_TRY_BETTER_MEMCPY
+static inline void copy_words(void *dest, const void *src, size_t sz)
+{
+	u32 *__dest = dest;
+	const u32 *__src = src;
+	size_t __sz = sz >> 2;
+
+	DPAA_BUG_ON((unsigned long)dest & 0x3);
+	DPAA_BUG_ON((unsigned long)src & 0x3);
+	DPAA_BUG_ON(sz & 0x3);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_shorts(void *dest, const void *src, size_t sz)
+{
+	u16 *__dest = dest;
+	const u16 *__src = src;
+	size_t __sz = sz >> 1;
+
+	DPAA_BUG_ON((unsigned long)dest & 0x1);
+	DPAA_BUG_ON((unsigned long)src & 0x1);
+	DPAA_BUG_ON(sz & 0x1);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_bytes(void *dest, const void *src, size_t sz)
+{
+	u8 *__dest = dest;
+	const u8 *__src = src;
+
+	while (sz--)
+		*(__dest++) = *(__src++);
+}
+#else
+#define copy_words memcpy
+#define copy_shorts memcpy
+#define copy_bytes memcpy
+#endif
+
+/* Allocator stuff */
+#define kmalloc(sz, t)	malloc(sz)
+#define vmalloc(sz)	malloc(sz)
+#define kfree(p)	{ if (p) free(p); }
+static inline void *kzalloc(size_t sz, gfp_t __foo __rte_unused)
+{
+	void *ptr = malloc(sz);
+
+	if (ptr)
+		memset(ptr, 0, sz);
+	return ptr;
+}
+
+static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
+{
+	void *p;
+
+	if (posix_memalign(&p, 4096, 4096))
+		return 0;
+	memset(p, 0, 4096);
+	return (unsigned long)p;
+}
+
+/* Spinlock stuff */
+#define spinlock_t		rte_spinlock_t
+#define __SPIN_LOCK_UNLOCKED(x)	RTE_SPINLOCK_INITIALIZER
+#define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+#define spin_lock_init(x)	rte_spinlock_init(x)
+#define spin_lock_destroy(x)
+#define spin_lock(x)		rte_spinlock_lock(x)
+#define spin_unlock(x)		rte_spinlock_unlock(x)
+#define spin_lock_irq(x)	spin_lock(x)
+#define spin_unlock_irq(x)	spin_unlock(x)
+#define spin_lock_irqsave(x, f) spin_lock_irq(x)
+#define spin_unlock_irqrestore(x, f) spin_unlock_irq(x)
+
+#define atomic_t                rte_atomic32_t
+#define atomic_read(v)          rte_atomic32_read(v)
+#define atomic_set(v, i)        rte_atomic32_set(v, i)
+
+#define atomic_inc(v)           rte_atomic32_add(v, 1)
+#define atomic_dec(v)           rte_atomic32_sub(v, 1)
+
+#define atomic_inc_and_test(v)  rte_atomic32_inc_and_test(v)
+#define atomic_dec_and_test(v)  rte_atomic32_dec_and_test(v)
+
+#define atomic_inc_return(v)    rte_atomic32_add_return(v, 1)
+#define atomic_dec_return(v)    rte_atomic32_sub_return(v, 1)
+#define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
+
+#include <dpaa_list.h>
+#include <dpaa_bits.h>
+
+#endif /* __COMPAT_H */
diff --git a/drivers/bus/dpaa/include/dpaa_bits.h b/drivers/bus/dpaa/include/dpaa_bits.h
new file mode 100644
index 0000000..71f2d80
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_bits.h
@@ -0,0 +1,65 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_BITS_H
+#define __DPAA_BITS_H
+
+/* Bitfield stuff. */
+#define BITS_PER_ULONG	(sizeof(unsigned long) << 3)
+#define SHIFT_PER_ULONG	(((1 << 5) == BITS_PER_ULONG) ? 5 : 6)
+#define BITS_MASK(idx)	(1UL << ((idx) & (BITS_PER_ULONG - 1)))
+#define BITS_IDX(idx)	((idx) >> SHIFT_PER_ULONG)
+
+static inline void dpaa_set_bits(unsigned long mask,
+				 volatile unsigned long *p)
+{
+	*p |= mask;
+}
+
+static inline void dpaa_set_bit(int idx, volatile unsigned long *bits)
+{
+	dpaa_set_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+static inline void dpaa_clear_bits(unsigned long mask,
+				   volatile unsigned long *p)
+{
+	*p &= ~mask;
+}
+
+static inline void dpaa_clear_bit(int idx,
+				  volatile unsigned long *bits)
+{
+	dpaa_clear_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+#endif /* __DPAA_BITS_H */
diff --git a/drivers/bus/dpaa/include/dpaa_list.h b/drivers/bus/dpaa/include/dpaa_list.h
new file mode 100644
index 0000000..871e612
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_list.h
@@ -0,0 +1,101 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_LIST_H
+#define __DPAA_LIST_H
+
+/****************/
+/* Linked-lists */
+/****************/
+
+struct list_head {
+	struct list_head *prev;
+	struct list_head *next;
+};
+
+#define COMPAT_LIST_HEAD(n) \
+struct list_head n = { \
+	.prev = &n, \
+	.next = &n \
+}
+
+#define INIT_LIST_HEAD(p) \
+do { \
+	struct list_head *__p298 = (p); \
+	__p298->next = __p298; \
+	__p298->prev = __p298->next; \
+} while (0)
+#define list_entry(node, type, member) \
+	(type *)((void *)node - offsetof(type, member))
+#define list_empty(p) \
+({ \
+	const struct list_head *__p298 = (p); \
+	((__p298->next == __p298) && (__p298->prev == __p298)); \
+})
+#define list_add(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->next = __l298->next; \
+	__p298->prev = __l298; \
+	__l298->next->prev = __p298; \
+	__l298->next = __p298; \
+} while (0)
+#define list_add_tail(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->prev = __l298->prev; \
+	__p298->next = __l298; \
+	__l298->prev->next = __p298; \
+	__l298->prev = __p298; \
+} while (0)
+#define list_for_each(i, l)				\
+	for (i = (l)->next; i != (l); i = i->next)
+#define list_for_each_safe(i, j, l)			\
+	for (i = (l)->next, j = i->next; i != (l);	\
+	     i = j, j = i->next)
+#define list_for_each_entry(i, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name); &i->name != (l); \
+		i = list_entry(i->name.next, typeof(*i), name))
+#define list_for_each_entry_safe(i, j, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name), \
+		j = list_entry(i->name.next, typeof(*j), name); \
+		&i->name != (l); \
+		i = j, j = list_entry(j->name.next, typeof(*j), name))
+#define list_del(i) \
+do { \
+	(i)->next->prev = (i)->prev; \
+	(i)->prev->next = (i)->next; \
+} while (0)
+
+#endif /* __DPAA_LIST_H */
-- 
2.9.3


* [PATCH v5 04/40] bus/dpaa: add OF parser for device scanning
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (2 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
                           ` (36 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This layer is used by the bus driver's scan function. Devices are parsed
using the OF parser and added to the DPAA device list.
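
A minimal sketch of how a scan path can consume this layer (a
hypothetical caller, not part of this patch; scan_dpaa_nodes() is an
illustrative name only), using of_init(), for_each_compatible_node()
and of_get_property() as introduced below:

    #include <stdio.h>
    #include <of.h>

    /* Parse the device tree, then visit every DPAA ethernet node */
    static int scan_dpaa_nodes(void)
    {
    	const struct device_node *dev_node;
    	int ret = of_init();	/* reads /proc/device-tree by default */

    	if (ret)
    		return ret;
    	for_each_compatible_node(dev_node, NULL, "fsl,dpa-ethernet-init") {
    		size_t len;

    		/* e.g. check that the node carries a 'fsl,fman-mac' phandle */
    		if (of_get_property(dev_node, "fsl,fman-mac", &len))
    			printf("found %s\n", dev_node->full_name);
    	}
    	of_finish();
    	return 0;
    }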

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile       |   7 +
 drivers/bus/dpaa/base/fman/of.c | 576 ++++++++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/of.h   | 190 +++++++++++++
 3 files changed, 773 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/include/of.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 28694c0..30a3a5d 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -38,7 +38,11 @@ LIB = librte_bus_dpaa.a
 
 CFLAGS := -I$(SRCDIR) $(CFLAGS)
 CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -Wno-pointer-arith
+CFLAGS += -Wno-cast-qual
+CFLAGS += -D _GNU_SOURCE
 CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_BUS_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -52,6 +56,9 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/of.c \
+
 # Link Pthread
 LDLIBS += -lpthread
 
diff --git a/drivers/bus/dpaa/base/fman/of.c b/drivers/bus/dpaa/base/fman/of.c
new file mode 100644
index 0000000..b2d7c02
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/of.c
@@ -0,0 +1,576 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <of.h>
+#include <rte_dpaa_logs.h>
+
+static int alive;
+static struct dt_dir root_dir;
+static const char *base_dir;
+static COMPAT_LIST_HEAD(linear);
+
+static int
+of_open_dir(const char *relative_path, struct dirent ***d)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = scandir(full_path, d, 0, versionsort);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open directory %s",
+			     full_path);
+	return ret;
+}
+
+static void
+of_close_dir(struct dirent **d, int num)
+{
+	while (num--)
+		free(d[num]);
+	free(d);
+}
+
+static int
+of_open_file(const char *relative_path)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = open(full_path, O_RDONLY);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open directory %s",
+			     full_path);
+	return ret;
+}
+
+static void
+process_file(struct dirent *dent, struct dt_dir *parent)
+{
+	int fd;
+	struct dt_file *f = malloc(sizeof(*f));
+
+	if (!f) {
+		DPAA_BUS_LOG(DEBUG, "Unable to allocate memory for file node");
+		return;
+	}
+	f->node.is_file = 1;
+	snprintf(f->node.node.name, NAME_MAX, "%s", dent->d_name);
+	snprintf(f->node.node.full_name, PATH_MAX, "%s/%s",
+		 parent->node.node.full_name, dent->d_name);
+	f->parent = parent;
+	fd = of_open_file(f->node.node.full_name);
+	if (fd < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to open file node");
+		free(f);
+		return;
+	}
+	f->len = read(fd, f->buf, OF_FILE_BUF_MAX);
+	close(fd);
+	if (f->len < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to read file node");
+		free(f);
+		return;
+	}
+	list_add_tail(&f->node.list, &parent->files);
+}
+
+static const struct dt_dir *
+node2dir(const struct device_node *n)
+{
+	struct dt_node *dn = container_of((struct device_node *)n,
+					  struct dt_node, node);
+	const struct dt_dir *d = container_of(dn, struct dt_dir, node);
+
+	assert(!dn->is_file);
+	return d;
+}
+
+/* process_dir() calls iterate_dir(), but the latter will also call the former
+ * when recursing into sub-directories, so a predeclaration is needed.
+ */
+static int process_dir(const char *relative_path, struct dt_dir *dt);
+
+static int
+iterate_dir(struct dirent **d, int num, struct dt_dir *dt)
+{
+	int loop;
+	/* Iterate the directory contents */
+	for (loop = 0; loop < num; loop++) {
+		struct dt_dir *subdir;
+		int ret;
+		/* Ignore dot files of all types (especially "..") */
+		if (d[loop]->d_name[0] == '.')
+			continue;
+		switch (d[loop]->d_type) {
+		case DT_REG:
+			process_file(d[loop], dt);
+			break;
+		case DT_DIR:
+			subdir = malloc(sizeof(*subdir));
+			if (!subdir) {
+				perror("malloc");
+				return -ENOMEM;
+			}
+			snprintf(subdir->node.node.name, NAME_MAX, "%s",
+				 d[loop]->d_name);
+			snprintf(subdir->node.node.full_name, PATH_MAX,
+				 "%s/%s", dt->node.node.full_name,
+				 d[loop]->d_name);
+			subdir->parent = dt;
+			ret = process_dir(subdir->node.node.full_name, subdir);
+			if (ret)
+				return ret;
+			list_add_tail(&subdir->node.list, &dt->subdirs);
+			break;
+		default:
+			DPAA_BUS_LOG(DEBUG, "Ignoring invalid dt entry %s/%s",
+				     dt->node.node.full_name, d[loop]->d_name);
+		}
+	}
+	return 0;
+}
+
+static int
+process_dir(const char *relative_path, struct dt_dir *dt)
+{
+	struct dirent **d;
+	int ret, num;
+
+	dt->node.is_file = 0;
+	INIT_LIST_HEAD(&dt->subdirs);
+	INIT_LIST_HEAD(&dt->files);
+	ret = of_open_dir(relative_path, &d);
+	if (ret < 0)
+		return ret;
+	num = ret;
+	ret = iterate_dir(d, num, dt);
+	of_close_dir(d, num);
+	return (ret < 0) ? ret : 0;
+}
+
+static void
+linear_dir(struct dt_dir *d)
+{
+	struct dt_file *f;
+	struct dt_dir *dd;
+
+	d->compatible = NULL;
+	d->status = NULL;
+	d->lphandle = NULL;
+	d->a_cells = NULL;
+	d->s_cells = NULL;
+	d->reg = NULL;
+	list_for_each_entry(f, &d->files, node.list) {
+		if (!strcmp(f->node.node.name, "compatible")) {
+			if (d->compatible)
+				DPAA_BUS_LOG(DEBUG, "Duplicate compatible in"
+					     " %s", d->node.node.full_name);
+			d->compatible = f;
+		} else if (!strcmp(f->node.node.name, "status")) {
+			if (d->status)
+				DPAA_BUS_LOG(DEBUG, "Duplicate status in %s",
+					     d->node.node.full_name);
+			d->status = f;
+		} else if (!strcmp(f->node.node.name, "linux,phandle")) {
+			if (d->lphandle)
+				DPAA_BUS_LOG(DEBUG, "Duplicate lphandle in %s",
+					     d->node.node.full_name);
+			d->lphandle = f;
+		} else if (!strcmp(f->node.node.name, "#address-cells")) {
+			if (d->a_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate a_cells in %s",
+					     d->node.node.full_name);
+			d->a_cells = f;
+		} else if (!strcmp(f->node.node.name, "#size-cells")) {
+			if (d->s_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate s_cells in %s",
+					     d->node.node.full_name);
+			d->s_cells = f;
+		} else if (!strcmp(f->node.node.name, "reg")) {
+			if (d->reg)
+				DPAA_BUS_LOG(DEBUG, "Duplicate reg in %s",
+					     d->node.node.full_name);
+			d->reg = f;
+		}
+	}
+
+	list_for_each_entry(dd, &d->subdirs, node.list) {
+		list_add_tail(&dd->linear, &linear);
+		linear_dir(dd);
+	}
+}
+
+int
+of_init_path(const char *dt_path)
+{
+	int ret;
+
+	base_dir = dt_path;
+
+	/* This needs to be singleton initialization */
+	DPAA_BUS_HWWARN(alive, "Double-init of device-tree driver!");
+
+	/* Prepare root node (the remaining fields are set in process_dir()) */
+	root_dir.node.node.name[0] = '\0';
+	root_dir.node.node.full_name[0] = '\0';
+	INIT_LIST_HEAD(&root_dir.node.list);
+	root_dir.parent = NULL;
+
+	/* Kick things off... */
+	ret = process_dir("", &root_dir);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to parse device tree");
+		return ret;
+	}
+
+	/* Now make a flat, linear list of directories */
+	linear_dir(&root_dir);
+	alive = 1;
+	return 0;
+}
+
+static void
+destroy_dir(struct dt_dir *d)
+{
+	struct dt_file *f, *tmpf;
+	struct dt_dir *dd, *tmpd;
+
+	list_for_each_entry_safe(f, tmpf, &d->files, node.list) {
+		list_del(&f->node.list);
+		free(f);
+	}
+	list_for_each_entry_safe(dd, tmpd, &d->subdirs, node.list) {
+		destroy_dir(dd);
+		list_del(&dd->node.list);
+		free(dd);
+	}
+}
+
+void
+of_finish(void)
+{
+	DPAA_BUS_HWWARN(!alive, "Double-finish of device-tree driver!");
+
+	destroy_dir(&root_dir);
+	INIT_LIST_HEAD(&linear);
+	alive = 0;
+}
+
+static const struct dt_dir *
+next_linear(const struct dt_dir *f)
+{
+	if (f->linear.next == &linear)
+		return NULL;
+	return list_entry(f->linear.next, struct dt_dir, linear);
+}
+
+static int
+check_compatible(const struct dt_file *f, const char *compatible)
+{
+	const char *c = (char *)f->buf;
+	unsigned int len, remains = f->len;
+
+	while (remains) {
+		len = strlen(c);
+		if (!strcmp(c, compatible))
+			return 1;
+
+		if (remains < len + 1)
+			break;
+
+		c += (len + 1);
+		remains -= (len + 1);
+	}
+	return 0;
+}
+
+const struct device_node *
+of_find_compatible_node(const struct device_node *from,
+			const char *type __always_unused,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (list_empty(&linear))
+		return NULL;
+	if (!from)
+		d = list_entry(linear.next, struct dt_dir, linear);
+	else
+		d = node2dir(from);
+	for (d = next_linear(d); d && (!d->compatible ||
+				       !check_compatible(d->compatible,
+				       compatible));
+			d = next_linear(d))
+		;
+	if (d)
+		return &d->node.node;
+	return NULL;
+}
+
+const void *
+of_get_property(const struct device_node *from, const char *name,
+		size_t *lenp)
+{
+	const struct dt_dir *d;
+	const struct dt_file *f;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	d = node2dir(from);
+	list_for_each_entry(f, &d->files, node.list)
+		if (!strcmp(f->node.node.name, name)) {
+			if (lenp)
+				*lenp = f->len;
+			return f->buf;
+		}
+	return NULL;
+}
+
+bool
+of_device_is_available(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	d = node2dir(dev_node);
+	if (!d->status)
+		return true;
+	if (!strcmp((char *)d->status->buf, "okay"))
+		return true;
+	if (!strcmp((char *)d->status->buf, "ok"))
+		return true;
+	return false;
+}
+
+const struct device_node *
+of_find_node_by_phandle(phandle ph)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	list_for_each_entry(d, &linear, linear)
+		if (d->lphandle && (d->lphandle->len == 4) &&
+		    !memcmp(d->lphandle->buf, &ph, 4))
+			return &d->node.node;
+	return NULL;
+}
+
+const struct device_node *
+of_get_parent(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	d = node2dir(dev_node);
+	if (!d->parent)
+		return NULL;
+	return &d->parent->node.node;
+}
+
+const struct device_node *
+of_get_next_child(const struct device_node *dev_node,
+		  const struct device_node *prev)
+{
+	const struct dt_dir *p, *c;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	p = node2dir(dev_node);
+	if (prev) {
+		c = node2dir(prev);
+		DPAA_BUS_HWWARN((c->parent != p), "Parent/child mismatch");
+		if (c->parent != p)
+			return NULL;
+		if (c->node.list.next == &p->subdirs)
+			/* prev was the last child */
+			return NULL;
+		c = list_entry(c->node.list.next, struct dt_dir, node.list);
+		return &c->node.node;
+	}
+	/* Return first child */
+	if (list_empty(&p->subdirs))
+		return NULL;
+	c = list_entry(p->subdirs.next, struct dt_dir, node.list);
+	return &c->node.node;
+}
+
+uint32_t
+of_n_addr_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised");
+	if (!dev_node)
+		return OF_DEFAULT_NA;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->a_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->a_cells->buf[0];
+			assert(d->a_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NA;
+}
+
+uint32_t
+of_n_size_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		return OF_DEFAULT_NA;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->s_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->s_cells->buf[0];
+			assert(d->s_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NS;
+}
+
+const uint32_t *
+of_get_address(const struct device_node *dev_node, size_t idx,
+	       uint64_t *size, uint32_t *flags __rte_unused)
+{
+	const struct dt_dir *d;
+	const unsigned char *buf;
+	uint32_t na = of_n_addr_cells(dev_node);
+	uint32_t ns = of_n_size_cells(dev_node);
+
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (!d->reg)
+		return NULL;
+	assert(d->reg->len % ((na + ns) * 4) == 0);
+	assert(d->reg->len / ((na + ns) * 4) > (unsigned int) idx);
+	buf = (const unsigned char *)&d->reg->buf[0];
+	buf += (na + ns) * idx * 4;
+	if (size)
+		for (*size = 0; ns > 0; ns--, na++)
+			*size = (*size << 32) +
+				(((uint32_t)buf[4 * na] << 24) |
+				((uint32_t)buf[4 * na + 1] << 16) |
+				((uint32_t)buf[4 * na + 2] << 8) |
+				(uint32_t)buf[4 * na + 3]);
+	return (const uint32_t *)buf;
+}
+
+uint64_t
+of_translate_address(const struct device_node *dev_node,
+		     const uint32_t *addr)
+{
+	uint64_t phys_addr, tmp_addr;
+	const struct device_node *parent;
+	const uint32_t *ranges;
+	size_t rlen;
+	uint32_t na, pna;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	assert(dev_node != NULL);
+
+	na = of_n_addr_cells(dev_node);
+	phys_addr = of_read_number(addr, na);
+
+	dev_node = of_get_parent(dev_node);
+	if (!dev_node)
+		return 0;
+	else if (node2dir(dev_node) == &root_dir)
+		return phys_addr;
+
+	do {
+		pna = of_n_addr_cells(dev_node);
+		parent = of_get_parent(dev_node);
+		if (!parent)
+			return 0;
+
+		ranges = of_get_property(dev_node, "ranges", &rlen);
+		/* "ranges" property is missing. Translation breaks */
+		if (!ranges)
+			return 0;
+		/* "ranges" property is empty. Do 1:1 translation */
+		else if (rlen == 0)
+			continue;
+		else
+			tmp_addr = of_read_number(ranges + na, pna);
+
+		na = pna;
+		dev_node = parent;
+		phys_addr += tmp_addr;
+	} while (node2dir(parent) != &root_dir);
+
+	return phys_addr;
+}
+
+bool
+of_device_is_compatible(const struct device_node *dev_node,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (d->compatible && check_compatible(d->compatible, compatible))
+		return true;
+	return false;
+}
diff --git a/drivers/bus/dpaa/include/of.h b/drivers/bus/dpaa/include/of.h
new file mode 100644
index 0000000..2984b1e
--- /dev/null
+++ b/drivers/bus/dpaa/include/of.h
@@ -0,0 +1,190 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor, Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __OF_H
+#define	__OF_H
+
+#include <compat.h>
+
+#ifndef OF_INIT_DEFAULT_PATH
+#define OF_INIT_DEFAULT_PATH "/proc/device-tree"
+#endif
+
+#define OF_DEFAULT_NA 1
+#define OF_DEFAULT_NS 1
+
+#define OF_FILE_BUF_MAX 256
+
+/**
+ * Layout of Device Tree:
+ * dt_dir
+ *  |- dt_dir
+ *  |   |- dt_dir
+ *  |   |  |- dt_dir
+ *  |   |  |  |- dt_file
+ *  |   |  |  ``- dt_file
+ *  |   |  ``- dt_file
+ *  |   ``- dt_file
+ *  ``- dt_file
+ *
+ *  +------------------+
+ *  |dt_dir            |
+ *  |+----------------+|
+ *  ||dt_node         ||
+ *  ||+--------------+||
+ *  |||device_node   |||
+ *  ||+--------------+||
+ *  || list_dt_nodes  ||
+ *  |+----------------+|
+ *  | list of subdir   |
+ *  | list of files    |
+ *  +------------------+
+ */
+
+/**
+ * Device description of a device node in the device tree.
+ */
+struct device_node {
+	char name[NAME_MAX];
+	char full_name[PATH_MAX];
+};
+
+/**
+ * List of device nodes available in a device tree layout
+ */
+struct dt_node {
+	struct device_node node; /**< Property of node */
+	int is_file; /**< FALSE==dir, TRUE==file */
+	struct list_head list; /**< Nodes within a parent subdir */
+};
+
+/**
+ * Types we use to represent directories and files
+ */
+struct dt_file;
+struct dt_dir {
+	struct dt_node node;
+	struct list_head subdirs;
+	struct list_head files;
+	struct list_head linear;
+	struct dt_dir *parent;
+	struct dt_file *compatible;
+	struct dt_file *status;
+	struct dt_file *lphandle;
+	struct dt_file *a_cells;
+	struct dt_file *s_cells;
+	struct dt_file *reg;
+};
+
+struct dt_file {
+	struct dt_node node;
+	struct dt_dir *parent;
+	ssize_t len;
+	uint64_t buf[OF_FILE_BUF_MAX >> 3];
+};
+
+const struct device_node *of_find_compatible_node(
+					const struct device_node *from,
+					const char *type __always_unused,
+					const char *compatible)
+	__attribute__((nonnull(3)));
+
+#define for_each_compatible_node(dev_node, type, compatible) \
+	for (dev_node = of_find_compatible_node(NULL, type, compatible); \
+		dev_node != NULL; \
+		dev_node = of_find_compatible_node(dev_node, type, compatible))
+
+const void *of_get_property(const struct device_node *from, const char *name,
+			    size_t *lenp) __attribute__((nonnull(2)));
+bool of_device_is_available(const struct device_node *dev_node);
+
+const struct device_node *of_find_node_by_phandle(phandle ph);
+
+const struct device_node *of_get_parent(const struct device_node *dev_node);
+
+const struct device_node *of_get_next_child(const struct device_node *dev_node,
+					    const struct device_node *prev);
+
+#define for_each_child_node(parent, child) \
+	for (child = of_get_next_child(parent, NULL); child != NULL; \
+			child = of_get_next_child(parent, child))
+
+uint32_t of_n_addr_cells(const struct device_node *dev_node);
+uint32_t of_n_size_cells(const struct device_node *dev_node);
+
+const uint32_t *of_get_address(const struct device_node *dev_node, size_t idx,
+			       uint64_t *size, uint32_t *flags);
+
+uint64_t of_translate_address(const struct device_node *dev_node,
+			      const u32 *addr) __attribute__((nonnull));
+
+bool of_device_is_compatible(const struct device_node *dev_node,
+			     const char *compatible);
+
+/* of_init() must be called prior to initialisation or use of any driver
+ * subsystem that is device-tree-dependent. Eg. Qman/Bman, config layers, etc.
+ * The path should usually be "/proc/device-tree".
+ */
+int of_init_path(const char *dt_path);
+
+/* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
+ * full reload is desired without a process exit.
+ */
+void of_finish(void);
+
+/* Use of this wrapper is recommended. */
+static inline int of_init(void)
+{
+	return of_init_path(OF_INIT_DEFAULT_PATH);
+}
+
+/* Read a numeric property according to its size and return it as a 64-bit
+ * value.
+ */
+static inline uint64_t of_read_number(const __be32 *cell, int size)
+{
+	uint64_t r = 0;
+
+	while (size--)
+		r = (r << 32) | be32toh(*(cell++));
+	return r;
+}
+
+#endif	/*  __OF_H */
-- 
2.9.3


* [PATCH v5 05/40] bus/dpaa: introducing FMan configurations
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (3 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
                           ` (35 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

FMan, or Frame Manager, inspects traffic and splits it into queues on
ingress. It is also responsible for directing traffic onto queues on
egress.

This patch introduces the FMan configuration interfaces. This layer is
used by the bus driver for configuring the hardware block.
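
As a rough usage sketch (a hypothetical caller, not part of this patch;
show_fman_ports() is an illustrative name only), assuming the
netcfg_acquire()/netcfg_release() pair added below:

    #include <stdio.h>
    #include <netcfg.h>

    /* List every FMan port discovered by the netcfg layer */
    static void show_fman_ports(void)
    {
    	struct netcfg_info *cfg = netcfg_acquire();	/* runs fman_init() */
    	int i;

    	if (!cfg)
    		return;
    	for (i = 0; i < cfg->num_ethports; i++) {
    		struct fman_if *fif = cfg->port_cfg[i].fman_if;

    		printf("FMan %d, MAC %d, default Rx FQID 0x%x\n",
    		       fif->fman_idx, fif->mac_idx,
    		       cfg->port_cfg[i].rx_def);
    	}
    	netcfg_release(cfg);
    }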

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   2 +
 drivers/bus/dpaa/base/fman/fman.c         | 611 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c | 214 +++++++++++
 drivers/bus/dpaa/include/fman.h           | 458 ++++++++++++++++++++++
 drivers/bus/dpaa/include/netcfg.h         |  96 +++++
 5 files changed, 1381 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 30a3a5d..f6e504d 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -57,7 +57,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/fman.c \
 	base/fman/of.c \
+	base/fman/netcfg_layer.c
 
 # Link Pthread
 LDLIBS += -lpthread
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
new file mode 100644
index 0000000..2c6029e
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -0,0 +1,611 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+
+#include <rte_malloc.h>
+
+/* This header declares the driver interface we implement */
+#include <fman.h>
+#include <of.h>
+#include <rte_dpaa_logs.h>
+
+#define QMI_PORT_REGS_OFFSET		0x400
+
+/* CCSR map address to access ccsr based register */
+void *fman_ccsr_map;
+/* fman version info */
+u16 fman_ip_rev;
+static int get_once;
+u32 fman_dealloc_bufs_mask_hi;
+u32 fman_dealloc_bufs_mask_lo;
+
+int fman_ccsr_map_fd = -1;
+static COMPAT_LIST_HEAD(__ifs);
+
+/* This is the (const) global variable that callers have read-only access to.
+ * Internally, we have read-write access directly to __ifs.
+ */
+const struct list_head *fman_if_list = &__ifs;
+
+static void
+if_destructor(struct __fman_if *__if)
+{
+	struct fman_if_bpool *bp, *tmpbp;
+
+	if (__if->__if.mac_type == fman_offline)
+		goto cleanup;
+
+	list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+cleanup:
+	rte_free(__if);
+}
+
+static int
+fman_get_ip_rev(const struct device_node *fman_node)
+{
+	const uint32_t *fman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	uint32_t ip_rev_1;
+	int _errno;
+
+	fman_addr = of_get_address(fman_node, 0, &regs_size, NULL);
+	if (!fman_addr) {
+		pr_err("of_get_address cannot return fman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(fman_node, fman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+	fman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, fman_ccsr_map_fd, phys_addr);
+	if (fman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map FMan ccsr base");
+		return -EINVAL;
+	}
+
+	ip_rev_1 = in_be32(fman_ccsr_map + FMAN_IP_REV_1);
+	fman_ip_rev = (ip_rev_1 & FMAN_IP_REV_1_MAJOR_MASK) >>
+			FMAN_IP_REV_1_MAJOR_SHIFT;
+
+	_errno = munmap(fman_ccsr_map, regs_size);
+	if (_errno)
+		pr_err("munmap() of FMan ccsr failed");
+
+	return 0;
+}
+
+static int
+fman_get_mac_index(uint64_t regs_addr_host, uint8_t *mac_idx)
+{
+	int ret = 0;
+
+	/*
+	 * MAC1 : E_0000h
+	 * MAC2 : E_2000h
+	 * MAC3 : E_4000h
+	 * MAC4 : E_6000h
+	 * MAC5 : E_8000h
+	 * MAC6 : E_A000h
+	 * MAC7 : E_C000h
+	 * MAC8 : E_E000h
+	 * MAC9 : F_0000h
+	 * MAC10: F_2000h
+	 */
+	switch (regs_addr_host) {
+	case 0xE0000:
+		*mac_idx = 1;
+		break;
+	case 0xE2000:
+		*mac_idx = 2;
+		break;
+	case 0xE4000:
+		*mac_idx = 3;
+		break;
+	case 0xE6000:
+		*mac_idx = 4;
+		break;
+	case 0xE8000:
+		*mac_idx = 5;
+		break;
+	case 0xEA000:
+		*mac_idx = 6;
+		break;
+	case 0xEC000:
+		*mac_idx = 7;
+		break;
+	case 0xEE000:
+		*mac_idx = 8;
+		break;
+	case 0xF0000:
+		*mac_idx = 9;
+		break;
+	case 0xF2000:
+		*mac_idx = 10;
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+fman_if_init(const struct device_node *dpa_node)
+{
+	const char *rprop, *mprop;
+	uint64_t phys_addr;
+	struct __fman_if *__if;
+	struct fman_if_bpool *bpool;
+
+	const phandle *mac_phandle, *ports_phandle, *pools_phandle;
+	const phandle *tx_channel_id = NULL, *mac_addr, *cell_idx;
+	const phandle *rx_phandle, *tx_phandle;
+	uint64_t tx_phandle_host[4] = {0};
+	uint64_t rx_phandle_host[4] = {0};
+	uint64_t regs_addr_host = 0;
+	uint64_t cell_idx_host = 0;
+
+	const struct device_node *mac_node = NULL, *tx_node;
+	const struct device_node *pool_node, *fman_node, *rx_node;
+	const uint32_t *regs_addr = NULL;
+	const char *mname, *fname;
+	const char *dname = dpa_node->full_name;
+	size_t lenp;
+	int _errno;
+	const char *char_prop;
+	uint32_t na;
+
+	if (of_device_is_available(dpa_node) == false)
+		return 0;
+
+	rprop = "fsl,qman-frame-queues-rx";
+	mprop = "fsl,fman-mac";
+
+	/* Allocate an object for this network interface */
+	__if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE);
+	if (!__if) {
+		FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*__if));
+		goto err;
+	}
+	memset(__if, 0, sizeof(*__if));
+	INIT_LIST_HEAD(&__if->__if.bpool_list);
+	strncpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
+	__if->node_path[PATH_MAX - 1] = '\0';
+
+	/* Obtain the MAC node used by this interface except macless */
+	mac_phandle = of_get_property(dpa_node, mprop, &lenp);
+	if (!mac_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no %s\n", dname, mprop);
+		goto err;
+	}
+	assert(lenp == sizeof(phandle));
+	mac_node = of_find_node_by_phandle(*mac_phandle);
+	if (!mac_node) {
+		FMAN_ERR(-ENXIO, "%s: bad 'fsl,fman-mac\n", dname);
+		goto err;
+	}
+	mname = mac_node->full_name;
+
+	/* Map the CCSR regs for the MAC node */
+	regs_addr = of_get_address(mac_node, 0, &__if->regs_size, NULL);
+	if (!regs_addr) {
+		FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+		goto err;
+	}
+	phys_addr = of_translate_address(mac_node, regs_addr);
+	if (!phys_addr) {
+		FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+			 mname, regs_addr);
+		goto err;
+	}
+	__if->ccsr_map = mmap(NULL, __if->regs_size,
+			      PROT_READ | PROT_WRITE, MAP_SHARED,
+			      fman_ccsr_map_fd, phys_addr);
+	if (__if->ccsr_map == MAP_FAILED) {
+		FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+		goto err;
+	}
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	regs_addr_host = of_read_number(regs_addr, na);
+
+
+	/* Get the index of the Fman this i/f belongs to */
+	fman_node = of_get_parent(mac_node);
+	na = of_n_addr_cells(mac_node);
+	if (!fman_node) {
+		FMAN_ERR(-ENXIO, "of_get_parent(%s)\n", mname);
+		goto err;
+	}
+	fname = fman_node->full_name;
+	cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+	if (!cell_idx) {
+		FMAN_ERR(-ENXIO, "%s: no cell-index)\n", fname);
+		goto err;
+	}
+	assert(lenp == sizeof(*cell_idx));
+	cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+	__if->__if.fman_idx = cell_idx_host;
+	if (!get_once) {
+		_errno = fman_get_ip_rev(fman_node);
+		if (_errno) {
+			FMAN_ERR(-ENXIO, "%s: ip_rev is not available\n",
+				 fname);
+			goto err;
+		}
+	}
+
+	if (fman_ip_rev >= FMAN_V3) {
+		/*
+		 * Set A2V, OVOM, EBD bits in contextA to allow external
+		 * buffer deallocation by fman.
+		 */
+		fman_dealloc_bufs_mask_hi = FMAN_V3_CONTEXTA_EN_A2V |
+						FMAN_V3_CONTEXTA_EN_OVOM;
+		fman_dealloc_bufs_mask_lo = FMAN_V3_CONTEXTA_EN_EBD;
+	} else {
+		fman_dealloc_bufs_mask_hi = 0;
+		fman_dealloc_bufs_mask_lo = 0;
+	}
+	/* Is the MAC node 1G, 10G? */
+	__if->__if.is_memac = 0;
+
+	if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
+		__if->__if.mac_type = fman_mac_1g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
+		__if->__if.mac_type = fman_mac_10g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-memac")) {
+		__if->__if.is_memac = 1;
+		char_prop = of_get_property(mac_node, "phy-connection-type",
+					    NULL);
+		if (!char_prop) {
+			printf("memac: unknown MII type assuming 1G\n");
+			/* Right now forcing memac to 1g in case of error*/
+			__if->__if.mac_type = fman_mac_1g;
+		} else {
+			if (strstr(char_prop, "sgmii"))
+				__if->__if.mac_type = fman_mac_1g;
+			else if (strstr(char_prop, "rgmii")) {
+				__if->__if.mac_type = fman_mac_1g;
+				__if->__if.is_rgmii = 1;
+			} else if (strstr(char_prop, "xgmii"))
+				__if->__if.mac_type = fman_mac_10g;
+		}
+	} else {
+		FMAN_ERR(-EINVAL, "%s: unknown MAC type\n", mname);
+		goto err;
+	}
+
+	/*
+	 * For MAC ports, we cannot rely on cell-index. In
+	 * T2080, two of the 10G ports on single FMAN have same
+	 * duplicate cell-indexes as the other two 10G ports on
+	 * same FMAN. Hence, we now rely upon addresses of the
+	 * ports from device tree to deduce the index.
+	 */
+
+	_errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
+	if (_errno) {
+		FMAN_ERR(-EINVAL, "Invalid register address: %lu",
+			 regs_addr_host);
+		goto err;
+	}
+
+	/* Extract the MAC address for private and shared interfaces */
+	mac_addr = of_get_property(mac_node, "local-mac-address",
+				   &lenp);
+	if (!mac_addr) {
+		FMAN_ERR(-EINVAL, "%s: no local-mac-address\n",
+			 mname);
+		goto err;
+	}
+	memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
+
+	/* Extract the Tx port (it's the second of the two port handles)
+	 * and get its channel ID
+	 */
+	ports_phandle = of_get_property(mac_node, "fsl,port-handles",
+					&lenp);
+	if (!ports_phandle)
+		ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+						&lenp);
+	if (!ports_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,port-handles\n",
+			 mname);
+		goto err;
+	}
+	assert(lenp == (2 * sizeof(phandle)));
+	tx_node = of_find_node_by_phandle(ports_phandle[1]);
+	if (!tx_node) {
+		FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]\n", mname);
+		goto err;
+	}
+	/* Extract the channel ID (from tx-port-handle) */
+	tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
+					&lenp);
+	if (!tx_channel_id) {
+		FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id\n",
+			 tx_node->full_name);
+		goto err;
+	}
+
+	rx_node = of_find_node_by_phandle(ports_phandle[0]);
+	if (!rx_node) {
+		FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]\n", mname);
+		goto err;
+	}
+	regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+	if (!regs_addr) {
+		FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+		goto err;
+	}
+	phys_addr = of_translate_address(rx_node, regs_addr);
+	if (!phys_addr) {
+		FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+			 mname, regs_addr);
+		goto err;
+	}
+	__if->bmi_map = mmap(NULL, __if->regs_size,
+				 PROT_READ | PROT_WRITE, MAP_SHARED,
+				 fman_ccsr_map_fd, phys_addr);
+	if (__if->bmi_map == MAP_FAILED) {
+		FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+		goto err;
+	}
+
+	/* No channel ID for MAC-less */
+	assert(lenp == sizeof(*tx_channel_id));
+	na = of_n_addr_cells(mac_node);
+	__if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+	/* Extract the Rx FQIDs. (Note, the device representation is silly,
+	 * there are "counts" that must always be 1.)
+	 */
+	rx_phandle = of_get_property(dpa_node, rprop, &lenp);
+	if (!rx_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-rx\n", dname);
+		goto err;
+	}
+
+	assert(lenp == (4 * sizeof(phandle)));
+
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
+	rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
+	rx_phandle_host[2] = of_read_number(&rx_phandle[2], na);
+	rx_phandle_host[3] = of_read_number(&rx_phandle[3], na);
+
+	assert((rx_phandle_host[1] == 1) && (rx_phandle_host[3] == 1));
+	__if->__if.fqid_rx_err = rx_phandle_host[0];
+	__if->__if.fqid_rx_def = rx_phandle_host[2];
+
+	/* Extract the Tx FQIDs */
+	tx_phandle = of_get_property(dpa_node,
+				     "fsl,qman-frame-queues-tx", &lenp);
+	if (!tx_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-tx\n", dname);
+		goto err;
+	}
+
+	assert(lenp == (4 * sizeof(phandle)));
+	/*TODO: Fix for other cases also */
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	tx_phandle_host[0] = of_read_number(&tx_phandle[0], na);
+	tx_phandle_host[1] = of_read_number(&tx_phandle[1], na);
+	tx_phandle_host[2] = of_read_number(&tx_phandle[2], na);
+	tx_phandle_host[3] = of_read_number(&tx_phandle[3], na);
+	assert((tx_phandle_host[1] == 1) && (tx_phandle_host[3] == 1));
+	__if->__if.fqid_tx_err = tx_phandle_host[0];
+	__if->__if.fqid_tx_confirm = tx_phandle_host[2];
+
+	/* Obtain the buffer pool nodes used by this interface */
+	pools_phandle = of_get_property(dpa_node, "fsl,bman-buffer-pools",
+					&lenp);
+	if (!pools_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,bman-buffer-pools\n", dname);
+		goto err;
+	}
+	/* For each pool, parse the corresponding node and add a pool object
+	 * to the interface's "bpool_list"
+	 */
+	assert(lenp && !(lenp % sizeof(phandle)));
+	while (lenp) {
+		size_t proplen;
+		const phandle *prop;
+		uint64_t bpid_host = 0;
+		uint64_t bpool_host[6] = {0};
+		const char *pname;
+		/* Allocate an object for the pool */
+		bpool = rte_malloc(NULL, sizeof(*bpool), RTE_CACHE_LINE_SIZE);
+		if (!bpool) {
+			FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*bpool));
+			goto err;
+		}
+		/* Find the pool node */
+		pool_node = of_find_node_by_phandle(*pools_phandle);
+		if (!pool_node) {
+			FMAN_ERR(-ENXIO, "%s: bad fsl,bman-buffer-pools\n",
+				 dname);
+			goto err;
+		}
+		pname = pool_node->full_name;
+		/* Extract the BPID property */
+		prop = of_get_property(pool_node, "fsl,bpid", &proplen);
+		if (!prop) {
+			FMAN_ERR(-EINVAL, "%s: no fsl,bpid\n", pname);
+			goto err;
+		}
+		assert(proplen == sizeof(*prop));
+		na = of_n_addr_cells(mac_node);
+		/* Get rid of endianness (issues).
+		 * Convert to host byte-order
+		 */
+		bpid_host = of_read_number(prop, na);
+		bpool->bpid = bpid_host;
+		/* Extract the cfg property (count/size/addr). "fsl,bpool-cfg"
+		 * indicates for the Bman driver to seed the pool.
+		 * "fsl,bpool-ethernet-cfg" is used by the network driver. The
+		 * two are mutually exclusive, so check for either of them.
+		 */
+		prop = of_get_property(pool_node, "fsl,bpool-cfg",
+				       &proplen);
+		if (!prop)
+			prop = of_get_property(pool_node,
+					       "fsl,bpool-ethernet-cfg",
+					       &proplen);
+		if (!prop) {
+			/* It's OK for there to be no bpool-cfg */
+			bpool->count = bpool->size = bpool->addr = 0;
+		} else {
+			assert(proplen == (6 * sizeof(*prop)));
+			na = of_n_addr_cells(mac_node);
+			/* Get rid of endianness (issues).
+			 * Convert to host byte order
+			 */
+			bpool_host[0] = of_read_number(&prop[0], na);
+			bpool_host[1] = of_read_number(&prop[1], na);
+			bpool_host[2] = of_read_number(&prop[2], na);
+			bpool_host[3] = of_read_number(&prop[3], na);
+			bpool_host[4] = of_read_number(&prop[4], na);
+			bpool_host[5] = of_read_number(&prop[5], na);
+
+			bpool->count = ((uint64_t)bpool_host[0] << 32) |
+					bpool_host[1];
+			bpool->size = ((uint64_t)bpool_host[2] << 32) |
+					bpool_host[3];
+			bpool->addr = ((uint64_t)bpool_host[4] << 32) |
+					bpool_host[5];
+		}
+		/* Parsing of the pool is complete, add it to the interface
+		 * list.
+		 */
+		list_add_tail(&bpool->node, &__if->__if.bpool_list);
+		lenp -= sizeof(phandle);
+		pools_phandle++;
+	}
+
+	/* Parsing of the network interface is complete, add it to the list */
+	DPAA_BUS_LOG(DEBUG, "Found %s, Tx Channel = %x, FMAN = %x,"
+		    "Port ID = %x\n",
+		    dname, __if->__if.tx_channel_id, __if->__if.fman_idx,
+		    __if->__if.mac_idx);
+
+	list_add_tail(&__if->__if.node, &__ifs);
+	return 0;
+err:
+	if_destructor(__if);
+	return _errno;
+}
+
+int
+fman_init(void)
+{
+	const struct device_node *dpa_node;
+	int _errno;
+
+	/* If multiple dependencies try to initialise the Fman driver, don't
+	 * panic.
+	 */
+	if (fman_ccsr_map_fd != -1)
+		return 0;
+
+	fman_ccsr_map_fd = open(FMAN_DEVICE_PATH, O_RDWR);
+	if (unlikely(fman_ccsr_map_fd < 0)) {
+		DPAA_BUS_LOG(ERR, "Unable to open (/dev/mem)");
+		return fman_ccsr_map_fd;
+	}
+
+	for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-init") {
+		_errno = fman_if_init(dpa_node);
+		if (_errno) {
+			FMAN_ERR(_errno, "if_init(%s)\n", dpa_node->full_name);
+			goto err;
+		}
+	}
+
+	return 0;
+err:
+	fman_finish();
+	return _errno;
+}
+
+void
+fman_finish(void)
+{
+	struct __fman_if *__if, *tmpif;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	list_for_each_entry_safe(__if, tmpif, &__ifs, __if.node) {
+		int _errno;
+
+		/* disable Rx and Tx */
+		if ((__if->__if.mac_type == fman_mac_1g) &&
+		    (!__if->__if.is_memac))
+			out_be32(__if->ccsr_map + 0x100,
+				 in_be32(__if->ccsr_map + 0x100) & ~(u32)0x5);
+		else
+			out_be32(__if->ccsr_map + 8,
+				 in_be32(__if->ccsr_map + 8) & ~(u32)3);
+		/* release the mapping */
+		_errno = munmap(__if->ccsr_map, __if->regs_size);
+		if (unlikely(_errno < 0))
+			fprintf(stderr, "%s:%hu:%s(): munmap() = %d (%s)\n",
+				__FILE__, __LINE__, __func__,
+				-errno, strerror(errno));
+		printf("Tearing down %s\n", __if->node_path);
+		list_del(&__if->__if.node);
+		rte_free(__if);
+	}
+
+	close(fman_ccsr_map_fd);
+	fman_ccsr_map_fd = -1;
+}
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
new file mode 100644
index 0000000..26cff84
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -0,0 +1,214 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <inttypes.h>
+#include <of.h>
+#include <net/if.h>
+#include <sys/ioctl.h>
+#include <error.h>
+#include <net/if_arp.h>
+#include <assert.h>
+#include <unistd.h>
+
+#include <rte_malloc.h>
+
+#include <rte_dpaa_logs.h>
+#include <netcfg.h>
+
+/* Structure contains information about all the interfaces given by the user
+ * on the command line.
+ */
+struct netcfg_interface *netcfg_interface;
+
+/* This data structure contains all configuration information
+ * related to usage of DPAA devices.
+ */
+struct netcfg_info *netcfg;
+/* fd of a socket used for making ioctl requests to enable/disable shared
+ * interfaces.
+ */
+static int skfd = -1;
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dump_netcfg(struct netcfg_info *cfg_ptr)
+{
+	int i;
+
+	printf("..........  DPAA Configuration  ..........\n\n");
+
+	/* Network interfaces */
+	printf("Network interfaces: %d\n", cfg_ptr->num_ethports);
+	for (i = 0; i < cfg_ptr->num_ethports; i++) {
+		struct fman_if_bpool *bpool;
+		struct fm_eth_port_cfg *p_cfg = &cfg_ptr->port_cfg[i];
+		struct fman_if *__if = p_cfg->fman_if;
+
+		printf("\n+ Fman %d, MAC %d (%s);\n",
+		       __if->fman_idx, __if->mac_idx,
+		       (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+
+		printf("\tmac_addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
+		       (&__if->mac_addr)->addr_bytes[0],
+		       (&__if->mac_addr)->addr_bytes[1],
+		       (&__if->mac_addr)->addr_bytes[2],
+		       (&__if->mac_addr)->addr_bytes[3],
+		       (&__if->mac_addr)->addr_bytes[4],
+		       (&__if->mac_addr)->addr_bytes[5]);
+
+		printf("\ttx_channel_id: 0x%02x\n",
+		       __if->tx_channel_id);
+
+		printf("\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
+		printf("\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
+
+		printf("\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
+		printf("\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
+		fman_if_for_each_bpool(bpool, __if)
+			printf("\tbuffer pool: (bpid=%d, count=%"PRId64
+			       " size=%"PRId64", addr=0x%"PRIx64")\n",
+			       bpool->bpid, bpool->count, bpool->size,
+			       bpool->addr);
+	}
+}
+#endif /* RTE_LIBRTE_DPAA_DEBUG_DRIVER */
+
+static inline int
+get_num_netcfg_interfaces(char *str)
+{
+	char *pch;
+	uint8_t count = 0;
+
+	if (str == NULL)
+		return -EINVAL;
+	pch = strtok(str, ",");
+	while (pch != NULL) {
+		count++;
+		pch = strtok(NULL, ",");
+	}
+	return count;
+}
+
+struct netcfg_info *
+netcfg_acquire(void)
+{
+	struct fman_if *__if;
+	int _errno, idx = 0;
+	uint8_t num_ports = 0;
+	uint8_t num_cfg_ports = 0;
+	size_t size;
+
+	/* Extract dpa configuration from fman driver and FMC configuration
+	 * for command-line interfaces.
+	 */
+
+	/* Open a basic socket to enable/disable shared
+	 * interfaces.
+	 */
+	skfd = socket(AF_PACKET, SOCK_RAW, 0);
+	if (unlikely(skfd < 0)) {
+		error(0, errno, "%s(): open(SOCK_RAW)", __func__);
+		return NULL;
+	}
+
+	/* Initialise the Fman driver */
+	_errno = fman_init();
+	if (_errno) {
+		DPAA_BUS_LOG(ERR, "FMAN driver init failed (%d)", errno);
+		close(skfd);
+		skfd = -1;
+		return NULL;
+	}
+
+	/* Number of MAC ports */
+	list_for_each_entry(__if, fman_if_list, node)
+		num_ports++;
+
+	if (!num_ports) {
+		DPAA_BUS_LOG(ERR, "FMAN ports not available");
+		return NULL;
+	}
+	/* Allocate space for all enabled mac ports */
+	size = sizeof(*netcfg) +
+		(num_ports * sizeof(struct fm_eth_port_cfg));
+
+	netcfg = calloc(1, size);
+	if (unlikely(netcfg == NULL)) {
+		DPAA_BUS_LOG(ERR, "Unable to allocat mem for netcfg");
+		goto error;
+	}
+
+	netcfg->num_ethports = num_ports;
+
+	list_for_each_entry(__if, fman_if_list, node) {
+		struct fm_eth_port_cfg *cfg = &netcfg->port_cfg[idx];
+		/* Hook in the fman driver interface */
+		cfg->fman_if = __if;
+		cfg->rx_def = __if->fqid_rx_def;
+		num_cfg_ports++;
+		idx++;
+	}
+
+	if (!num_cfg_ports) {
+		DPAA_BUS_LOG(ERR, "No FMAN ports found");
+		goto error;
+	} else if (num_ports != num_cfg_ports)
+		netcfg->num_ethports = num_cfg_ports;
+
+	return netcfg;
+
+error:
+	if (netcfg) {
+		free(netcfg);
+		netcfg = NULL;
+	}
+
+	return NULL;
+}
+
+void
+netcfg_release(struct netcfg_info *cfg_ptr)
+{
+	free(cfg_ptr);
+	/* Close socket for shared interfaces */
+	if (skfd >= 0) {
+		close(skfd);
+		skfd = -1;
+	}
+}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
new file mode 100644
index 0000000..9890e09
--- /dev/null
+++ b/drivers/bus/dpaa/include/fman.h
@@ -0,0 +1,458 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FMAN_H
+#define __FMAN_H
+
+#include <stdbool.h>
+#include <net/if.h>
+
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+
+#include <compat.h>
+
+#ifndef FMAN_DEVICE_PATH
+#define FMAN_DEVICE_PATH "/dev/mem"
+#endif
+
+#define MEMAC_NUM_OF_PADDRS 7 /* Num of additional exact match MAC addr regs */
+
+/* Control and Configuration Register (COMMAND_CONFIG) for MEMAC */
+#define CMD_CFG_LOOPBACK_EN	0x00000400
+/**< 21 XGMII/GMII loopback enable */
+#define CMD_CFG_PROMIS_EN	0x00000010
+/**< 27 Promiscuous operation enable */
+#define CMD_CFG_PAUSE_IGNORE	0x00000100
+/**< 23 Ignore Pause frame quanta */
+
+/* Statistics Configuration Register (STATN_CONFIG) */
+#define STATS_CFG_CLR           0x00000004
+/**< 29 Reset all counters */
+#define STATS_CFG_CLR_ON_RD     0x00000002
+/**< 30 Clear on read */
+#define STATS_CFG_SATURATE      0x00000001
+/**< 31 Saturate at the maximum val */
+
+/* Max receive frame length mask */
+#define MAXFRM_SIZE_MEMAC	0x00007fe0
+#define MAXFRM_RX_MASK		0x0000ffff
+
+/* Interface Mode Register for MEMAC */
+#define IF_MODE_RLP 0x00000820
+
+/* Pool Limits */
+#define FMAN_PORT_MAX_EXT_POOLS_NUM	8
+#define FMAN_PORT_OBS_EXT_POOLS_NUM	2
+
+#define FMAN_PORT_CG_MAP_NUM		8
+#define FMAN_PORT_PRS_RESULT_WORDS_NUM	8
+#define FMAN_PORT_BMI_FIFO_UNITS	0x100
+#define FMAN_PORT_IC_OFFSET_UNITS	0x10
+
+#define FMAN_ENABLE_BPOOL_DEPLETION	0xF00000F0
+
+#define HASH_CTRL_MCAST_EN	0x00000100
+#define GROUP_ADDRESS		0x0000010000000000LL
+#define HASH_CTRL_ADDR_MASK	0x0000003F
+
+/* Pre definitions of FMAN interface and Bpool structures */
+struct __fman_if;
+struct fman_if_bpool;
+/* Lists of fman interfaces and bpools */
+TAILQ_HEAD(rte_fman_if_list, __fman_if);
+
+/* Represents the different flavour of network interface */
+enum fman_mac_type {
+	fman_offline = 0,
+	fman_mac_1g,
+	fman_mac_10g,
+};
+
+struct mac_addr {
+	uint32_t   mac_addr_l;	/**< Lower 32 bits of 48-bit MAC address */
+	uint32_t   mac_addr_u;	/**< Upper 16 bits of 48-bit MAC address */
+};
+
+struct memac_regs {
+	/* General Control and Status */
+	uint32_t res0000[2];
+	uint32_t command_config;	/**< 0x008 Ctrl and cfg */
+	struct mac_addr mac_addr0;	/**< 0x00C-0x010 MAC_ADDR_0...1 */
+	uint32_t maxfrm;		/**< 0x014 Max frame length */
+	uint32_t res0018[5];
+	uint32_t hashtable_ctrl;	/**< 0x02C Hash table control */
+	uint32_t res0030[4];
+	uint32_t ievent;		/**< 0x040 Interrupt event */
+	uint32_t tx_ipg_length;
+	/**< 0x044 Transmitter inter-packet-gap */
+	uint32_t res0048;
+	uint32_t imask;			/**< 0x04C Interrupt mask */
+	uint32_t res0050;
+	uint32_t pause_quanta[4];	/**< 0x054 Pause quanta */
+	uint32_t pause_thresh[4];	/**< 0x064 Pause quanta threshold */
+	uint32_t rx_pause_status;	/**< 0x074 Receive pause status */
+	uint32_t res0078[2];
+	struct mac_addr mac_addr[MEMAC_NUM_OF_PADDRS];
+	/**< 0x80-0x0B4 mac padr */
+	uint32_t lpwake_timer;
+	/**< 0x0B8 Low Power Wakeup Timer */
+	uint32_t sleep_timer;
+	/**< 0x0BC Transmit EEE Low Power Timer */
+	uint32_t res00c0[8];
+	uint32_t statn_config;
+	/**< 0x0E0 Statistics configuration */
+	uint32_t res00e4[7];
+	/* Rx Statistics Counter */
+	uint32_t reoct_l;		/**<Rx Eth Octets Counter */
+	uint32_t reoct_u;
+	uint32_t roct_l;		/**<Rx Octet Counters */
+	uint32_t roct_u;
+	uint32_t raln_l;		/**<Rx Alignment Error Counter */
+	uint32_t raln_u;
+	uint32_t rxpf_l;		/**<Rx valid Pause Frame */
+	uint32_t rxpf_u;
+	uint32_t rfrm_l;		/**<Rx Frame counter */
+	uint32_t rfrm_u;
+	uint32_t rfcs_l;		/**<Rx frame check seq error */
+	uint32_t rfcs_u;
+	uint32_t rvlan_l;		/**<Rx Vlan Frame Counter */
+	uint32_t rvlan_u;
+	uint32_t rerr_l;		/**<Rx Frame error */
+	uint32_t rerr_u;
+	uint32_t ruca_l;		/**<Rx Unicast */
+	uint32_t ruca_u;
+	uint32_t rmca_l;		/**<Rx Multicast */
+	uint32_t rmca_u;
+	uint32_t rbca_l;		/**<Rx Broadcast */
+	uint32_t rbca_u;
+	uint32_t rdrp_l;		/**<Rx Dropped Packets */
+	uint32_t rdrp_u;
+	uint32_t rpkt_l;		/**<Rx packet */
+	uint32_t rpkt_u;
+	uint32_t rund_l;		/**<Rx undersized packets */
+	uint32_t rund_u;
+	uint32_t r64_l;			/**<Rx 64 byte */
+	uint32_t r64_u;
+	uint32_t r127_l;
+	uint32_t r127_u;
+	uint32_t r255_l;
+	uint32_t r255_u;
+	uint32_t r511_l;
+	uint32_t r511_u;
+	uint32_t r1023_l;
+	uint32_t r1023_u;
+	uint32_t r1518_l;
+	uint32_t r1518_u;
+	uint32_t r1519x_l;
+	uint32_t r1519x_u;
+	uint32_t rovr_l;		/**<Rx oversized but good */
+	uint32_t rovr_u;
+	uint32_t rjbr_l;		/**<Rx oversized with bad csum */
+	uint32_t rjbr_u;
+	uint32_t rfrg_l;		/**<Rx fragment Packet */
+	uint32_t rfrg_u;
+	uint32_t rcnp_l;		/**<Rx control packets (0x8808) */
+	uint32_t rcnp_u;
+	uint32_t rdrntp_l;		/**<Rx dropped due to FIFO overflow */
+	uint32_t rdrntp_u;
+	uint32_t res01d0[12];
+	/* Tx Statistics Counter */
+	uint32_t teoct_l;		/**<Tx eth octets */
+	uint32_t teoct_u;
+	uint32_t toct_l;		/**<Tx Octets */
+	uint32_t toct_u;
+	uint32_t res0210[2];
+	uint32_t txpf_l;		/**<Tx valid pause frame */
+	uint32_t txpf_u;
+	uint32_t tfrm_l;		/**<Tx frame counter */
+	uint32_t tfrm_u;
+	uint32_t tfcs_l;		/**<Tx FCS error */
+	uint32_t tfcs_u;
+	uint32_t tvlan_l;		/**<Tx Vlan Frame */
+	uint32_t tvlan_u;
+	uint32_t terr_l;		/**<Tx frame error */
+	uint32_t terr_u;
+	uint32_t tuca_l;		/**<Tx Unicast */
+	uint32_t tuca_u;
+	uint32_t tmca_l;		/**<Tx Multicast */
+	uint32_t tmca_u;
+	uint32_t tbca_l;		/**<Tx Broadcast */
+	uint32_t tbca_u;
+	uint32_t res0258[2];
+	uint32_t tpkt_l;		/**<Tx Packet */
+	uint32_t tpkt_u;
+	uint32_t tund_l;		/**<Tx Undersized */
+	uint32_t tund_u;
+	uint32_t t64_l;
+	uint32_t t64_u;
+	uint32_t t127_l;
+	uint32_t t127_u;
+	uint32_t t255_l;
+	uint32_t t255_u;
+	uint32_t t511_l;
+	uint32_t t511_u;
+	uint32_t t1023_l;
+	uint32_t t1023_u;
+	uint32_t t1518_l;
+	uint32_t t1518_u;
+	uint32_t t1519x_l;
+	uint32_t t1519x_u;
+	uint32_t res02a8[6];
+	uint32_t tcnp_l;		/**<Tx Control Packet type - 0x8808 */
+	uint32_t tcnp_u;
+	uint32_t res02c8[14];
+	/* Line Interface Control */
+	uint32_t if_mode;		/**< 0x300 Interface Mode Control */
+	uint32_t if_status;		/**< 0x304 Interface Status */
+	uint32_t res0308[14];
+	/* HiGig/2 */
+	uint32_t hg_config;		/**< 0x340 Control and cfg */
+	uint32_t res0344[3];
+	uint32_t hg_pause_quanta;	/**< 0x350 Pause quanta */
+	uint32_t res0354[3];
+	uint32_t hg_pause_thresh;	/**< 0x360 Pause quanta threshold */
+	uint32_t res0364[3];
+	uint32_t hgrx_pause_status;	/**< 0x370 Receive pause status */
+	uint32_t hg_fifos_status;	/**< 0x374 fifos status */
+	uint32_t rhm;			/**< 0x378 rx messages counter */
+	uint32_t thm;			/**< 0x37C tx messages counter */
+};
+
+struct rx_bmi_regs {
+	uint32_t fmbm_rcfg;		/**< Rx Configuration */
+	uint32_t fmbm_rst;		/**< Rx Status */
+	uint32_t fmbm_rda;		/**< Rx DMA attributes*/
+	uint32_t fmbm_rfp;		/**< Rx FIFO Parameters*/
+	uint32_t fmbm_rfed;		/**< Rx Frame End Data*/
+	uint32_t fmbm_ricp;		/**< Rx Internal Context Parameters*/
+	uint32_t fmbm_rim;		/**< Rx Internal Buffer Margins*/
+	uint32_t fmbm_rebm;		/**< Rx External Buffer Margins*/
+	uint32_t fmbm_rfne;		/**< Rx Frame Next Engine*/
+	uint32_t fmbm_rfca;		/**< Rx Frame Command Attributes.*/
+	uint32_t fmbm_rfpne;		/**< Rx Frame Parser Next Engine*/
+	uint32_t fmbm_rpso;		/**< Rx Parse Start Offset*/
+	uint32_t fmbm_rpp;		/**< Rx Policer Profile  */
+	uint32_t fmbm_rccb;		/**< Rx Coarse Classification Base */
+	uint32_t fmbm_reth;		/**< Rx Excessive Threshold */
+	uint32_t reserved003c[1];	/**< (0x03C-0x03F) */
+	uint32_t fmbm_rprai[FMAN_PORT_PRS_RESULT_WORDS_NUM];
+					/**< Rx Parse Results Array Init*/
+	uint32_t fmbm_rfqid;		/**< Rx Frame Queue ID*/
+	uint32_t fmbm_refqid;		/**< Rx Error Frame Queue ID*/
+	uint32_t fmbm_rfsdm;		/**< Rx Frame Status Discard Mask*/
+	uint32_t fmbm_rfsem;		/**< Rx Frame Status Error Mask*/
+	uint32_t fmbm_rfene;		/**< Rx Frame Enqueue Next Engine */
+	uint32_t reserved0074[0x2];	/**< (0x074-0x07C)  */
+	uint32_t fmbm_rcmne;
+	/**< Rx Frame Continuous Mode Next Engine */
+	uint32_t reserved0080[0x20];	/**< (0x080-0x0FF) */
+	uint32_t fmbm_ebmpi[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Buffer Manager pool Information */
+	uint32_t fmbm_acnt[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Allocate Counter */
+	uint32_t reserved0130[8];
+					/**< 0x130/0x140 - 0x15F reserved */
+	uint32_t fmbm_rcgm[FMAN_PORT_CG_MAP_NUM];
+					/**< Congestion Group Map*/
+	uint32_t fmbm_mpd;		/**< BM Pool Depletion  */
+	uint32_t reserved0184[0x1F];	/**< (0x184-0x1FF) */
+	uint32_t fmbm_rstc;		/**< Rx Statistics Counters*/
+	uint32_t fmbm_rfrc;		/**< Rx Frame Counter*/
+	uint32_t fmbm_rfbc;		/**< Rx Bad Frames Counter*/
+	uint32_t fmbm_rlfc;		/**< Rx Large Frames Counter*/
+	uint32_t fmbm_rffc;		/**< Rx Filter Frames Counter*/
+	uint32_t fmbm_rfdc;		/**< Rx Frame Discard Counter*/
+	uint32_t fmbm_rfldec;		/**< Rx Frames List DMA Error Counter*/
+	uint32_t fmbm_rodc;		/**< Rx Out of Buffers Discard Counter*/
+	uint32_t fmbm_rbdc;		/**< Rx Buffers Deallocate Counter*/
+	uint32_t reserved0224[0x17];	/**< (0x224-0x27F) */
+	uint32_t fmbm_rpc;		/**< Rx Performance Counters*/
+	uint32_t fmbm_rpcp;		/**< Rx Performance Count Parameters*/
+	uint32_t fmbm_rccn;		/**< Rx Cycle Counter*/
+	uint32_t fmbm_rtuc;		/**< Rx Tasks Utilization Counter*/
+	uint32_t fmbm_rrquc;
+	/**< Rx Receive Queue Utilization cntr*/
+	uint32_t fmbm_rduc;		/**< Rx DMA Utilization Counter*/
+	uint32_t fmbm_rfuc;		/**< Rx FIFO Utilization Counter*/
+	uint32_t fmbm_rpac;		/**< Rx Pause Activation Counter*/
+	uint32_t reserved02a0[0x18];	/**< (0x2A0-0x2FF) */
+	uint32_t fmbm_rdbg;		/**< Rx Debug */
+};
+
+struct fman_port_qmi_regs {
+	uint32_t fmqm_pnc;		/**< PortID n Configuration Register */
+	uint32_t fmqm_pns;		/**< PortID n Status Register */
+	uint32_t fmqm_pnts;		/**< PortID n Task Status Register */
+	uint32_t reserved00c[4];	/**< 0xn00C - 0xn01B */
+	uint32_t fmqm_pnen;		/**< PortID n Enqueue NIA Register */
+	uint32_t fmqm_pnetfc;		/**< PortID n Enq Total Frame Counter */
+	uint32_t reserved024[2];	/**< 0xn024 - 0xn02B */
+	uint32_t fmqm_pndn;		/**< PortID n Dequeue NIA Register */
+	uint32_t fmqm_pndc;		/**< PortID n Dequeue Config Register */
+	uint32_t fmqm_pndtfc;		/**< PortID n Dequeue tot Frame cntr */
+	uint32_t fmqm_pndfdc;		/**< PortID n Dequeue FQID Dflt Cntr */
+	uint32_t fmqm_pndcc;		/**< PortID n Dequeue Confirm Counter */
+};
+
+/* This struct exports parameters about an Fman network interface, determined
+ * from the device-tree.
+ */
+struct fman_if {
+	/* Which Fman this interface belongs to */
+	uint8_t fman_idx;
+	/* The type/speed of the interface */
+	enum fman_mac_type mac_type;
+	/* Boolean, set when mac type is memac */
+	uint8_t is_memac;
+	/* Boolean, set when PHY is RGMII */
+	uint8_t is_rgmii;
+	/* The index of this MAC (within the Fman it belongs to) */
+	uint8_t mac_idx;
+	/* The MAC address */
+	struct ether_addr mac_addr;
+	/* The Qman channel to schedule Tx FQs to */
+	u16 tx_channel_id;
+	/* The hard-coded FQIDs for this interface. Note: this doesn't cover
+	 * the PCD nor the "Rx default" FQIDs, which are configured via FMC
+	 * and its XML-based configuration.
+	 */
+	uint32_t fqid_rx_def;
+	uint32_t fqid_rx_err;
+	uint32_t fqid_tx_err;
+	uint32_t fqid_tx_confirm;
+
+	struct list_head bpool_list;
+	/* The node for linking this interface into "fman_if_list" */
+	struct list_head node;
+};
+
+/* This struct exposes parameters for buffer pools, extracted from the network
+ * interface settings in the device tree.
+ */
+struct fman_if_bpool {
+	uint32_t bpid;
+	uint64_t count;
+	uint64_t size;
+	uint64_t addr;
+	/* The node for linking this bpool into fman_if::bpool_list */
+	struct list_head node;
+};
+
+/* Internal Context transfer params - FMBM_RICP*/
+struct fman_if_ic_params {
+	/*IC offset in the packet buffer */
+	uint16_t iceof;
+	/*IC internal offset */
+	uint16_t iciof;
+	/*IC size to copy */
+	uint16_t icsz;
+};
+
+/* The exported "struct fman_if" type contains the subset of fields we want
+ * exposed. This struct is embedded in a larger "struct __fman_if" which
+ * contains the extra bits we *don't* want exposed.
+ */
+struct __fman_if {
+	struct fman_if __if;
+	char node_path[PATH_MAX];
+	uint64_t regs_size;
+	void *ccsr_map;
+	void *bmi_map;
+	void *qmi_map;
+	struct list_head node;
+};
+
+/* And this is the base list node that the interfaces are added to. (See
+ * fman_if_enable_all_rx() in fsl_fman.h for an example of its use.)
+ */
+extern const struct list_head *fman_if_list;
+
+extern int fman_ccsr_map_fd;
+
+/* To iterate the "bpool_list" for an interface. E.g.:
+ *        struct fman_if *p = get_ptr_to_some_interface();
+ *        struct fman_if_bpool *bp;
+ *        printf("Interface uses following BPIDs;\n");
+ *        fman_if_for_each_bpool(bp, p) {
+ *            printf("    %d\n", bp->bpid);
+ *            [...]
+ *        }
+ */
+#define fman_if_for_each_bpool(bp, __if) \
+	list_for_each_entry(bp, &(__if)->bpool_list, node)
+
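+/* Record "rc" in the caller's "_errno" variable (assumed to be in scope at
+ * the call site) and log the message together with the current errno.
+ */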
+#define FMAN_ERR(rc, fmt, args...) \
+	do { \
+		_errno = (rc); \
+		DPAA_BUS_LOG(ERR, fmt "(%d)", ##args, errno); \
+	} while (0)
+
+#define FMAN_IP_REV_1	0xC30C4
+#define FMAN_IP_REV_1_MAJOR_MASK 0x0000FF00
+#define FMAN_IP_REV_1_MAJOR_SHIFT 8
+#define FMAN_V3	0x06
+#define FMAN_V3_CONTEXTA_EN_A2V	0x10000000
+#define FMAN_V3_CONTEXTA_EN_OVOM	0x02000000
+#define FMAN_V3_CONTEXTA_EN_EBD	0x80000000
+#define FMAN_CONTEXTA_DIS_CHECKSUM	0x7ull
+#define FMAN_CONTEXTA_SET_OPCODE11 0x2000000b00000000
+extern u16 fman_ip_rev;
+extern u32 fman_dealloc_bufs_mask_hi;
+extern u32 fman_dealloc_bufs_mask_lo;
+
+/**
+ * Initialize the FMAN driver
+ *
+ * @args void
+ * @return
+ *	0 for success; error OTHERWISE
+ */
+int fman_init(void);
+
+/**
+ * Teardown the FMAN driver
+ *
+ * @args void
+ * @return void
+ */
+void fman_finish(void);
+
+#endif	/* __FMAN_H */
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
new file mode 100644
index 0000000..b77a678
--- /dev/null
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -0,0 +1,96 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __NETCFG_H
+#define __NETCFG_H
+
+#include <fman.h>
+#include <argp.h>
+
+/* Configuration information related to a specific ethernet port */
+struct fm_eth_port_cfg {
+	/** A list of PCD FQ ranges, obtained from FMC configuration */
+	struct list_head *list;
+	/** The "Rx default" FQID, obtained from FMC configuration */
+	uint32_t rx_def;
+	/** Other interface details are in the fman driver interface */
+	struct fman_if *fman_if;
+};
+
+struct netcfg_info {
+	uint8_t num_ethports;
+	/**< Number of ports */
+	struct fm_eth_port_cfg port_cfg[0];
+	/**< Variable structure array of size num_ethports */
+};
+
+struct interface_info {
+	char *name;
+	struct ether_addr mac_addr;
+	struct ether_addr peer_mac;
+	int mac_present;
+	int fman_enabled_mac_interface;
+};
+
+struct netcfg_interface {
+	uint8_t numof_netcfg_interface;
+	uint8_t numof_fman_enabled_macless;
+	struct interface_info interface_info[0];
+};
+
+/* Discovers the available network interfaces and returns the configuration
+ * information in newly allocated memory.
+ */
+struct netcfg_info *netcfg_acquire(void);
+
+/* cfg_ptr: configuration information pointer.
+ * Frees the resources allocated by the configuration layer.
+ */
+void netcfg_release(struct netcfg_info *cfg_ptr);
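+
+/* A minimal usage sketch for the two calls above (error handling elided;
+ * use_port() is a hypothetical helper):
+ *
+ *     struct netcfg_info *cfg = netcfg_acquire();
+ *     int i;
+ *
+ *     if (cfg) {
+ *         for (i = 0; i < cfg->num_ethports; i++)
+ *             use_port(cfg->port_cfg[i].fman_if);
+ *         netcfg_release(cfg);
+ *     }
+ */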
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* cfg_ptr: configuration information pointer.
+ * This function dumps configuration data to stdout.
+ */
+void dump_netcfg(struct netcfg_info *cfg_ptr);
+#endif
+
+#endif /* __NETCFG_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 06/40] bus/dpaa: add FMan hardware operations
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (4 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
                           ` (34 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/fman/fman_hw.c      | 562 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_fman.h       | 174 +++++++++
 drivers/bus/dpaa/include/fsl_fman_crc64.h | 263 ++++++++++++++
 4 files changed, 1000 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index f6e504d..fe65276 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -58,6 +58,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
+	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c
 
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
new file mode 100644
index 0000000..a7ca661
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -0,0 +1,562 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+#include <fman.h>
+/* This header declares things about Fman hardware itself (the format of status
+ * words and an inline implementation of CRC64). We include it only in order to
+ * instantiate the one global variable it depends on.
+ */
+#include <fsl_fman.h>
+#include <fsl_fman_crc64.h>
+
+/* Instantiate the global variable that the inline CRC64 implementation (in
+ * <fsl_fman.h>) depends on.
+ */
+DECLARE_FMAN_CRC64_TABLE();
+
+#define ETH_ADDR_TO_UINT64(eth_addr)                  \
+	(uint64_t)(((uint64_t)(eth_addr)[0] << 40) |   \
+	((uint64_t)(eth_addr)[1] << 32) |   \
+	((uint64_t)(eth_addr)[2] << 24) |   \
+	((uint64_t)(eth_addr)[3] << 16) |   \
+	((uint64_t)(eth_addr)[4] << 8) |    \
+	((uint64_t)(eth_addr)[5]))
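+/* For example, the MAC address 00:11:22:33:44:55 packs to
+ * 0x001122334455ULL.
+ */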
+
+void
+fman_if_set_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i|HASH_CTRL_MCAST_EN);
+}
+
+void
+fman_if_reset_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i & ~HASH_CTRL_MCAST_EN);
+}
+
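+/* Hash a 48-bit MAC address down to a 6-bit table index: each bit of the
+ * result is the parity (XOR of all the bits) of one byte of the address.
+ */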
+static
+uint32_t get_mac_hash_code(uint64_t eth_addr)
+{
+	uint64_t	mask1, mask2;
+	uint32_t	xorVal = 0;
+	uint8_t		i, j;
+
+	for (i = 0; i < 6; i++) {
+		mask1 = eth_addr & (uint64_t)0x01;
+		eth_addr >>= 1;
+
+		for (j = 0; j < 7; j++) {
+			mask2 = eth_addr & (uint64_t)0x01;
+			mask1 ^= mask2;
+			eth_addr >>= 1;
+		}
+
+		xorVal |= (mask1 << (5 - i));
+	}
+
+	return xorVal;
+}
+
+int
+fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	uint64_t eth_addr;
+	void *hashtable_ctrl;
+	uint32_t hash;
+
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	eth_addr = ETH_ADDR_TO_UINT64(eth);
+
+	if (!(eth_addr & GROUP_ADDRESS))
+		return -1;
+
+	hash = get_mac_hash_code(eth_addr) & HASH_CTRL_ADDR_MASK;
+	hash = hash | HASH_CTRL_MCAST_EN;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	out_be32(hashtable_ctrl, hash);
+
+	return 0;
+}
+
+int
+fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *mac_reg =
+		&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
+	u32 val = in_be32(mac_reg);
+
+	eth[0] = (val & 0x000000ff) >> 0;
+	eth[1] = (val & 0x0000ff00) >> 8;
+	eth[2] = (val & 0x00ff0000) >> 16;
+	eth[3] = (val & 0xff000000) >> 24;
+
+	mac_reg =  &((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_u;
+	val = in_be32(mac_reg);
+
+	eth[4] = (val & 0x000000ff) >> 0;
+	eth[5] = (val & 0x0000ff00) >> 8;
+
+	return 0;
+}
+
+void
+fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	void *reg;
+
+	if (addr_num) {
+		reg = &((struct memac_regs *)m->ccsr_map)->
+				mac_addr[addr_num-1].mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+		out_be32(reg, 0x0);
+	} else {
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+		out_be32(reg, 0x0);
+	}
+}
+
+int
+fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+
+	void *reg;
+	u32 val;
+
+	memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_l;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+
+	val = (m->__if.mac_addr.addr_bytes[0] |
+	       (m->__if.mac_addr.addr_bytes[1] << 8) |
+	       (m->__if.mac_addr.addr_bytes[2] << 16) |
+	       (m->__if.mac_addr.addr_bytes[3] << 24));
+	out_be32(reg, val);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+
+	val = ((m->__if.mac_addr.addr_bytes[4] << 0) |
+	       (m->__if.mac_addr.addr_bytes[5] << 8));
+	out_be32(reg, val);
+
+	return 0;
+}
+
+void
+fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	u32 value = 0;
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Rx Ignore Pause Frames */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	if (enable)
+		value = in_be32(cmdcfg) | CMD_CFG_PAUSE_IGNORE;
+	else
+		value = in_be32(cmdcfg) & ~CMD_CFG_PAUSE_IGNORE;
+
+	out_be32(cmdcfg, value);
+}
+
+void
+fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	unsigned int *maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Max frame length */
+	maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+	out_be32(maxfrm, (MAXFRM_RX_MASK & max_frame_len));
+}
+
+void
+fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+
+	/* read received packet count */
+	stats->ipackets = ((u64)in_be32(&regs->rfrm_u)) << 32 |
+			in_be32(&regs->rfrm_l);
+	stats->ibytes = ((u64)in_be32(&regs->roct_u)) << 32 |
+			in_be32(&regs->roct_l);
+	stats->ierrors = ((u64)in_be32(&regs->rerr_u)) << 32 |
+			in_be32(&regs->rerr_l);
+
+	/* read transmitted packet count */
+	stats->opackets = ((u64)in_be32(&regs->tfrm_u)) << 32 |
+			in_be32(&regs->tfrm_l);
+	stats->obytes = ((u64)in_be32(&regs->toct_u)) << 32 |
+			in_be32(&regs->toct_l);
+	stats->oerrors = ((u64)in_be32(&regs->terr_u)) << 32 |
+			in_be32(&regs->terr_l);
+}
+
+void
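+/* Each statistics counter is a 64-bit value split across an adjacent
+ * (low, high) pair of big-endian 32-bit registers; read n consecutive
+ * counters starting at the Rx Eth Octets counter (reoct_l).
+ */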
+fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+	int i;
+	uint64_t base_offset = offsetof(struct memac_regs, reoct_l);
+
+	for (i = 0; i < n; i++)
+		value[i] = ((u64)in_be32((char *)regs
+				+ base_offset + 8 * i + 4)) << 32 |
+				((u64)in_be32((char *)regs
+				+ base_offset + 8 * i));
+}
+
+void
+fman_if_stats_reset(struct fman_if *p)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+	uint32_t tmp;
+
+	tmp = in_be32(&regs->statn_config);
+
+	tmp |= STATS_CFG_CLR;
+
+	out_be32(&regs->statn_config, tmp);
+
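+	/* Wait for the hardware to finish clearing the counters */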
+	while (in_be32(&regs->statn_config) & STATS_CFG_CLR)
+		;
+}
+
+void
+fman_if_promiscuous_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_PROMIS_EN);
+}
+
+void
+fman_if_promiscuous_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Disable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) & (~CMD_CFG_PROMIS_EN));
+}
+
+void
+fman_if_enable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* enable Rx and Tx */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) | 3);
+}
+
+void
+fman_if_disable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* only disable Rx, not Tx */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) & ~(u32)2);
+}
+
+void
+fman_if_loopback_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) | IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_loopback_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+	/* Disable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) & ~IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) & ~CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
+		    int bpid, size_t bufsize)
+{
+	u32 fmbm_ebmpi;
+	u32 ebmpi_val_ace = 0xc0000000;
+	u32 ebmpi_mask = 0xffc00000;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ebmpi =
+	       in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0]);
+	fmbm_ebmpi = ebmpi_val_ace | (fmbm_ebmpi & ebmpi_mask) | (bpid << 16) |
+		     (bufsize);
+
+	out_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0],
+		 fmbm_ebmpi);
+}
+
+int
+fman_if_get_fc_quanta(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	return in_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0]);
+}
+
+int
+fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	out_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0],
+		 pause_quanta);
+	return 0;
+}
+
+int
+fman_if_get_fdoff(struct fman_if *fm_if)
+{
+	u32 fmbm_ricp;
+	int fdoff;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ricp =
+		   in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp);
+	/* iceof + icsz */
+	fdoff = ((fmbm_ricp & iceof_mask) >> 16) * 16 +
+		(fmbm_ricp & icsz_mask) * 16;
+
+	return fdoff;
+}
+
+void
+fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_refqid =
+			&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_refqid;
+	out_be32(fmbm_refqid, err_fqid);
+}
+
+int
+fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	val = in_be32(fmbm_ricp);
+
+	icp->iceof = (val & iceof_mask) >> 12;
+	icp->iciof = (val & iciof_mask) >> 4;
+	icp->icsz = (val & icsz_mask) << 4;
+
+	return 0;
+}
+
+int
+fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	val |= (icp->iceof << 12) & iceof_mask;
+	val |= (icp->iciof << 4) & iciof_mask;
+	val |= (icp->icsz >> 4) & icsz_mask;
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	out_be32(fmbm_ricp, val);
+
+	return 0;
+}
+
+void
+fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rebm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_rebm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rebm;
+
+	out_be32(fmbm_rebm, in_be32(fmbm_rebm) | (fd_offset << 16));
+}
+
+void
+fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	out_be32(reg_maxfrm, (in_be32(reg_maxfrm) & 0xFFFF0000) | max_frm);
+}
+
+uint16_t
+fman_if_get_maxfrm(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	return in_be32(reg_maxfrm) & 0x0000FFFF;
+}
+
+void
+fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmqm_pndn;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmqm_pndn = &((struct fman_port_qmi_regs *)__if->qmi_map)->fmqm_pndn;
+
+	out_be32(fmqm_pndn, nia);
+}
+
+void
+fman_if_discard_rx_errors(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rfsdm, *fmbm_rfsem;
+
+	fmbm_rfsem = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsem;
+	out_be32(fmbm_rfsem, 0);
+
+	/* Configure the discard mask to discard the error packets which have
+	 * DMA errors, Frame size error, Header error etc. The mask 0x010CE3F0
+	 * is configured to discard all the errors which come in the
+	 * FD[STATUS].
+	 */
+	fmbm_rfsdm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsdm;
+	out_be32(fmbm_rfsdm, 0x010CE3F0);
+}
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
new file mode 100644
index 0000000..ac38082
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -0,0 +1,174 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_H
+#define __FSL_FMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* The status field in the FD is updated on the Rx side by FMan with the
+ * following information. Refer to the field descriptions in the FMan Block
+ * Guide (FM BG).
+ */
+struct fm_status_t {
+	unsigned int reserved0:3;
+	unsigned int dcl4c:1; /* Don't Check L4 Checksum */
+	unsigned int reserved1:1;
+	unsigned int ufd:1; /* Unsupported Format */
+	unsigned int lge:1; /* Length Error */
+	unsigned int dme:1; /* DMA Error */
+
+	unsigned int reserved2:4;
+	unsigned int fpe:1; /* Frame physical Error */
+	unsigned int fse:1; /* Frame Size Error */
+	unsigned int dis:1; /* Discard by Classification */
+	unsigned int reserved3:1;
+
+	unsigned int eof:1; /* Key Extraction goes out of frame */
+	unsigned int nss:1; /* No Scheme selected */
+	unsigned int kso:1; /* Key Size Overflow */
+	unsigned int reserved4:1;
+	unsigned int fcl:2; /* Frame Color */
+	unsigned int ipp:1; /* Illegal Policer Profile Selected */
+	unsigned int flm:1; /* Frame Length Mismatch */
+	unsigned int pte:1; /* Parser Timeout */
+	unsigned int isp:1; /* Invalid Soft Parser Instruction */
+	unsigned int phe:1; /* Header Error during parsing */
+	unsigned int frdr:1; /* Frame Dropped by disabled port */
+	unsigned int reserved5:4;
+} __attribute__ ((__packed__));
+
+/* Set MAC address for a particular interface */
+int fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num);
+
+/* Remove a MAC address for a particular interface */
+void fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num);
+
+/* Get the FMAN statistics */
+void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
+
+/* Reset the FMAN statistics */
+void fman_if_stats_reset(struct fman_if *p);
+
+/* Get all of the FMAN statistics */
+void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
+
+/* Set ignore pause option for a specific interface */
+void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
+
+/* Set max frame length */
+void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
+
+/* Enable/disable Rx promiscuous mode on specified interface */
+void fman_if_promiscuous_enable(struct fman_if *p);
+void fman_if_promiscuous_disable(struct fman_if *p);
+
+/* Enable/disable Rx on specific interfaces */
+void fman_if_enable_rx(struct fman_if *p);
+void fman_if_disable_rx(struct fman_if *p);
+
+/* Enable/disable loopback on specific interfaces */
+void fman_if_loopback_enable(struct fman_if *p);
+void fman_if_loopback_disable(struct fman_if *p);
+
+/* Set buffer pool on specific interface */
+void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
+		    size_t bufsize);
+
+/* Get Flow Control pause quanta on specific interface */
+int fman_if_get_fc_quanta(struct fman_if *fm_if);
+
+/* Set Flow Control pause quanta on specific interface */
+int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
+
+/* Set default error fqid on specific interface */
+void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
+
+/* Get IC transfer params */
+int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
+
+/* Set IC transfer params */
+int fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp);
+
+/* Get interface fd->offset value */
+int fman_if_get_fdoff(struct fman_if *fm_if);
+
+/* Set interface fd->offset value */
+void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
+
+/* Get interface Max Frame length (MTU) */
+uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
+
+/* Set interface Max Frame length (MTU) */
+void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
+
+/* Set interface next invoked action for dequeue operation */
+void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
+
+/* Discard error packets on Rx */
+void fman_if_discard_rx_errors(struct fman_if *fm_if);
+
+void fman_if_set_mcast_filter_table(struct fman_if *p);
+
+void fman_if_reset_mcast_filter_table(struct fman_if *p);
+
+int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
+
+int fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth);
+
+/* Enable/disable Rx on all interfaces */
+static inline void fman_if_enable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_enable_rx(__if);
+}
+
+static inline void fman_if_disable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_disable_rx(__if);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_FMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_fman_crc64.h b/drivers/bus/dpaa/include/fsl_fman_crc64.h
new file mode 100644
index 0000000..af5803f
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman_crc64.h
@@ -0,0 +1,263 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_CRC64_H
+#define __FSL_FMAN_CRC64_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+ * The following definitions provide a software implementation of the CRC64
+ * algorithm implemented within Fman.
+ *
+ * The following example shows how to compute a CRC64 hash value based on
+ * SRC_IP, DST_IP and ESP_SPI values
+ *
+ *     #define compute_hash(saddr,daddr,spi) \
+ *        do { \
+ *           uint64_t result; \
+ *           result = fman_crc64_init(); \
+ *           result = fman_crc64_compute_32bit(saddr, result); \
+ *           result = fman_crc64_compute_32bit(daddr, result); \
+ *           result = fman_crc64_compute_32bit(spi, result); \
+ *           return (uint32_t) result & RC_HASH_MASK; \
+ *        } while (0);
+ *
+ * If hashing over a different number of fields (or of different types) is
+ * required, this can be implemented using the following primitives.
+ */
+
+/* The following table provides the constants used by the Fman CRC64
+ * implementation. The table is instantiated within the DPAA fman driver.
+ * However if the application is not going to be linked against the DPAA fman
+ * driver but will use this Fman CRC64 implementation, then it will need to
+ * instantiate this table by using the DECLARE_FMAN_CRC64_TABLE() macro.
+ */
+struct fman_crc64_t {
+	uint64_t initial;
+	uint64_t table[1 << 8];
+};
+extern struct fman_crc64_t FMAN_CRC64_ECMA_182;
+#define DECLARE_FMAN_CRC64_TABLE() \
+struct fman_crc64_t FMAN_CRC64_ECMA_182 = { \
+	0xFFFFFFFFFFFFFFFFULL, \
+	{ \
+		0x0000000000000000ULL, 0xb32e4cbe03a75f6fULL, \
+		0xf4843657a840a05bULL, 0x47aa7ae9abe7ff34ULL, \
+		0x7bd0c384ff8f5e33ULL, 0xc8fe8f3afc28015cULL, \
+		0x8f54f5d357cffe68ULL, 0x3c7ab96d5468a107ULL, \
+		0xf7a18709ff1ebc66ULL, 0x448fcbb7fcb9e309ULL, \
+		0x0325b15e575e1c3dULL, 0xb00bfde054f94352ULL, \
+		0x8c71448d0091e255ULL, 0x3f5f08330336bd3aULL, \
+		0x78f572daa8d1420eULL, 0xcbdb3e64ab761d61ULL, \
+		0x7d9ba13851336649ULL, 0xceb5ed8652943926ULL, \
+		0x891f976ff973c612ULL, 0x3a31dbd1fad4997dULL, \
+		0x064b62bcaebc387aULL, 0xb5652e02ad1b6715ULL, \
+		0xf2cf54eb06fc9821ULL, 0x41e11855055bc74eULL, \
+		0x8a3a2631ae2dda2fULL, 0x39146a8fad8a8540ULL, \
+		0x7ebe1066066d7a74ULL, 0xcd905cd805ca251bULL, \
+		0xf1eae5b551a2841cULL, 0x42c4a90b5205db73ULL, \
+		0x056ed3e2f9e22447ULL, 0xb6409f5cfa457b28ULL, \
+		0xfb374270a266cc92ULL, 0x48190ecea1c193fdULL, \
+		0x0fb374270a266cc9ULL, 0xbc9d3899098133a6ULL, \
+		0x80e781f45de992a1ULL, 0x33c9cd4a5e4ecdceULL, \
+		0x7463b7a3f5a932faULL, 0xc74dfb1df60e6d95ULL, \
+		0x0c96c5795d7870f4ULL, 0xbfb889c75edf2f9bULL, \
+		0xf812f32ef538d0afULL, 0x4b3cbf90f69f8fc0ULL, \
+		0x774606fda2f72ec7ULL, 0xc4684a43a15071a8ULL, \
+		0x83c230aa0ab78e9cULL, 0x30ec7c140910d1f3ULL, \
+		0x86ace348f355aadbULL, 0x3582aff6f0f2f5b4ULL, \
+		0x7228d51f5b150a80ULL, 0xc10699a158b255efULL, \
+		0xfd7c20cc0cdaf4e8ULL, 0x4e526c720f7dab87ULL, \
+		0x09f8169ba49a54b3ULL, 0xbad65a25a73d0bdcULL, \
+		0x710d64410c4b16bdULL, 0xc22328ff0fec49d2ULL, \
+		0x85895216a40bb6e6ULL, 0x36a71ea8a7ace989ULL, \
+		0x0adda7c5f3c4488eULL, 0xb9f3eb7bf06317e1ULL, \
+		0xfe5991925b84e8d5ULL, 0x4d77dd2c5823b7baULL, \
+		0x64b62bcaebc387a1ULL, 0xd7986774e864d8ceULL, \
+		0x90321d9d438327faULL, 0x231c512340247895ULL, \
+		0x1f66e84e144cd992ULL, 0xac48a4f017eb86fdULL, \
+		0xebe2de19bc0c79c9ULL, 0x58cc92a7bfab26a6ULL, \
+		0x9317acc314dd3bc7ULL, 0x2039e07d177a64a8ULL, \
+		0x67939a94bc9d9b9cULL, 0xd4bdd62abf3ac4f3ULL, \
+		0xe8c76f47eb5265f4ULL, 0x5be923f9e8f53a9bULL, \
+		0x1c4359104312c5afULL, 0xaf6d15ae40b59ac0ULL, \
+		0x192d8af2baf0e1e8ULL, 0xaa03c64cb957be87ULL, \
+		0xeda9bca512b041b3ULL, 0x5e87f01b11171edcULL, \
+		0x62fd4976457fbfdbULL, 0xd1d305c846d8e0b4ULL, \
+		0x96797f21ed3f1f80ULL, 0x2557339fee9840efULL, \
+		0xee8c0dfb45ee5d8eULL, 0x5da24145464902e1ULL, \
+		0x1a083bacedaefdd5ULL, 0xa9267712ee09a2baULL, \
+		0x955cce7fba6103bdULL, 0x267282c1b9c65cd2ULL, \
+		0x61d8f8281221a3e6ULL, 0xd2f6b4961186fc89ULL, \
+		0x9f8169ba49a54b33ULL, 0x2caf25044a02145cULL, \
+		0x6b055fede1e5eb68ULL, 0xd82b1353e242b407ULL, \
+		0xe451aa3eb62a1500ULL, 0x577fe680b58d4a6fULL, \
+		0x10d59c691e6ab55bULL, 0xa3fbd0d71dcdea34ULL, \
+		0x6820eeb3b6bbf755ULL, 0xdb0ea20db51ca83aULL, \
+		0x9ca4d8e41efb570eULL, 0x2f8a945a1d5c0861ULL, \
+		0x13f02d374934a966ULL, 0xa0de61894a93f609ULL, \
+		0xe7741b60e174093dULL, 0x545a57dee2d35652ULL, \
+		0xe21ac88218962d7aULL, 0x5134843c1b317215ULL, \
+		0x169efed5b0d68d21ULL, 0xa5b0b26bb371d24eULL, \
+		0x99ca0b06e7197349ULL, 0x2ae447b8e4be2c26ULL, \
+		0x6d4e3d514f59d312ULL, 0xde6071ef4cfe8c7dULL, \
+		0x15bb4f8be788911cULL, 0xa6950335e42fce73ULL, \
+		0xe13f79dc4fc83147ULL, 0x521135624c6f6e28ULL, \
+		0x6e6b8c0f1807cf2fULL, 0xdd45c0b11ba09040ULL, \
+		0x9aefba58b0476f74ULL, 0x29c1f6e6b3e0301bULL, \
+		0xc96c5795d7870f42ULL, 0x7a421b2bd420502dULL, \
+		0x3de861c27fc7af19ULL, 0x8ec62d7c7c60f076ULL, \
+		0xb2bc941128085171ULL, 0x0192d8af2baf0e1eULL, \
+		0x4638a2468048f12aULL, 0xf516eef883efae45ULL, \
+		0x3ecdd09c2899b324ULL, 0x8de39c222b3eec4bULL, \
+		0xca49e6cb80d9137fULL, 0x7967aa75837e4c10ULL, \
+		0x451d1318d716ed17ULL, 0xf6335fa6d4b1b278ULL, \
+		0xb199254f7f564d4cULL, 0x02b769f17cf11223ULL, \
+		0xb4f7f6ad86b4690bULL, 0x07d9ba1385133664ULL, \
+		0x4073c0fa2ef4c950ULL, 0xf35d8c442d53963fULL, \
+		0xcf273529793b3738ULL, 0x7c0979977a9c6857ULL, \
+		0x3ba3037ed17b9763ULL, 0x888d4fc0d2dcc80cULL, \
+		0x435671a479aad56dULL, 0xf0783d1a7a0d8a02ULL, \
+		0xb7d247f3d1ea7536ULL, 0x04fc0b4dd24d2a59ULL, \
+		0x3886b22086258b5eULL, 0x8ba8fe9e8582d431ULL, \
+		0xcc0284772e652b05ULL, 0x7f2cc8c92dc2746aULL, \
+		0x325b15e575e1c3d0ULL, 0x8175595b76469cbfULL, \
+		0xc6df23b2dda1638bULL, 0x75f16f0cde063ce4ULL, \
+		0x498bd6618a6e9de3ULL, 0xfaa59adf89c9c28cULL, \
+		0xbd0fe036222e3db8ULL, 0x0e21ac88218962d7ULL, \
+		0xc5fa92ec8aff7fb6ULL, 0x76d4de52895820d9ULL, \
+		0x317ea4bb22bfdfedULL, 0x8250e80521188082ULL, \
+		0xbe2a516875702185ULL, 0x0d041dd676d77eeaULL, \
+		0x4aae673fdd3081deULL, 0xf9802b81de97deb1ULL, \
+		0x4fc0b4dd24d2a599ULL, 0xfceef8632775faf6ULL, \
+		0xbb44828a8c9205c2ULL, 0x086ace348f355aadULL, \
+		0x34107759db5dfbaaULL, 0x873e3be7d8faa4c5ULL, \
+		0xc094410e731d5bf1ULL, 0x73ba0db070ba049eULL, \
+		0xb86133d4dbcc19ffULL, 0x0b4f7f6ad86b4690ULL, \
+		0x4ce50583738cb9a4ULL, 0xffcb493d702be6cbULL, \
+		0xc3b1f050244347ccULL, 0x709fbcee27e418a3ULL, \
+		0x3735c6078c03e797ULL, 0x841b8ab98fa4b8f8ULL, \
+		0xadda7c5f3c4488e3ULL, 0x1ef430e13fe3d78cULL, \
+		0x595e4a08940428b8ULL, 0xea7006b697a377d7ULL, \
+		0xd60abfdbc3cbd6d0ULL, 0x6524f365c06c89bfULL, \
+		0x228e898c6b8b768bULL, 0x91a0c532682c29e4ULL, \
+		0x5a7bfb56c35a3485ULL, 0xe955b7e8c0fd6beaULL, \
+		0xaeffcd016b1a94deULL, 0x1dd181bf68bdcbb1ULL, \
+		0x21ab38d23cd56ab6ULL, 0x9285746c3f7235d9ULL, \
+		0xd52f0e859495caedULL, 0x6601423b97329582ULL, \
+		0xd041dd676d77eeaaULL, 0x636f91d96ed0b1c5ULL, \
+		0x24c5eb30c5374ef1ULL, 0x97eba78ec690119eULL, \
+		0xab911ee392f8b099ULL, 0x18bf525d915feff6ULL, \
+		0x5f1528b43ab810c2ULL, 0xec3b640a391f4fadULL, \
+		0x27e05a6e926952ccULL, 0x94ce16d091ce0da3ULL, \
+		0xd3646c393a29f297ULL, 0x604a2087398eadf8ULL, \
+		0x5c3099ea6de60cffULL, 0xef1ed5546e415390ULL, \
+		0xa8b4afbdc5a6aca4ULL, 0x1b9ae303c601f3cbULL, \
+		0x56ed3e2f9e224471ULL, 0xe5c372919d851b1eULL, \
+		0xa26908783662e42aULL, 0x114744c635c5bb45ULL, \
+		0x2d3dfdab61ad1a42ULL, 0x9e13b115620a452dULL, \
+		0xd9b9cbfcc9edba19ULL, 0x6a978742ca4ae576ULL, \
+		0xa14cb926613cf817ULL, 0x1262f598629ba778ULL, \
+		0x55c88f71c97c584cULL, 0xe6e6c3cfcadb0723ULL, \
+		0xda9c7aa29eb3a624ULL, 0x69b2361c9d14f94bULL, \
+		0x2e184cf536f3067fULL, 0x9d36004b35545910ULL, \
+		0x2b769f17cf112238ULL, 0x9858d3a9ccb67d57ULL, \
+		0xdff2a94067518263ULL, 0x6cdce5fe64f6dd0cULL, \
+		0x50a65c93309e7c0bULL, 0xe388102d33392364ULL, \
+		0xa4226ac498dedc50ULL, 0x170c267a9b79833fULL, \
+		0xdcd7181e300f9e5eULL, 0x6ff954a033a8c131ULL, \
+		0x28532e49984f3e05ULL, 0x9b7d62f79be8616aULL, \
+		0xa707db9acf80c06dULL, 0x14299724cc279f02ULL, \
+		0x5383edcd67c06036ULL, 0xe0ada17364673f59ULL} \
+}
+
+/*
+ * Return the initial CRC seed. Use the value returned from this API as the
+ * "crc" parameter to the first call to add data.
+ */
+static inline uint64_t fman_crc64_init(void)
+{
+	return FMAN_CRC64_ECMA_182.initial;
+}
+
+/* Updates the CRC with arbitrary data */
+static inline uint64_t fman_crc64_update(uint64_t crc,
+					 void *data, unsigned int len)
+{
+	uint8_t *p = data;
+	while (len--)
+		crc = FMAN_CRC64_ECMA_182.table[(crc ^ *(p++)) & 0xff] ^
+				(crc >> 8);
+	return crc;
+}
+
+/* Shorthands for updating the CRC with 8/16/32 bits of data.
+ * IMPORTANT NOTE: the typed "data" arguments should not be mistaken for
+ * host-endian numerical values; the assumption is that these values contain
+ * big-endian (i.e. network byte order) data.
+ */
+static inline uint64_t fman_crc64_compute_32bit(uint32_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_16bit(uint16_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_8bit(uint8_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+
+/*
+ * Finalise the CRC (using 1's complement)
+ */
+static inline uint64_t fman_crc64_finish(uint64_t seed)
+{
+	return ~seed;
+}
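+
+/* A minimal sketch (crc64_of_buf() is a hypothetical helper, not part of
+ * this API): chain the primitives above to compute the Fman CRC64 of an
+ * arbitrary buffer.
+ *
+ *     static inline uint64_t crc64_of_buf(void *buf, unsigned int len)
+ *     {
+ *         uint64_t crc = fman_crc64_init();
+ *
+ *         crc = fman_crc64_update(crc, buf, len);
+ *         return fman_crc64_finish(crc);
+ *     }
+ */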
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_FMAN_CRC64_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 07/40] bus/dpaa: enable DPAA IOCTL portal driver
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (5 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
                           ` (33 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Userspace applications interact with DPAA blocks using this IOCTL driver.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile             |   4 +-
 drivers/bus/dpaa/base/qbman/process.c | 331 ++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h    |  88 +++++++++
 drivers/bus/dpaa/include/process.h    | 107 +++++++++++
 4 files changed, 529 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/process.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index fe65276..f06521c 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -43,6 +43,7 @@ CFLAGS += -Wno-cast-qual
 CFLAGS += -D _GNU_SOURCE
 CFLAGS += -I$(RTE_BUS_DPAA)/
 CFLAGS += -I$(RTE_BUS_DPAA)/include
+CFLAGS += -I$(RTE_BUS_DPAA)/base/qbman
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -60,7 +61,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
-	base/fman/netcfg_layer.c
+	base/fman/netcfg_layer.c \
+	base/qbman/process.c
 
 # Link Pthread
 LDLIBS += -lpthread
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
new file mode 100644
index 0000000..b8ec539
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -0,0 +1,331 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <assert.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+
+#include "process.h"
+
+#include <fsl_usd.h>
+
+/* As higher-level drivers will be built on top of this (dma_mem, qbman, ...),
+ * it's preferable that the process driver itself not provide any exported API.
+ * As such, combined with the fact that none of these operations are
+ * performance critical, it is justified to use lazy initialisation, so that's
+ * what the lock is for.
+ */
+static int fd = -1;
+static pthread_mutex_t fd_init_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static int check_fd(void)
+{
+	int ret;
+
+	if (fd >= 0)
+		return 0;
+	ret = pthread_mutex_lock(&fd_init_lock);
+	assert(!ret);
+	/* check again with the lock held */
+	if (fd < 0)
+		fd = open(PROCESS_PATH, O_RDWR);
+	ret = pthread_mutex_unlock(&fd_init_lock);
+	assert(!ret);
+	return (fd >= 0) ? 0 : -ENODEV;
+}
+
+#define DPAA_IOCTL_MAGIC 'u'
+struct dpaa_ioctl_id_alloc {
+	uint32_t base; /* Return value, the start of the allocated range */
+	enum dpaa_id_type id_type; /* what kind of resource(s) to allocate */
+	uint32_t num; /* how many IDs to allocate (and return value) */
+	uint32_t align; /* must be a power of 2, 0 is treated like 1 */
+	int partial; /* whether to allow less than 'num' */
+};
+
+struct dpaa_ioctl_id_release {
+	/* Inputs */
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+struct dpaa_ioctl_id_reserve {
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+#define DPAA_IOCTL_ID_ALLOC \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x01, struct dpaa_ioctl_id_alloc)
+#define DPAA_IOCTL_ID_RELEASE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x02, struct dpaa_ioctl_id_release)
+#define DPAA_IOCTL_ID_RESERVE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x0A, struct dpaa_ioctl_id_reserve)
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial)
+{
+	struct dpaa_ioctl_id_alloc id = {
+		.id_type = id_type,
+		.num = num,
+		.align = align,
+		.partial = partial
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	ret = ioctl(fd, DPAA_IOCTL_ID_ALLOC, &id);
+	if (ret)
+		return ret;
+	for (ret = 0; ret < (int)id.num; ret++)
+		base[ret] = id.base + ret;
+	return id.num;
+}
+
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_release id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret) {
+		fprintf(stderr, "Process FD failure\n");
+		return;
+	}
+	ret = ioctl(fd, DPAA_IOCTL_ID_RELEASE, &id);
+	if (ret)
+		fprintf(stderr, "Process FD ioctl failure type %d base 0x%x num %d\n",
+			id_type, base, num);
+}
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_reserve id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	return ioctl(fd, DPAA_IOCTL_ID_RESERVE, &id);
+}
+
+/***************************************/
+/* Mapping and using QMan/BMan portals */
+/***************************************/
+
+#define DPAA_IOCTL_PORTAL_MAP \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x07, struct dpaa_ioctl_portal_map)
+#define DPAA_IOCTL_PORTAL_UNMAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x08, struct dpaa_portal_map)
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_MAP, params);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_MAP)");
+		return ret;
+	}
+	return 0;
+}
+
+int process_portal_unmap(struct dpaa_portal_map *map)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_UNMAP, map);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_UNMAP)");
+		return ret;
+	}
+	return 0;
+}
+
+#define DPAA_IOCTL_PORTAL_IRQ_MAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x09, struct dpaa_ioctl_irq_map)
+
+int process_portal_irq_map(int ifd, struct dpaa_ioctl_irq_map *map)
+{
+	map->fd = fd;
+	return ioctl(ifd, DPAA_IOCTL_PORTAL_IRQ_MAP, map);
+}
+
+int process_portal_irq_unmap(int ifd)
+{
+	return close(ifd);
+}
+
+struct dpaa_ioctl_raw_portal {
+	/* inputs */
+	enum dpaa_portal_type type; /* Type of portal to allocate */
+
+	uint8_t enable_stash; /* set to non-zero to turn on stashing */
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map or QBMAN_ANY_PORTAL_IDX
+	 * for don't care.  The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+#define DPAA_IOCTL_ALLOC_RAW_PORTAL \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x0C, struct dpaa_ioctl_raw_portal)
+
+#define DPAA_IOCTL_FREE_RAW_PORTAL \
+	_IOR(DPAA_IOCTL_MAGIC, 0x0D, struct dpaa_ioctl_raw_portal)
+
+static int process_portal_allocate(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_ALLOC_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_ALLOC_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+static int process_portal_free(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_FREE_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_FREE_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.enable_stash = portal->enable_stash;
+	input.cpu = portal->cpu;
+	input.cache = portal->cache;
+	input.window = portal->window;
+	input.sdest = portal->sdest;
+
+	ret = process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena = input.cena;
+	return 0;
+}
+
+int qman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.enable_stash = 0;
+
+	ret = process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena = input.cena;
+	return 0;
+}
+
+int bman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
new file mode 100644
index 0000000..4ff48c6
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_USD_H
+#define __FSL_USD_H
+
+#include <compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define QBMAN_ANY_PORTAL_IDX 0xffffffff
+
+/* Obtain and free raw (uninitialized) portals */
+
+struct dpaa_raw_portal {
+	/* inputs */
+
+	/* set to non-zero to turn on stashing */
+	uint8_t enable_stash;
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map or QBMAN_ANY_PORTAL_IDX
+	 * for don't care.  The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int qman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int bman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_USD_H */
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
new file mode 100644
index 0000000..989ddcd
--- /dev/null
+++ b/drivers/bus/dpaa/include/process.h
@@ -0,0 +1,107 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PROCESS_H
+#define	__PROCESS_H
+
+#include <compat.h>
+
+/* The process device underlies process-wide user/kernel interactions, such as
+ * mapping dma_mem memory and providing accompanying ioctl()s. (This isn't used
+ * for portals, which use one UIO device each.)
+ */
+#define PROCESS_PATH		"/dev/fsl-usdpaa"
+
+/* Allocation of resource IDs uses a generic interface. This enum is used to
+ * distinguish between the type of underlying object being manipulated.
+ */
+enum dpaa_id_type {
+	dpaa_id_fqid,
+	dpaa_id_bpid,
+	dpaa_id_qpool,
+	dpaa_id_cgrid,
+	dpaa_id_max /* <-- not a valid type, represents the number of types */
+};
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial);
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+/* Mapping and using QMan/BMan portals */
+enum dpaa_portal_type {
+	dpaa_portal_qman,
+	dpaa_portal_bman,
+};
+
+struct dpaa_ioctl_portal_map {
+	/* Input parameter: whether a qman or bman portal is required. */
+	enum dpaa_portal_type type;
+	/* Specifies a specific portal index to map or 0xffffffff
+	 * for don't care.
+	 */
+	uint32_t index;
+
+	/* Return value if the map succeeds, this gives the mapped
+	 * cache-inhibited (cinh) and cache-enabled (cena) addresses.
+	 */
+	struct dpaa_portal_map {
+		void *cinh;
+		void *cena;
+	} addr;
+	/* Qman-specific return values */
+	u16 channel;
+	uint32_t pools;
+};
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params);
+int process_portal_unmap(struct dpaa_portal_map *map);
+
+struct dpaa_ioctl_irq_map {
+	enum dpaa_portal_type type; /* Type of portal to map */
+	int fd; /* File descriptor that contains the portal */
+	void *portal_cinh; /* Cache inhibited area to identify the portal */
+};
+
+int process_portal_irq_map(int fd, struct dpaa_ioctl_irq_map *irq);
+int process_portal_irq_unmap(int fd);
+
+#endif	/*  __PROCESS_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 08/40] bus/dpaa: add layer for interrupt emulation using pthread
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (6 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
                           ` (32 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

An interrupt manager is emulated on top of pthreads. The QBMAN layer
registers handlers with it so that it is notified about any interrupt
request from the DPAA blocks in userspace.
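
As an illustration only (not part of the patch), a sketch of how a
consumer of this emulation layer is expected to use it; my_portal_isr and
struct my_state are hypothetical, and irqreturn_t/IRQ_HANDLED are assumed
to come from the compat definitions this driver builds on:

    #include "dpaa_sys.h"

    /* Hypothetical per-portal state passed back to the handler */
    struct my_state { int events; };

    static irqreturn_t my_portal_isr(int irq, void *arg)
    {
        struct my_state *s = arg;

        s->events++; /* acknowledge/process the portal event here */
        return IRQ_HANDLED;
    }

    static int example(struct my_state *s, int irq_fd)
    {
        /* Register: stores (irq, isr, arg) on an internal list */
        int ret = qbman_request_irq(irq_fd, my_portal_isr, 0,
                                    "my-portal", s);
        if (ret)
            return ret;

        /* The emulation layer later dispatches to the handler */
        qbman_invoke_irq(irq_fd);

        /* Deregister and free the list node */
        return qbman_free_irq(irq_fd, s);
    }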

Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile              |   3 +-
 drivers/bus/dpaa/base/qbman/dpaa_sys.c | 136 +++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h |  61 +++++++++++++++
 3 files changed, 199 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index f06521c..5b76a4b 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -62,7 +62,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
-	base/qbman/process.c
+	base/qbman/process.c \
+	base/qbman/dpaa_sys.c
 
 # Link Pthread
 LDLIBS += -lpthread
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.c b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
new file mode 100644
index 0000000..0017da5
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
@@ -0,0 +1,136 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <process.h>
+#include "dpaa_sys.h"
+
+struct process_interrupt {
+	int irq;
+	irqreturn_t (*isr)(int irq, void *arg);
+	unsigned long flags;
+	const char *name;
+	void *arg;
+	struct list_head node;
+};
+
+static COMPAT_LIST_HEAD(process_irq_list);
+static pthread_mutex_t process_irq_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static void process_interrupt_install(struct process_interrupt *irq)
+{
+	int ret;
+	/* Add the irq to the end of the list */
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_add_tail(&irq->node, &process_irq_list);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static void process_interrupt_remove(struct process_interrupt *irq)
+{
+	int ret;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_del(&irq->node);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static struct process_interrupt *process_interrupt_find(int irq_num)
+{
+	int ret;
+	struct process_interrupt *i = NULL;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_for_each_entry(i, &process_irq_list, node) {
+		if (i->irq == irq_num)
+			goto done;
+	}
+done:
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+	return i;
+}
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name,
+		      void *arg __maybe_unused)
+{
+	struct process_interrupt *irq_node =
+		kmalloc(sizeof(*irq_node), GFP_KERNEL);
+
+	if (!irq_node)
+		return -ENOMEM;
+	irq_node->irq = irq;
+	irq_node->isr = isr;
+	irq_node->flags = flags;
+	irq_node->name = name;
+	irq_node->arg = arg;
+	process_interrupt_install(irq_node);
+	return 0;
+}
+
+int qbman_free_irq(int irq, __maybe_unused void *arg)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (!irq_node)
+		return -EINVAL;
+	process_interrupt_remove(irq_node);
+	kfree(irq_node);
+	return 0;
+}
+
+/* This is the interface from the platform-specific driver code to obtain
+ * interrupt handlers that have been registered.
+ */
+void qbman_invoke_irq(int irq)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (irq_node)
+		irq_node->isr(irq, irq_node->arg);
+}
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.h b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
new file mode 100644
index 0000000..bee9fe5
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
@@ -0,0 +1,61 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_SYS_H
+#define __DPAA_SYS_H
+
+#include <of.h>
+
+/* For 2-element tables related to cache-inhibited and cache-enabled mappings */
+#define DPAA_PORTAL_CE 0
+#define DPAA_PORTAL_CI 1
+
+#define DPAA_ASSERT(x) RTE_ASSERT(x)
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name, void *arg);
+int qbman_free_irq(int irq, void *arg);
+
+void qbman_invoke_irq(int irq);
+
+#endif /* __DPAA_SYS_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 09/40] bus/dpaa: add routines for managing a RB tree
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (7 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
                           ` (31 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

QMan frames are managed over an RB-tree data structure.
This patch introduces the routines necessary to implement such a tree.
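
As an illustration only (not part of the patch), a sketch of the API the
IMPLEMENT_DPAA_RBTREE macro generates; struct my_fq and the my_fq_tree
name are hypothetical, and the u32 typedef is assumed to come from the
bus's compat headers:

    #include <dpaa_rbtree.h>

    /* Hypothetical object keyed by a 32-bit frame queue ID */
    struct my_fq {
        u32 fqid;
        struct rb_node node;
    };

    /* Generates my_fq_tree_push(), my_fq_tree_del() and my_fq_tree_find(),
     * keeping entries ordered by 'fqid'.
     */
    IMPLEMENT_DPAA_RBTREE(my_fq_tree, struct my_fq, node, fqid);

    static struct my_fq *example(struct my_fq *fq)
    {
        struct dpa_rbtree tree = DPAA_RBTREE;

        if (my_fq_tree_push(&tree, fq))
            return NULL; /* -EBUSY: an entry with this fqid exists */
        return my_fq_tree_find(&tree, fq->fqid);
    }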

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/include/dpaa_rbtree.h | 143 +++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h

diff --git a/drivers/bus/dpaa/include/dpaa_rbtree.h b/drivers/bus/dpaa/include/dpaa_rbtree.h
new file mode 100644
index 0000000..f8c9b59
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_rbtree.h
@@ -0,0 +1,143 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_RBTREE_H
+#define __DPAA_RBTREE_H
+
+#include <rte_common.h>
+/************/
+/* RB-trees */
+/************/
+
+/* Linux has a good RB-tree implementation that we can't use (GPL). It also has
+ * a flat/hooked-in interface that virtually requires license-contamination in
+ * order to write a caller-compatible implementation. Instead, I've created an
+ * RB-tree encapsulation on top of linux's primitives (it does some of the work
+ * the client logic would normally do), and this gives us something we can
+ * reimplement on LWE. Unfortunately, there are no good, free RB-tree
+ * implementations out there that are license-compatible and "flat" (ie. no
+ * dynamic allocation). I did find a malloc-based one that I could convert, but
+ * that will be a task for later on. For now, LWE's RB-tree is implemented using
+ * an ordered linked-list.
+ *
+ * Note, the only linux-esque type is "struct rb_node", because it's used
+ * statically in the exported header, so it can't be opaque. Our version doesn't
+ * include a "rb_parent_color" field because we're doing linked-list instead of
+ * a true rb-tree.
+ */
+
+struct rb_node {
+	struct rb_node *prev, *next;
+};
+
+struct dpa_rbtree {
+	struct rb_node *head, *tail;
+};
+
+#define DPAA_RBTREE { NULL, NULL }
+static inline void dpa_rbtree_init(struct dpa_rbtree *tree)
+{
+	tree->head = tree->tail = NULL;
+}
+
+#define QMAN_NODE2OBJ(ptr, type, node_field) \
+	(type *)((char *)ptr - offsetof(type, node_field))
+
+#define IMPLEMENT_DPAA_RBTREE(name, type, node_field, val_field) \
+static inline int name##_push(struct dpa_rbtree *tree, type *obj) \
+{ \
+	struct rb_node *node = tree->head; \
+	if (!node) { \
+		tree->head = tree->tail = &obj->node_field; \
+		obj->node_field.prev = obj->node_field.next = NULL; \
+		return 0; \
+	} \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (obj->val_field == item->val_field) \
+			return -EBUSY; \
+		if (obj->val_field < item->val_field) { \
+			if (tree->head == node) \
+				tree->head = &obj->node_field; \
+			else \
+				node->prev->next = &obj->node_field; \
+			obj->node_field.prev = node->prev; \
+			obj->node_field.next = node; \
+			node->prev = &obj->node_field; \
+			return 0; \
+		} \
+		node = node->next; \
+	} \
+	obj->node_field.prev = tree->tail; \
+	obj->node_field.next = NULL; \
+	tree->tail->next = &obj->node_field; \
+	tree->tail = &obj->node_field; \
+	return 0; \
+} \
+static inline void name##_del(struct dpa_rbtree *tree, type *obj) \
+{ \
+	if (tree->head == &obj->node_field) { \
+		if (tree->tail == &obj->node_field) \
+			/* Only item in the list */ \
+			tree->head = tree->tail = NULL; \
+		else { \
+			/* Is the head, next != NULL */ \
+			tree->head = tree->head->next; \
+			tree->head->prev = NULL; \
+		} \
+	} else { \
+		if (tree->tail == &obj->node_field) { \
+			/* Is the tail, prev != NULL */ \
+			tree->tail = tree->tail->prev; \
+			tree->tail->next = NULL; \
+		} else { \
+			/* Is neither the head nor the tail */ \
+			obj->node_field.prev->next = obj->node_field.next; \
+			obj->node_field.next->prev = obj->node_field.prev; \
+		} \
+	} \
+} \
+static inline type *name##_find(struct dpa_rbtree *tree, u32 val) \
+{ \
+	struct rb_node *node = tree->head; \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (val == item->val_field) \
+			return item; \
+		if (val < item->val_field) \
+			return NULL; \
+		node = node->next; \
+	} \
+	return NULL; \
+}
+
+#endif /* __DPAA_RBTREE_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 10/40] bus/dpaa: add QMAN interface driver
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (8 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
                           ` (30 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Queue Manager (QMan) is a hardware queue management block that
allows software and accelerators on the datapath to enqueue and dequeue
frames in order to communicate.

This is a part of the QBMAN DPAA block.
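
As an illustration only (not part of the patch), a sketch of the intended
bring-up sequence on a datapath thread; example_thread_setup is
hypothetical, and the declarations are assumed to be the ones this patch
adds to fsl_usd.h:

    #include <fsl_usd.h>

    static int example_thread_setup(void)
    {
        int ret;

        /* Once per process: probe the device-tree for the QMan IP
         * revision, pool-channel range, CCSR map and clock frequency.
         */
        ret = qman_global_init();
        if (ret)
            return ret;

        /* Per thread (must be affine to exactly one cpu): map any free
         * portal and its emulated-interrupt fd for the calling thread.
         */
        ret = qman_thread_init();
        if (ret)
            return ret;

        /* ... enqueue/dequeue through the affine portal ... */

        return qman_thread_finish();
    }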

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    1 +
 drivers/bus/dpaa/base/qbman/qman_driver.c |  271 +++++++
 drivers/bus/dpaa/base/qbman/qman_priv.h   |  303 +++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 1254 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   13 +
 5 files changed, 1842 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 5b76a4b..c9c15f8 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -63,6 +63,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman_driver.c \
 	base/qbman/dpaa_sys.c
 
 # Link Pthread
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
new file mode 100644
index 0000000..80dde20
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -0,0 +1,271 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "qman_priv.h"
+#include <sys/ioctl.h>
+#include <rte_branch_prediction.h>
+
+/* Global variable containing revision id (even on non-control plane systems
+ * where CCSR isn't available).
+ */
+u16 qman_ip_rev;
+u16 qm_channel_pool1 = QMAN_CHANNEL_POOL1;
+u16 qm_channel_caam = QMAN_CHANNEL_CAAM;
+u16 qm_channel_pme = QMAN_CHANNEL_PME;
+
+/* CCSR map address to access CCSR-based registers */
+void *qman_ccsr_map;
+/* The qman clock frequency */
+u32 qman_clk;
+
+static __thread int fd = -1;
+static __thread struct qm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_qman
+};
+
+static int fsl_qman_portal_init(uint32_t index, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu\n");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!\n");
+		return -EINVAL;
+	}
+
+	/* Allocate and map a qman portal */
+	map.index = index;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	pcfg.channel = map.channel;
+	pcfg.pools = map.pools;
+	pcfg.index = map.index;
+
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+
+	fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("QMan irq init failed\n");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
+	pcfg.is_shared = is_shared;
+	pcfg.node = NULL;
+	pcfg.irq = fd;
+
+	irq_map.type = dpaa_portal_qman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_qman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int qman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_qman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int qman_thread_finish(void)
+{
+	return fsl_qman_portal_finish();
+}
+
+void qman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int qman_global_init(void)
+{
+	const struct device_node *dt_node;
+	int ret = 0;
+	size_t lenp;
+	const u32 *chanid;
+	static int ccsr_map_fd;
+	const uint32_t *qman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	const u32 *clk;
+
+	static int done;
+
+	if (done)
+		return -EBUSY;
+
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman-portal");
+	if (!dt_node) {
+		pr_err("No qman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,qman-portal-1.0.0"))
+		pr_err("QMan rev1.0 on P4080 rev1 is not supported!\n");
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.1") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.1.0"))
+		qman_ip_rev = QMAN_REV11;
+	else if	(of_device_is_compatible(dt_node, "fsl,qman-portal-1.2") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.2.0"))
+		qman_ip_rev = QMAN_REV12;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-2.0.0"))
+		qman_ip_rev = QMAN_REV20;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.1"))
+		qman_ip_rev = QMAN_REV30;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.3"))
+		qman_ip_rev = QMAN_REV31;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.1"))
+		qman_ip_rev = QMAN_REV32;
+	else
+		qman_ip_rev = QMAN_REV11;
+
+	if (!qman_ip_rev) {
+		pr_err("Unknown qman portal version\n");
+		return -ENODEV;
+	}
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30) {
+		qm_channel_pool1 = QMAN_CHANNEL_POOL1_REV3;
+		qm_channel_caam = QMAN_CHANNEL_CAAM_REV3;
+		qm_channel_pme = QMAN_CHANNEL_PME_REV3;
+	}
+
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,pool-channel-range");
+	if (!dt_node) {
+		pr_err("No qman pool channel range available\n");
+		return -ENODEV;
+	}
+	chanid = of_get_property(dt_node, "fsl,pool-channel-range", &lenp);
+	if (!chanid) {
+		pr_err("Can not get pool-channel-range property\n");
+		return -EINVAL;
+	}
+
+	/* get ccsr base */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman");
+	if (!dt_node) {
+		pr_err("No qman device node available\n");
+		return -ENODEV;
+	}
+	qman_addr = of_get_address(dt_node, 0, &regs_size, NULL);
+	if (!qman_addr) {
+		pr_err("of_get_address cannot return qman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(dt_node, qman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Can not open /dev/mem for qman ccsr map\n");
+		return ccsr_map_fd;
+	}
+
+	qman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (qman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map qman ccsr base\n");
+		return -EINVAL;
+	}
+
+	clk = of_get_property(dt_node, "clock-frequency", NULL);
+	if (!clk)
+		pr_warn("Can't find Qman clock frequency\n");
+	else
+		qman_clk = be32_to_cpu(*clk);
+
+	return ret;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
new file mode 100644
index 0000000..4a11e40
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -0,0 +1,303 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __QMAN_PRIV_H
+#define __QMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_qman.h>
+
+/* Congestion Groups */
+/*
+ * This wrapper represents a bit-array for the state of the 256 QMan congestion
+ * groups. It is also used as a *mask* for congestion groups, eg. so we ignore
+ * those that don't concern us. We harness the structure and accessor details
+ * already used in the management command to query congestion groups.
+ */
+struct qman_cgrs {
+	struct __qm_mcr_querycongestion q;
+};
+
+static inline void qman_cgrs_init(struct qman_cgrs *c)
+{
+	memset(c, 0, sizeof(*c));
+}
+
+static inline void qman_cgrs_fill(struct qman_cgrs *c)
+{
+	memset(c, 0xff, sizeof(*c));
+}
+
+static inline int qman_cgrs_get(struct qman_cgrs *c, int num)
+{
+	return QM_MCR_QUERYCONGESTION(&c->q, num);
+}
+
+static inline void qman_cgrs_set(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] |= (0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline void qman_cgrs_unset(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] &= ~(0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline int qman_cgrs_next(struct qman_cgrs *c, int num)
+{
+	while ((++num < (int)__CGR_NUM) && !qman_cgrs_get(c, num))
+		;
+	return num;
+}
+
+static inline void qman_cgrs_cp(struct qman_cgrs *dest,
+				const struct qman_cgrs *src)
+{
+	memcpy(dest, src, sizeof(*dest));
+}
+
+static inline void qman_cgrs_and(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) & *(_b++);
+}
+
+static inline void qman_cgrs_xor(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) ^ *(_b++);
+}
+
+/* used by CCSR and portal interrupt code */
+enum qm_isr_reg {
+	qm_isr_status = 0,
+	qm_isr_enable = 1,
+	qm_isr_disable = 2,
+	qm_isr_inhibit = 3
+};
+
+struct qm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	struct device_node *node;
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* If the caller enables DQRR stashing (and thus wishes to operate the
+	 * portal from only one cpu), this is the logical CPU that the portal
+	 * will stash to. Whether stashing is enabled or not, this setting is
+	 * also used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.)
+	 */
+	int is_shared;
+	/* The portal's dedicated channel id, use this value for initialising
+	 * frame queues to target this portal when scheduled.
+	 */
+	u16 channel;
+	/* A mask of which pool channels this portal has dequeue access to
+	 * (using QM_SDQCR_CHANNELS_POOL(n) for the bitmask).
+	 */
+	u32 pools;
+
+};
+
+/* Revision info (for errata and feature handling) */
+#define QMAN_REV11 0x0101
+#define QMAN_REV12 0x0102
+#define QMAN_REV20 0x0200
+#define QMAN_REV30 0x0300
+#define QMAN_REV31 0x0301
+#define QMAN_REV32 0x0302
+extern u16 qman_ip_rev; /* 0 if uninitialised, otherwise QMAN_REVx */
+extern u32 qman_clk;
+
+int qm_set_wpm(int wpm);
+int qm_get_wpm(int *wpm);
+
+struct qman_portal *qman_create_affine_portal(
+			const struct qm_portal_config *config,
+			const struct qman_cgrs *cgrs);
+const struct qm_portal_config *qman_destroy_affine_portal(void);
+
+struct qm_portal_config *qm_get_unused_portal(void);
+struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
+
+void qm_put_unused_portal(struct qm_portal_config *pcfg);
+void qm_set_liodns(struct qm_portal_config *pcfg);
+
+/* This CGR feature is supported by h/w and required by unit-tests and the
+ * debugfs hooks, so is implemented in the driver. However it allows an explicit
+ * corruption of h/w fields by s/w that are usually incorruptible (because the
+ * counters are usually maintained entirely within h/w). As such, we declare
+ * this API internally.
+ */
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result);
+
+/*   QMan s/w corenet portal, low-level i/face	 */
+
+/*
+ * Choose one SOURCE. Choose one COUNT. Choose one
+ * dequeue TYPE. Choose TOKEN (8-bit).
+ * If SOURCE == CHANNELS,
+ *   Choose CHANNELS_DEDICATED and/or CHANNELS_POOL(n).
+ *   You can choose DEDICATED_PRECEDENCE if the portal channel should have
+ *   priority.
+ * If SOURCE == SPECIFICWQ,
+ *     Either select the work-queue ID with SPECIFICWQ_WQ(), or select the
+ *     channel (SPECIFICWQ_DEDICATED or SPECIFICWQ_POOL()) and specify the
+ *     work-queue priority (0-7) with SPECIFICWQ_WQ() - either way, you get the
+ *     same value.
+ */
+#define QM_SDQCR_SOURCE_CHANNELS	0x0
+#define QM_SDQCR_SOURCE_SPECIFICWQ	0x40000000
+#define QM_SDQCR_COUNT_EXACT1		0x0
+#define QM_SDQCR_COUNT_UPTO3		0x20000000
+#define QM_SDQCR_DEDICATED_PRECEDENCE	0x10000000
+#define QM_SDQCR_TYPE_MASK		0x03000000
+#define QM_SDQCR_TYPE_NULL		0x0
+#define QM_SDQCR_TYPE_PRIO_QOS		0x01000000
+#define QM_SDQCR_TYPE_ACTIVE_QOS	0x02000000
+#define QM_SDQCR_TYPE_ACTIVE		0x03000000
+#define QM_SDQCR_TOKEN_MASK		0x00ff0000
+#define QM_SDQCR_TOKEN_SET(v)		(((v) & 0xff) << 16)
+#define QM_SDQCR_TOKEN_GET(v)		(((v) >> 16) & 0xff)
+#define QM_SDQCR_CHANNELS_DEDICATED	0x00008000
+#define QM_SDQCR_SPECIFICWQ_MASK	0x000000f7
+#define QM_SDQCR_SPECIFICWQ_DEDICATED	0x00000000
+#define QM_SDQCR_SPECIFICWQ_POOL(n)	((n) << 4)
+#define QM_SDQCR_SPECIFICWQ_WQ(n)	(n)
+
+#define QM_VDQCR_FQID_MASK		0x00ffffff
+#define QM_VDQCR_FQID(n)		((n) & QM_VDQCR_FQID_MASK)
+
+#define QM_EQCR_VERB_VBIT		0x80
+#define QM_EQCR_VERB_CMD_MASK		0x61	/* but only one value; */
+#define QM_EQCR_VERB_CMD_ENQUEUE	0x01
+#define QM_EQCR_VERB_COLOUR_MASK	0x18	/* 4 possible values; */
+#define QM_EQCR_VERB_COLOUR_GREEN	0x00
+#define QM_EQCR_VERB_COLOUR_YELLOW	0x08
+#define QM_EQCR_VERB_COLOUR_RED		0x10
+#define QM_EQCR_VERB_COLOUR_OVERRIDE	0x18
+#define QM_EQCR_VERB_INTERRUPT		0x04	/* on command consumption */
+#define QM_EQCR_VERB_ORP		0x02	/* enable order restoration */
+#define QM_EQCR_DCA_ENABLE		0x80
+#define QM_EQCR_DCA_PARK		0x40
+#define QM_EQCR_DCA_IDXMASK		0x0f	/* "DQRR::idx" goes here */
+#define QM_EQCR_SEQNUM_NESN		0x8000	/* Advance NESN */
+#define QM_EQCR_SEQNUM_NLIS		0x4000	/* More fragments to come */
+#define QM_EQCR_SEQNUM_SEQMASK		0x3fff	/* sequence number goes here */
+#define QM_EQCR_FQID_NULL		0	/* eg. for an ORP seqnum hole */
+
+#define QM_MCC_VERB_VBIT		0x80
+#define QM_MCC_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_MCC_VERB_INITFQ_PARKED	0x40
+#define QM_MCC_VERB_INITFQ_SCHED	0x41
+#define QM_MCC_VERB_QUERYFQ		0x44
+#define QM_MCC_VERB_QUERYFQ_NP		0x45	/* "non-programmable" fields */
+#define QM_MCC_VERB_QUERYWQ		0x46
+#define QM_MCC_VERB_QUERYWQ_DEDICATED	0x47
+#define QM_MCC_VERB_ALTER_SCHED		0x48	/* Schedule FQ */
+#define QM_MCC_VERB_ALTER_FE		0x49	/* Force Eligible FQ */
+#define QM_MCC_VERB_ALTER_RETIRE	0x4a	/* Retire FQ */
+#define QM_MCC_VERB_ALTER_OOS		0x4b	/* Take FQ out of service */
+#define QM_MCC_VERB_ALTER_FQXON		0x4d	/* FQ XON */
+#define QM_MCC_VERB_ALTER_FQXOFF	0x4e	/* FQ XOFF */
+#define QM_MCC_VERB_INITCGR		0x50
+#define QM_MCC_VERB_MODIFYCGR		0x51
+#define QM_MCC_VERB_CGRTESTWRITE	0x52
+#define QM_MCC_VERB_QUERYCGR		0x58
+#define QM_MCC_VERB_QUERYCONGESTION	0x59
+
+/*
+ * Used by all portal interrupt registers except 'inhibit'
+ * Channels with frame availability
+ */
+#define QM_PIRQ_DQAVAIL	0x0000ffff
+
+/* The DQAVAIL interrupt fields break down into these bits; */
+#define QM_DQAVAIL_PORTAL	0x8000		/* Portal channel */
+#define QM_DQAVAIL_POOL(n)	(0x8000 >> (n))	/* Pool channel, n==[1..15] */
+#define QM_DQAVAIL_MASK		0xffff
+/* This mask contains all the "irqsource" bits visible to API users */
+#define QM_PIRQ_VISIBLE	(QM_PIRQ_SLOW | QM_PIRQ_DQRI)
+
+/* These are qm_<reg>_<verb>(). So for example, qm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define qm_isr_status_read(qm)		__qm_isr_read(qm, qm_isr_status)
+#define qm_isr_status_clear(qm, m)	__qm_isr_write(qm, qm_isr_status, m)
+#define qm_isr_enable_read(qm)		__qm_isr_read(qm, qm_isr_enable)
+#define qm_isr_enable_write(qm, v)	__qm_isr_write(qm, qm_isr_enable, v)
+#define qm_isr_disable_read(qm)		__qm_isr_read(qm, qm_isr_disable)
+#define qm_isr_disable_write(qm, v)	__qm_isr_write(qm, qm_isr_disable, v)
+/* TODO: unfortunate name-clash here, reword? */
+#define qm_isr_inhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 1)
+#define qm_isr_uninhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 0)
+
+#define QMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+
+#endif /* __QMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
new file mode 100644
index 0000000..784fe60
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -0,0 +1,1254 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_QMAN_H
+#define __FSL_QMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <dpaa_rbtree.h>
+
+/* Last updated for v00.800 of the BG */
+
+/* Hardware constants */
+#define QM_CHANNEL_SWPORTAL0 0
+#define QMAN_CHANNEL_POOL1 0x21
+#define QMAN_CHANNEL_CAAM 0x80
+#define QMAN_CHANNEL_PME 0xa0
+#define QMAN_CHANNEL_POOL1_REV3 0x401
+#define QMAN_CHANNEL_CAAM_REV3 0x840
+#define QMAN_CHANNEL_PME_REV3 0x860
+extern u16 qm_channel_pool1;
+extern u16 qm_channel_caam;
+extern u16 qm_channel_pme;
+enum qm_dc_portal {
+	qm_dc_portal_fman0 = 0,
+	qm_dc_portal_fman1 = 1,
+	qm_dc_portal_caam = 2,
+	qm_dc_portal_pme = 3
+};
+
+/* Portal processing (interrupt) sources */
+#define QM_PIRQ_CCSCI	0x00200000	/* CEETM Congestion State Change */
+#define QM_PIRQ_CSCI	0x00100000	/* Congestion State Change */
+#define QM_PIRQ_EQCI	0x00080000	/* Enqueue Command Committed */
+#define QM_PIRQ_EQRI	0x00040000	/* EQCR Ring (below threshold) */
+#define QM_PIRQ_DQRI	0x00020000	/* DQRR Ring (non-empty) */
+#define QM_PIRQ_MRI	0x00010000	/* MR Ring (non-empty) */
+/*
+ * This mask contains all the interrupt sources that need handling except DQRI,
+ * ie. that if present should trigger slow-path processing.
+ */
+#define QM_PIRQ_SLOW	(QM_PIRQ_CSCI | QM_PIRQ_EQCI | QM_PIRQ_EQRI | \
+			QM_PIRQ_MRI | QM_PIRQ_CCSCI)
+
+/* For qman_static_dequeue_*** APIs */
+#define QM_SDQCR_CHANNELS_POOL_MASK	0x00007fff
+/* for n in [1,15] */
+#define QM_SDQCR_CHANNELS_POOL(n)	(0x00008000 >> (n))
+/* for conversion from n of qm_channel */
+static inline u32 QM_SDQCR_CHANNELS_POOL_CONV(u16 channel)
+{
+	return QM_SDQCR_CHANNELS_POOL(channel + 1 - qm_channel_pool1);
+}
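+/*
+ * Worked example (illustrative, assuming qm_channel_pool1 == 0x21 as on
+ * pre-rev3 hardware): for pool channel 0x21, the conversion gives n = 1,
+ * so QM_SDQCR_CHANNELS_POOL(1) == 0x00004000.
+ */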
+
+/* For qman_volatile_dequeue(); Choose one PRECEDENCE. EXACT is optional. Use
+ * NUMFRAMES(n) (6-bit) or NUMFRAMES_TILLEMPTY to fill in the frame-count. Use
+ * FQID(n) to fill in the frame queue ID.
+ */
+#define QM_VDQCR_PRECEDENCE_VDQCR	0x0
+#define QM_VDQCR_PRECEDENCE_SDQCR	0x80000000
+#define QM_VDQCR_EXACT			0x40000000
+#define QM_VDQCR_NUMFRAMES_MASK		0x3f000000
+#define QM_VDQCR_NUMFRAMES_SET(n)	(((n) & 0x3f) << 24)
+#define QM_VDQCR_NUMFRAMES_GET(n)	(((n) >> 24) & 0x3f)
+#define QM_VDQCR_NUMFRAMES_TILLEMPTY	QM_VDQCR_NUMFRAMES_SET(0)
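+/*
+ * Composition sketch (illustrative; the FQID bits are filled in separately
+ * with the FQID(n) macro noted above): dequeue exactly 3 frames, with VDQCR
+ * taking precedence over SDQCR:
+ *   u32 vdqcr = QM_VDQCR_PRECEDENCE_VDQCR | QM_VDQCR_EXACT |
+ *               QM_VDQCR_NUMFRAMES_SET(3);
+ */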
+
+/* --- QMan data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct qm_eqcr_entry;	/* EQCR (EnQueue Command Ring) entries */
+struct qm_dqrr_entry;	/* DQRR (DeQueue Response Ring) entries */
+struct qm_mr_entry;	/* MR (Message Ring) entries */
+struct qm_mc_command;	/* MC (Management Command) command */
+struct qm_mc_result;	/* MC result */
+
+#define QM_FD_FORMAT_SG		0x4
+#define QM_FD_FORMAT_LONG	0x2
+#define QM_FD_FORMAT_COMPOUND	0x1
+enum qm_fd_format {
+	/*
+	 * 'contig' implies a contiguous buffer, whereas 'sg' implies a
+	 * scatter-gather table. 'big' implies a 29-bit length with no offset
+	 * field, otherwise length is 20-bit and offset is 9-bit. 'compound'
+	 * implies a s/g-like table, where each entry itself represents a frame
+	 * (contiguous or scatter-gather) and the 29-bit "length" is
+	 * interpreted purely for congestion calculations, ie. a "congestion
+	 * weight".
+	 */
+	qm_fd_contig = 0,
+	qm_fd_contig_big = QM_FD_FORMAT_LONG,
+	qm_fd_sg = QM_FD_FORMAT_SG,
+	qm_fd_sg_big = QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG,
+	qm_fd_compound = QM_FD_FORMAT_COMPOUND
+};
+
+/* Capitalised versions are un-typed but can be used in static expressions */
+#define QM_FD_CONTIG	0
+#define QM_FD_CONTIG_BIG QM_FD_FORMAT_LONG
+#define QM_FD_SG	QM_FD_FORMAT_SG
+#define QM_FD_SG_BIG	(QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG)
+#define QM_FD_COMPOUND	QM_FD_FORMAT_COMPOUND
+
+/* "Frame Descriptor (FD)" */
+struct qm_fd {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 dd:2;	/* dynamic debug */
+			u8 liodn_offset:6;
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 eliodn_offset:4;
+			u8 __reserved:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u8 liodn_offset:6;
+			u8 dd:2;	/* dynamic debug */
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 __reserved:4;
+			u8 eliodn_offset:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#endif
+		};
+		struct {
+			u64 __notaddress:24;
+			/* More efficient address accessor */
+			u64 addr:40;
+		};
+		u64 opaque_addr;
+	};
+	/* The 'format' field indicates the interpretation of the remaining 29
+	 * bits of the 32-bit word. For packing reasons, it is duplicated in the
+	 * other union elements. Note, union'd structs are difficult to use with
+	 * static initialisation under gcc, in which case use the "opaque" form
+	 * with one of the macros.
+	 */
+	union {
+		/* For easier/faster copying of this part of the fd (eg. from a
+		 * DQRR entry to an EQCR entry) copy 'opaque'
+		 */
+		u32 opaque;
+		/* If 'format' is _contig or _sg, 20b length and 9b offset */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format format:3;
+			u16 offset:9;
+			u32 length20:20;
+#else
+			u32 length20:20;
+			u16 offset:9;
+			enum qm_fd_format format:3;
+#endif
+		};
+		/* If 'format' is _contig_big or _sg_big, 29b length */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format1:3;
+			u32 length29:29;
+#else
+			u32 length29:29;
+			enum qm_fd_format _format1:3;
+#endif
+		};
+		/* If 'format' is _compound, 29b "congestion weight" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format2:3;
+			u32 cong_weight:29;
+#else
+			u32 cong_weight:29;
+			enum qm_fd_format _format2:3;
+#endif
+		};
+	};
+	union {
+		u32 cmd;
+		u32 status;
+	};
+} __attribute__((aligned(8)));
+#define QM_FD_DD_NULL		0x00
+#define QM_FD_PID_MASK		0x3f
+static inline u64 qm_fd_addr_get64(const struct qm_fd *fd)
+{
+	return fd->addr;
+}
+
+static inline dma_addr_t qm_fd_addr(const struct qm_fd *fd)
+{
+	return (dma_addr_t)fd->addr;
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_fd_addr_set64(fd, v) \
+	do { \
+		struct qm_fd *__fd931 = (fd); \
+		__fd931->addr = v; \
+	} while (0)
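+
+/*
+ * Usage sketch (illustrative; 'buf_dma', 'headroom' and 'pkt_len' are
+ * hypothetical): prepare an FD describing one contiguous buffer.
+ *   struct qm_fd fd;
+ *   fd.opaque_addr = 0;
+ *   qm_fd_addr_set64(&fd, buf_dma);    // 40-bit bus address
+ *   fd.format = qm_fd_contig;
+ *   fd.offset = headroom;              // 9-bit offset into the buffer
+ *   fd.length20 = pkt_len;             // 20-bit length
+ */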
+
+/* Scatter/Gather table entry */
+struct qm_sg_entry {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1[3];
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[3];
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:24;
+			u64 addr:40;
+#else
+			u64 addr:40;
+			u64 __notaddress:24;
+#endif
+		};
+		u64 opaque;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 extension:1;	/* Extension bit */
+			u32 final:1;		/* Final bit */
+			u32 length:30;
+#else
+			u32 length:30;
+			u32 final:1;		/* Final bit */
+			u32 extension:1;	/* Extension bit */
+#endif
+		};
+		u32 val;
+	};
+	u8 __reserved2;
+	u8 bpid;
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved3:3;
+			u16 offset:13;
+#else
+			u16 offset:13;
+			u16 __reserved3:3;
+#endif
+		};
+		u16 val_off;
+	};
+} __packed;
+static inline u64 qm_sg_entry_get64(const struct qm_sg_entry *sg)
+{
+	return sg->addr;
+}
+
+static inline dma_addr_t qm_sg_addr(const struct qm_sg_entry *sg)
+{
+	return (dma_addr_t)sg->addr;
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_sg_entry_set64(sg, v) \
+	do { \
+		struct qm_sg_entry *__sg931 = (sg); \
+		__sg931->addr = v; \
+	} while (0)
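+
+/*
+ * Usage sketch (illustrative; 'sg', 'seg0_dma' etc. are hypothetical):
+ * populate and terminate a two-entry S/G table.
+ *   qm_sg_entry_set64(&sg[0], seg0_dma);
+ *   sg[0].length = seg0_len;
+ *   qm_sg_entry_set64(&sg[1], seg1_dma);
+ *   sg[1].length = seg1_len;
+ *   sg[1].final = 1;    // marks the last entry in the table
+ */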
+
+/* See 1.5.8.1: "Enqueue Command" */
+struct qm_eqcr_entry {
+	u8 __dont_write_directly__verb;
+	u8 dca;
+	u16 seqnum;
+	u32 orp;	/* 24-bit */
+	u32 fqid;	/* 24-bit */
+	u32 tag;
+	struct qm_fd fd;
+	u8 __reserved3[32];
+} __packed;
+
+
+/* "Frame Dequeue Response" */
+struct qm_dqrr_entry {
+	u8 verb;
+	u8 stat;
+	u16 seqnum;	/* 15-bit */
+	u8 tok;
+	u8 __reserved2[3];
+	u32 fqid;	/* 24-bit */
+	u32 contextB;
+	struct qm_fd fd;
+	u8 __reserved4[32];
+};
+
+#define QM_DQRR_VERB_VBIT		0x80
+#define QM_DQRR_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_DQRR_VERB_FRAME_DEQUEUE	0x60	/* "this format" */
+#define QM_DQRR_STAT_FQ_EMPTY		0x80	/* FQ empty */
+#define QM_DQRR_STAT_FQ_HELDACTIVE	0x40	/* FQ held active */
+#define QM_DQRR_STAT_FQ_FORCEELIGIBLE	0x20	/* FQ was force-eligible'd */
+#define QM_DQRR_STAT_FD_VALID		0x10	/* has a non-NULL FD */
+#define QM_DQRR_STAT_UNSCHEDULED	0x02	/* Unscheduled dequeue */
+#define QM_DQRR_STAT_DQCR_EXPIRED	0x01	/* VDQCR or PDQCR expired*/
+
+
+/* "ERN Message Response" */
+/* "FQ State Change Notification" */
+struct qm_mr_entry {
+	u8 verb;
+	union {
+		struct {
+			u8 dca;
+			u16 seqnum;
+			u8 rc;		/* Rejection Code */
+			u32 orp:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed ern;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+			u8 __reserved1:4;
+			enum qm_dc_portal portal:2;
+#else
+			enum qm_dc_portal portal:3;
+			u8 __reserved1:3;
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+#endif
+			u16 __reserved2;
+			u8 rc;		/* Rejection Code */
+			u32 __reserved3:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed dcern;
+		struct {
+			u8 fqs;		/* Frame Queue Status */
+			u8 __reserved1[6];
+			u32 fqid;	/* 24-bit */
+			u32 contextB;
+			u8 __reserved2[16];
+		} __packed fq;		/* FQRN/FQRNI/FQRL/FQPN */
+	};
+	u8 __reserved2[32];
+} __packed;
+#define QM_MR_VERB_VBIT			0x80
+/*
+ * ERNs originating from direct-connect portals ("dcern") use 0x20 as a verb
+ * which would be invalid as a s/w enqueue verb. A s/w ERN can be distinguished
+ * from the other MR types by noting if the 0x20 bit is unset.
+ */
+#define QM_MR_VERB_TYPE_MASK		0x27
+#define QM_MR_VERB_DC_ERN		0x20
+#define QM_MR_VERB_FQRN			0x21
+#define QM_MR_VERB_FQRNI		0x22
+#define QM_MR_VERB_FQRL			0x23
+#define QM_MR_VERB_FQPN			0x24
+#define QM_MR_RC_MASK			0xf0	/* contains one of; */
+#define QM_MR_RC_CGR_TAILDROP		0x00
+#define QM_MR_RC_WRED			0x10
+#define QM_MR_RC_ERROR			0x20
+#define QM_MR_RC_ORPWINDOW_EARLY	0x30
+#define QM_MR_RC_ORPWINDOW_LATE		0x40
+#define QM_MR_RC_FQ_TAILDROP		0x50
+#define QM_MR_RC_ORPWINDOW_RETIRED	0x60
+#define QM_MR_RC_ORP_ZERO		0x70
+#define QM_MR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+#define QM_MR_DCERN_COLOUR_GREEN	0x00
+#define QM_MR_DCERN_COLOUR_YELLOW	0x01
+#define QM_MR_DCERN_COLOUR_RED		0x02
+#define QM_MR_DCERN_COLOUR_OVERRIDE	0x03
+/*
+ * An identical structure of FQD fields is present in the "Init FQ" command and
+ * the "Query FQ" result, it's suctioned out into the "struct qm_fqd" type.
+ * Within that, the 'stashing' and 'taildrop' pieces are also factored out, the
+ * latter has two inlines to assist with converting to/from the mant+exp
+ * representation.
+ */
+struct qm_fqd_stashing {
+	/* See QM_STASHING_EXCL_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 exclusive;
+	u8 __reserved1:2;
+	/* Numbers of cachelines */
+	u8 annotation_cl:2;
+	u8 data_cl:2;
+	u8 context_cl:2;
+#else
+	u8 context_cl:2;
+	u8 data_cl:2;
+	u8 annotation_cl:2;
+	u8 __reserved1:2;
+	u8 exclusive;
+#endif
+} __packed;
+struct qm_fqd_taildrop {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved1:3;
+	u16 mant:8;
+	u16 exp:5;
+#else
+	u16 exp:5;
+	u16 mant:8;
+	u16 __reserved1:3;
+#endif
+} __packed;
+struct qm_fqd_oac {
+	/* "Overhead Accounting Control", see QM_OAC_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 oac:2; /* "Overhead Accounting Control" */
+	u8 __reserved1:6;
+#else
+	u8 __reserved1:6;
+	u8 oac:2; /* "Overhead Accounting Control" */
+#endif
+	/* Two's-complement value (-128 to +127) */
+	signed char oal; /* "Overhead Accounting Length" */
+} __packed;
+struct qm_fqd {
+	union {
+		u8 orpc;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1:2;
+			u8 orprws:3;
+			u8 oa:1;
+			u8 olws:2;
+#else
+			u8 olws:2;
+			u8 oa:1;
+			u8 orprws:3;
+			u8 __reserved1:2;
+#endif
+		} __packed;
+	};
+	u8 cgid;
+	u16 fq_ctrl;	/* See QM_FQCTRL_<...> */
+	union {
+		u16 dest_wq;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 channel:13; /* qm_channel */
+			u16 wq:3;
+#else
+			u16 wq:3;
+			u16 channel:13; /* qm_channel */
+#endif
+		} __packed dest;
+	};
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved2:1;
+	u16 ics_cred:15;
+#else
+	u16 ics_cred:15;
+	u16 __reserved2:1;
+#endif
+	/*
+	 * For "Initialize Frame Queue" commands, the write-enable mask
+	 * determines whether 'td' or 'oac_init' is observed. For query
+	 * commands, this field is always 'td', and 'oac_query' (below) reflects
+	 * the Overhead ACcounting values.
+	 */
+	union {
+		uint16_t opaque_td;
+		struct qm_fqd_taildrop td;
+		struct qm_fqd_oac oac_init;
+	};
+	u32 context_b;
+	union {
+		/* Treat it as 64-bit opaque */
+		u64 opaque;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 hi;
+			u32 lo;
+#else
+			u32 lo;
+			u32 hi;
+#endif
+		};
+		/* Treat it as s/w portal stashing config */
+		/* see "FQD Context_A field used for [...]" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			struct qm_fqd_stashing stashing;
+			/*
+			 * 48-bit address of FQ context to
+			 * stash, must be cacheline-aligned
+			 */
+			u16 context_hi;
+			u32 context_lo;
+#else
+			u32 context_lo;
+			u16 context_hi;
+			struct qm_fqd_stashing stashing;
+#endif
+		} __packed;
+	} context_a;
+	struct qm_fqd_oac oac_query;
+} __packed;
+/* 64-bit converters for context_hi/lo */
+static inline u64 qm_fqd_stashing_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.context_hi << 32) |
+		(u64)fqd->context_a.context_lo;
+}
+
+static inline dma_addr_t qm_fqd_stashing_addr(const struct qm_fqd *fqd)
+{
+	return (dma_addr_t)qm_fqd_stashing_get64(fqd);
+}
+
+static inline u64 qm_fqd_context_a_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.hi << 32) |
+		(u64)fqd->context_a.lo;
+}
+
+static inline void qm_fqd_stashing_set64(struct qm_fqd *fqd, u64 addr)
+{
+		fqd->context_a.context_hi = upper_32_bits(addr);
+		fqd->context_a.context_lo = lower_32_bits(addr);
+}
+
+static inline void qm_fqd_context_a_set64(struct qm_fqd *fqd, u64 addr)
+{
+	fqd->context_a.hi = upper_32_bits(addr);
+	fqd->context_a.lo = lower_32_bits(addr);
+}
+
+/* convert a threshold value into mant+exp representation */
+static inline int qm_fqd_taildrop_set(struct qm_fqd_taildrop *td, u32 val,
+				      int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	if (val > 0xe0000000)
+		return -ERANGE;
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	td->exp = e;
+	td->mant = val;
+	return 0;
+}
+
+/* and the other direction */
+static inline u32 qm_fqd_taildrop_get(const struct qm_fqd_taildrop *td)
+{
+	return (u32)td->mant << td->exp;
+}
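+
+/*
+ * Worked example (illustrative): qm_fqd_taildrop_set(&td, 0x3000, 0) shifts
+ * 0x3000 right six times to fit the 8-bit mantissa, giving mant = 0xc0 and
+ * exp = 6; qm_fqd_taildrop_get() then reconstructs 0xc0 << 6 == 0x3000
+ * exactly (no rounding loss in this case).
+ */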
+
+
+/* See "Frame Queue Descriptor (FQD)" */
+/* Frame Queue Descriptor (FQD) field 'fq_ctrl' uses these constants */
+#define QM_FQCTRL_MASK		0x07ff	/* 'fq_ctrl' flags; */
+#define QM_FQCTRL_CGE		0x0400	/* Congestion Group Enable */
+#define QM_FQCTRL_TDE		0x0200	/* Tail-Drop Enable */
+#define QM_FQCTRL_ORP		0x0100	/* ORP Enable */
+#define QM_FQCTRL_CTXASTASHING	0x0080	/* Context-A stashing */
+#define QM_FQCTRL_CPCSTASH	0x0040	/* CPC Stash Enable */
+#define QM_FQCTRL_FORCESFDR	0x0008	/* High-priority SFDRs */
+#define QM_FQCTRL_AVOIDBLOCK	0x0004	/* Don't block active */
+#define QM_FQCTRL_HOLDACTIVE	0x0002	/* Hold active in portal */
+#define QM_FQCTRL_PREFERINCACHE	0x0001	/* Aggressively cache FQD */
+#define QM_FQCTRL_LOCKINCACHE	QM_FQCTRL_PREFERINCACHE /* older naming */
+
+/* See "FQD Context_A field used for [...] */
+/* Frame Queue Descriptor (FQD) field 'CONTEXT_A' uses these constants */
+#define QM_STASHING_EXCL_ANNOTATION	0x04
+#define QM_STASHING_EXCL_DATA		0x02
+#define QM_STASHING_EXCL_CTX		0x01
+
+/* See "Intra Class Scheduling" */
+/* FQD field 'OAC' (Overhead ACcounting) uses these constants */
+#define QM_OAC_ICS		0x2 /* Accounting for Intra-Class Scheduling */
+#define QM_OAC_CG		0x1 /* Accounting for Congestion Groups */
+
+/*
+ * This struct represents the 32-bit "WR_PARM_[GYR]" parameters in CGR fields
+ * and associated commands/responses. The WRED parameters are calculated from
+ * these fields as follows;
+ *   MaxTH = MA * (2 ^ Mn)
+ *   Slope = SA / (2 ^ Sn)
+ *    MaxP = 4 * (Pn + 1)
+ */
+struct qm_cgr_wr_parm {
+	union {
+		u32 word;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 MA:8;
+			u32 Mn:5;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Sn:6;
+			u32 Pn:6;
+#else
+			u32 Pn:6;
+			u32 Sn:6;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Mn:5;
+			u32 MA:8;
+#endif
+		} __packed;
+	};
+} __packed;
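+/*
+ * Worked example (illustrative), from the formulas above: MA = 64, Mn = 8
+ * gives MaxTH = 64 * 2^8 = 16384; Pn = 15 gives MaxP = 4 * (15 + 1) = 64.
+ */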
+/*
+ * This struct represents the 13-bit "CS_THRES" CGR field. In the corresponding
+ * management commands, this is padded to a 16-bit structure field, so that's
+ * how we represent it here. The congestion state threshold is calculated from
+ * these fields as follows;
+ *   CS threshold = TA * (2 ^ Tn)
+ */
+struct qm_cgr_cs_thres {
+	union {
+		u16 hword;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved:3;
+			u16 TA:8;
+			u16 Tn:5;
+#else
+			u16 Tn:5;
+			u16 TA:8;
+			u16 __reserved:3;
+#endif
+		} __packed;
+	};
+} __packed;
+/*
+ * This identical structure of CGR fields is present in the "Init/Modify CGR"
+ * commands and the "Query CGR" result. It's suctioned out here into its own
+ * struct.
+ */
+struct __qm_mc_cgr {
+	struct qm_cgr_wr_parm wr_parm_g;
+	struct qm_cgr_wr_parm wr_parm_y;
+	struct qm_cgr_wr_parm wr_parm_r;
+	u8 wr_en_g;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_y;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_r;	/* boolean, use QM_CGR_EN */
+	u8 cscn_en;	/* boolean, use QM_CGR_EN */
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 cscn_targ_upd_ctrl; /* use QM_CSCN_TARG_UDP_ */
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+#else
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+			u16 cscn_targ_upd_ctrl; /* use QM_CSCN_TARG_UDP_ */
+#endif
+		};
+		u32 cscn_targ;	/* use QM_CGR_TARG_* */
+	};
+	u8 cstd_en;	/* boolean, use QM_CGR_EN */
+	u8 cs;		/* boolean, only used in query response */
+	union {
+		struct qm_cgr_cs_thres cs_thres;
+		/* use qm_cgr_cs_thres_set64() */
+		u16 __cs_thres;
+	};
+	u8 mode;	/* QMAN_CGR_MODE_FRAME not supported in rev1.0 */
+} __packed;
+#define QM_CGR_EN		0x01 /* For wr_en_*, cscn_en, cstd_en */
+#define QM_CGR_TARG_UDP_CTRL_WRITE_BIT	0x8000 /* value written to portal bit*/
+#define QM_CGR_TARG_UDP_CTRL_DCP	0x4000 /* 0: SWP, 1: DCP */
+#define QM_CGR_TARG_PORTAL(n)	(0x80000000 >> (n)) /* s/w portal, 0-9 */
+#define QM_CGR_TARG_FMAN0	0x00200000 /* direct-connect portal: fman0 */
+#define QM_CGR_TARG_FMAN1	0x00100000 /*			   : fman1 */
+/* Convert CGR thresholds to/from "cs_thres" format */
+static inline u64 qm_cgr_cs_thres_get64(const struct qm_cgr_cs_thres *th)
+{
+	return (u64)th->TA << th->Tn;
+}
+
+static inline int qm_cgr_cs_thres_set64(struct qm_cgr_cs_thres *th, u64 val,
+					int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	th->Tn = e;
+	th->TA = val;
+	return 0;
+}
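+
+/*
+ * Worked example (illustrative): qm_cgr_cs_thres_set64(&th, 0x5000, 0)
+ * yields TA = 0xa0 and Tn = 7, and qm_cgr_cs_thres_get64() returns
+ * 0xa0 << 7 == 0x5000.
+ */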
+
+/* See 1.5.8.5.1: "Initialize FQ" */
+/* See 1.5.8.5.2: "Query FQ" */
+/* See 1.5.8.5.3: "Query FQ Non-Programmable Fields" */
+/* See 1.5.8.5.4: "Alter FQ State Commands " */
+/* See 1.5.8.6.1: "Initialize/Modify CGR" */
+/* See 1.5.8.6.2: "CGR Test Write" */
+/* See 1.5.8.6.3: "Query CGR" */
+/* See 1.5.8.6.4: "Query Congestion Group State" */
+struct qm_mcc_initfq {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	u32 fqid;	/* 24-bit */
+	u16 count;	/* Initialises 'count+1' FQDs */
+	struct qm_fqd fqd; /* the FQD fields go here */
+	u8 __reserved3[30];
+} __packed;
+struct qm_mcc_queryfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_queryfq_np {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_alterfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2;
+	u8 count;	/* number of consecutive FQID */
+	u8 __reserved3[10];
+	u32 context_b;	/* frame queue context b */
+	u8 __reserved4[40];
+} __packed;
+struct qm_mcc_initcgr {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	struct __qm_mc_cgr cgr;	/* CGR fields */
+	u8 __reserved2[2];
+	u8 cgid;
+	u8 __reserved4[32];
+} __packed;
+struct qm_mcc_cgrtestwrite {
+	u8 __reserved1[2];
+	u8 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u8 __reserved2[23];
+	u8 cgid;
+	u8 __reserved3[32];
+} __packed;
+struct qm_mcc_querycgr {
+	u8 __reserved1[30];
+	u8 cgid;
+	u8 __reserved2[32];
+} __packed;
+struct qm_mcc_querycongestion {
+	u8 __reserved[63];
+} __packed;
+struct qm_mcc_querywq {
+	u8 __reserved;
+	/* select channel if verb != QUERYWQ_DEDICATED */
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved1:3;
+#else
+			u16 __reserved1:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved2[60];
+} __packed;
+
+struct qm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct qm_mcc_initfq initfq;
+		struct qm_mcc_queryfq queryfq;
+		struct qm_mcc_queryfq_np queryfq_np;
+		struct qm_mcc_alterfq alterfq;
+		struct qm_mcc_initcgr initcgr;
+		struct qm_mcc_cgrtestwrite cgrtestwrite;
+		struct qm_mcc_querycgr querycgr;
+		struct qm_mcc_querycongestion querycongestion;
+		struct qm_mcc_querywq querywq;
+	};
+} __packed;
+
+/* INITFQ-specific flags */
+#define QM_INITFQ_WE_MASK		0x01ff	/* 'Write Enable' flags; */
+#define QM_INITFQ_WE_OAC		0x0100
+#define QM_INITFQ_WE_ORPC		0x0080
+#define QM_INITFQ_WE_CGID		0x0040
+#define QM_INITFQ_WE_FQCTRL		0x0020
+#define QM_INITFQ_WE_DESTWQ		0x0010
+#define QM_INITFQ_WE_ICSCRED		0x0008
+#define QM_INITFQ_WE_TDTHRESH		0x0004
+#define QM_INITFQ_WE_CONTEXTB		0x0002
+#define QM_INITFQ_WE_CONTEXTA		0x0001
+/* INITCGR/MODIFYCGR-specific flags */
+#define QM_CGR_WE_MASK			0x07ff	/* 'Write Enable Mask'; */
+#define QM_CGR_WE_WR_PARM_G		0x0400
+#define QM_CGR_WE_WR_PARM_Y		0x0200
+#define QM_CGR_WE_WR_PARM_R		0x0100
+#define QM_CGR_WE_WR_EN_G		0x0080
+#define QM_CGR_WE_WR_EN_Y		0x0040
+#define QM_CGR_WE_WR_EN_R		0x0020
+#define QM_CGR_WE_CSCN_EN		0x0010
+#define QM_CGR_WE_CSCN_TARG		0x0008
+#define QM_CGR_WE_CSTD_EN		0x0004
+#define QM_CGR_WE_CS_THRES		0x0002
+#define QM_CGR_WE_MODE			0x0001
+
+struct qm_mcr_initfq {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_queryfq {
+	u8 __reserved1[8];
+	struct qm_fqd fqd;	/* the FQD fields are here */
+	u8 __reserved2[30];
+} __packed;
+struct qm_mcr_queryfq_np {
+	u8 __reserved1;
+	u8 state;	/* QM_MCR_NP_STATE_*** */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 __reserved2;
+	u32 fqd_link:24;
+	u16 __reserved3:2;
+	u16 odp_seq:14;
+	u16 __reserved4:2;
+	u16 orp_nesn:14;
+	u16 __reserved5:1;
+	u16 orp_ea_hseq:15;
+	u16 __reserved6:1;
+	u16 orp_ea_tseq:15;
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+	u8 __reserved11[5];
+	u8 __reserved12:7;
+	u8 is:1;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#else
+	u8 __reserved2;
+	u32 fqd_link:24;
+
+	u16 odp_seq:14;
+	u16 __reserved3:2;
+
+	u16 orp_nesn:14;
+	u16 __reserved4:2;
+
+	u16 orp_ea_hseq:15;
+	u16 __reserved5:1;
+
+	u16 orp_ea_tseq:15;
+	u16 __reserved6:1;
+
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+
+	u8 __reserved11[5];
+	u8 is:1;
+	u8 __reserved12:7;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#endif
+} __packed;
+
+struct qm_mcr_alterfq {
+	u8 fqs;		/* Frame Queue Status */
+	u8 __reserved1[61];
+} __packed;
+struct qm_mcr_initcgr {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_cgrtestwrite {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	u32 __reserved3:24;
+	u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u32 __reserved4:24;
+	u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+	u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+	u16 lgt;	/* Last Group Tick */
+	u16 wr_prob_g;
+	u16 wr_prob_y;
+	u16 wr_prob_r;
+	u8 __reserved5[8];
+} __packed;
+struct qm_mcr_querycgr {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved3:24;
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 __reserved3:24;
+#endif
+		};
+		u64 i_bcnt;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved4:24;
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 __reserved4:24;
+#endif
+		};
+		u64 a_bcnt;
+	};
+	union {
+		u32 cscn_targ_swp[4];
+		u8 __reserved5[16];
+	};
+} __packed;
+
+struct __qm_mcr_querycongestion {
+	u32 state[8];
+};
+
+struct qm_mcr_querycongestion {
+	u8 __reserved[30];
+	/* Access this struct using QM_MCR_QUERYCONGESTION() */
+	struct __qm_mcr_querycongestion state;
+} __packed;
+struct qm_mcr_querywq {
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved:3;
+#else
+			u16 __reserved:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved[28];
+	u32 wq_len[8];
+} __packed;
+
+struct qm_mc_result {
+	u8 verb;
+	u8 result;
+	union {
+		struct qm_mcr_initfq initfq;
+		struct qm_mcr_queryfq queryfq;
+		struct qm_mcr_queryfq_np queryfq_np;
+		struct qm_mcr_alterfq alterfq;
+		struct qm_mcr_initcgr initcgr;
+		struct qm_mcr_cgrtestwrite cgrtestwrite;
+		struct qm_mcr_querycgr querycgr;
+		struct qm_mcr_querycongestion querycongestion;
+		struct qm_mcr_querywq querywq;
+	};
+} __packed;
+
+#define QM_MCR_VERB_RRID		0x80
+#define QM_MCR_VERB_MASK		QM_MCC_VERB_MASK
+#define QM_MCR_VERB_INITFQ_PARKED	QM_MCC_VERB_INITFQ_PARKED
+#define QM_MCR_VERB_INITFQ_SCHED	QM_MCC_VERB_INITFQ_SCHED
+#define QM_MCR_VERB_QUERYFQ		QM_MCC_VERB_QUERYFQ
+#define QM_MCR_VERB_QUERYFQ_NP		QM_MCC_VERB_QUERYFQ_NP
+#define QM_MCR_VERB_QUERYWQ		QM_MCC_VERB_QUERYWQ
+#define QM_MCR_VERB_QUERYWQ_DEDICATED	QM_MCC_VERB_QUERYWQ_DEDICATED
+#define QM_MCR_VERB_ALTER_SCHED		QM_MCC_VERB_ALTER_SCHED
+#define QM_MCR_VERB_ALTER_FE		QM_MCC_VERB_ALTER_FE
+#define QM_MCR_VERB_ALTER_RETIRE	QM_MCC_VERB_ALTER_RETIRE
+#define QM_MCR_VERB_ALTER_OOS		QM_MCC_VERB_ALTER_OOS
+#define QM_MCR_RESULT_NULL		0x00
+#define QM_MCR_RESULT_OK		0xf0
+#define QM_MCR_RESULT_ERR_FQID		0xf1
+#define QM_MCR_RESULT_ERR_FQSTATE	0xf2
+#define QM_MCR_RESULT_ERR_NOTEMPTY	0xf3	/* OOS fails if FQ is !empty */
+#define QM_MCR_RESULT_ERR_BADCHANNEL	0xf4
+#define QM_MCR_RESULT_PENDING		0xf8
+#define QM_MCR_RESULT_ERR_BADCOMMAND	0xff
+#define QM_MCR_NP_STATE_FE		0x10
+#define QM_MCR_NP_STATE_R		0x08
+#define QM_MCR_NP_STATE_MASK		0x07	/* Reads FQD::STATE; */
+#define QM_MCR_NP_STATE_OOS		0x00
+#define QM_MCR_NP_STATE_RETIRED		0x01
+#define QM_MCR_NP_STATE_TEN_SCHED	0x02
+#define QM_MCR_NP_STATE_TRU_SCHED	0x03
+#define QM_MCR_NP_STATE_PARKED		0x04
+#define QM_MCR_NP_STATE_ACTIVE		0x05
+#define QM_MCR_NP_PTR_MASK		0x07ff	/* for RA[12] & OD[123] */
+#define QM_MCR_NP_RA1_NRA(v)		(((v) >> 14) & 0x3)	/* FQD::NRA */
+#define QM_MCR_NP_RA2_IT(v)		(((v) >> 14) & 0x1)	/* FQD::IT */
+#define QM_MCR_NP_OD1_NOD(v)		(((v) >> 14) & 0x3)	/* FQD::NOD */
+#define QM_MCR_NP_OD3_NPC(v)		(((v) >> 14) & 0x3)	/* FQD::NPC */
+#define QM_MCR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MCR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+/* This extracts the state for congestion group 'n' from a query response.
+ * Eg.
+ *   u8 cgr = [...];
+ *   struct qm_mc_result *res = [...];
+ *   printf("congestion group %d congestion state: %d\n", cgr,
+ *       QM_MCR_QUERYCONGESTION(&res->querycongestion.state, cgr));
+ */
+#define __CGR_WORD(num)		(num >> 5)
+#define __CGR_SHIFT(num)	(num & 0x1f)
+#define __CGR_NUM		(sizeof(struct __qm_mcr_querycongestion) << 3)
+static inline int QM_MCR_QUERYCONGESTION(struct __qm_mcr_querycongestion *p,
+					 u8 cgr)
+{
+	return p->state[__CGR_WORD(cgr)] & (0x80000000 >> __CGR_SHIFT(cgr));
+}
+
+	/* Portal and Frame Queues */
+/* Represents a managed portal */
+struct qman_portal;
+
+/*
+ * This object type represents QMan frame queue descriptors (FQD), it is
+ * cacheline-aligned, and initialised by qman_create_fq(). The structure is
+ * defined further down.
+ */
+struct qman_fq;
+
+/*
+ * This object type represents a QMan congestion group, it is defined further
+ * down.
+ */
+struct qman_cgr;
+
+/*
+ * This enum, and the callback type that returns it, are used when handling
+ * dequeued frames via DQRR. Note that for "null" callbacks registered with the
+ * portal object (for handling dequeues that do not demux because context_b is
+ * NULL), the return value *MUST* be qman_cb_dqrr_consume.
+ */
+enum qman_cb_dqrr_result {
+	/* DQRR entry can be consumed */
+	qman_cb_dqrr_consume,
+	/* Like _consume, but requests parking - FQ must be held-active */
+	qman_cb_dqrr_park,
+	/* Does not consume, for DCA mode only. This allows out-of-order
+	 * consumes by explicit calls to qman_dca() and/or the use of implicit
+	 * DCA via EQCR entries.
+	 */
+	qman_cb_dqrr_defer,
+	/*
+	 * Stop processing without consuming this ring entry. Exits the current
+	 * qman_p_poll_dqrr() or interrupt-handling, as appropriate. If within
+	 * an interrupt handler, the callback would typically call
+	 * qman_irqsource_remove(QM_PIRQ_DQRI) before returning this value,
+	 * otherwise the interrupt will reassert immediately.
+	 */
+	qman_cb_dqrr_stop,
+	/* Like qman_cb_dqrr_stop, but consumes the current entry. */
+	qman_cb_dqrr_consume_stop
+};
+
+typedef enum qman_cb_dqrr_result (*qman_cb_dqrr)(struct qman_portal *qm,
+					struct qman_fq *fq,
+					const struct qm_dqrr_entry *dqrr);
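+
+/*
+ * Sketch of a minimal dequeue callback (illustrative; process_fd() is a
+ * hypothetical consumer of the frame descriptor):
+ *   static enum qman_cb_dqrr_result my_dqrr_cb(struct qman_portal *qm,
+ *					struct qman_fq *fq,
+ *					const struct qm_dqrr_entry *dqrr)
+ *   {
+ *	   process_fd(fq->dpaa_intf, &dqrr->fd);
+ *	   return qman_cb_dqrr_consume;
+ *   }
+ */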
+
+/*
+ * This callback type is used when handling ERNs, FQRNs and FQRLs via MR. They
+ * are always consumed after the callback returns.
+ */
+typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
+				const struct qm_mr_entry *msg);
+
+/* This callback type is used when handling DCP ERNs */
+typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
+				const struct qm_mr_entry *msg);
+/*
+ * s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
+ * held-active + held-suspended are just "sched". Things like "retired" will not
+ * be assumed until it is complete (ie. QMAN_FQ_STATE_CHANGING is set until
+ * then, to indicate it's completing and to gate attempts to retry the retire
+ * command). Note, park commands do not set QMAN_FQ_STATE_CHANGING because it's
+ * technically impossible in the case of enqueue DCAs (which refer to DQRR ring
+ * index rather than the FQ that ring entry corresponds to), so repeated park
+ * commands are allowed (if you're silly enough to try) but won't change FQ
+ * state, and the resulting park notifications move FQs from "sched" to
+ * "parked".
+ */
+enum qman_fq_state {
+	qman_fq_state_oos,
+	qman_fq_state_parked,
+	qman_fq_state_sched,
+	qman_fq_state_retired
+};
+
+
+/*
+ * Frame queue objects (struct qman_fq) are stored within memory passed to
+ * qman_create_fq(), as this allows stashing of caller-provided demux callback
+ * pointers at no extra cost to stashing of (driver-internal) FQ state. If the
+ * caller wishes to add per-FQ state and have it benefit from dequeue-stashing,
+ * they should;
+ *
+ * (a) extend the qman_fq structure with their state; eg.
+ *
+ *     // myfq is allocated and driver_fq callbacks filled in;
+ *     struct my_fq {
+ *	   struct qman_fq base;
+ *	   int an_extra_field;
+ *	   [ ... add other fields to be associated with each FQ ...]
+ *     } *myfq = some_my_fq_allocator();
+ *     struct qman_fq *fq = qman_create_fq(fqid, flags, &myfq->base);
+ *
+ *     // in a dequeue callback, access extra fields from 'fq' via a cast;
+ *     struct my_fq *myfq = (struct my_fq *)fq;
+ *     do_something_with(myfq->an_extra_field);
+ *     [...]
+ *
+ * (b) when and if configuring the FQ for context stashing, specify however
+ *     many cachelines are required to stash 'struct my_fq', to accelerate not
+ *     only the QMan driver but the callback as well.
+ */
+
+struct qman_fq_cb {
+	qman_cb_dqrr dqrr;	/* for dequeued frames */
+	qman_cb_mr ern;		/* for s/w ERNs */
+	qman_cb_mr fqs;		/* frame-queue state changes*/
+};
+
+struct qman_fq {
+	/* Caller of qman_create_fq() provides these demux callbacks */
+	struct qman_fq_cb cb;
+	/*
+	 * These are internal to the driver, don't touch. In particular, they
+	 * may change, be removed, or extended (so you shouldn't rely on
+	 * sizeof(qman_fq) being a constant).
+	 */
+	spinlock_t fqlock;
+	u32 fqid;
+	/* DPDK Interface */
+	void *dpaa_intf;
+
+	volatile unsigned long flags;
+	enum qman_fq_state state;
+	int cgr_groupid;
+	struct rb_node node;
+};
+
+/*
+ * This callback type is used when handling congestion group entry/exit.
+ * 'congested' is non-zero on congestion-entry, and zero on congestion-exit.
+ */
+typedef void (*qman_cb_cgr)(struct qman_portal *qm,
+			    struct qman_cgr *cgr, int congested);
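+
+/*
+ * Sketch (illustrative; my_flow_control() is hypothetical): pause or resume
+ * enqueues as the group enters or exits congestion.
+ *   static void my_cgr_cb(struct qman_portal *qm, struct qman_cgr *cgr,
+ *			    int congested)
+ *   {
+ *	   my_flow_control(cgr->cgrid, congested);
+ *   }
+ */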
+
+struct qman_cgr {
+	/* Set these prior to qman_create_cgr() */
+	u32 cgrid; /* 0..255, but u32 to allow specials like -1, 256, etc.*/
+	qman_cb_cgr cb;
+	/* These are private to the driver */
+	u16 chan; /* portal channel this object is created on */
+	struct list_head node;
+};
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_QMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 4ff48c6..b0d953f 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -47,6 +47,10 @@
 extern "C" {
 #endif
 
+/* Thread-entry/exit hooks; */
+int qman_thread_init(void);
+int qman_thread_finish(void);
+
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
 /* Obtain and free raw (uninitialized) portals */
@@ -81,6 +85,15 @@ int qman_free_raw_portal(struct dpaa_raw_portal *portal);
 int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
+/* Post-process interrupts. NB, the kernel IRQ handler disables the interrupt
+ * line before notifying us, and this post-processing re-enables it once
+ * processing is complete. As such, it is essential to call this before going
+ * into another blocking read/select/poll.
+ */
+void qman_thread_irq(void);
+
+/* Global setup */
+int qman_global_init(void);
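+
+/*
+ * Typical bring-up order (a sketch, not a mandated sequence):
+ *   qman_global_init();     // once per process
+ *   qman_thread_init();     // once per I/O thread; binds its affine portal
+ *   ... fast path; call qman_thread_irq() after each IRQ wakeup ...
+ *   qman_thread_finish();   // before the thread exits
+ */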
 #ifdef __cplusplus
 }
 #endif
-- 
2.9.3


* [PATCH v5 11/40] bus/dpaa: add QMan driver core routines
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (9 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
                           ` (29 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc  |    1 +
 drivers/bus/dpaa/Makefile                 |    2 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |   88 ++
 drivers/bus/dpaa/base/qbman/qman.c        | 2402 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h        |  888 +++++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c |   12 +
 drivers/bus/dpaa/include/fsl_qman.h       |  755 +++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |    1 +
 8 files changed, 4149 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 8316fc9..4d6b046 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -49,3 +49,4 @@ CONFIG_RTE_PKTMBUF_HEADROOM=128
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA_HWDEBUG=n
diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index c9c15f8..5957c15 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -63,7 +63,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
+	base/qbman/dpaa_alloc.c \
 	base/qbman/dpaa_sys.c
 
 # Link Pthread
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
new file mode 100644
index 0000000..690576a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2009-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "dpaa_sys.h"
+#include <process.h>
+#include <fsl_qman.h>
+
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_fqid, result, count, align, partial);
+}
+
+void qman_release_fqid_range(u32 fqid, u32 count)
+{
+	process_release(dpaa_id_fqid, fqid, count);
+}
+
+int qman_reserve_fqid_range(u32 fqid, unsigned int count)
+{
+	return process_reserve(dpaa_id_fqid, fqid, count);
+}
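+
+/*
+ * Usage sketch (illustrative; the negative-on-failure convention is an
+ * assumption here): grab a contiguous, aligned run of FQIDs and return it
+ * when done; partial == 0 asks for all-or-nothing.
+ *   u32 fqid_base;
+ *   if (qman_alloc_fqid_range(&fqid_base, 8, 8, 0) < 0)
+ *	   return -1;	// hypothetical error path
+ *   ...
+ *   qman_release_fqid_range(fqid_base, 8);
+ */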
+
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_qpool, result, count, align, partial);
+}
+
+void qman_release_pool_range(u32 pool, u32 count)
+{
+	process_release(dpaa_id_qpool, pool, count);
+}
+
+int qman_reserve_pool_range(u32 pool, u32 count)
+{
+	return process_reserve(dpaa_id_qpool, pool, count);
+}
+
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_cgrid, result, count, align, partial);
+}
+
+void qman_release_cgrid_range(u32 cgrid, u32 count)
+{
+	process_release(dpaa_id_cgrid, cgrid, count);
+}
+
+int qman_reserve_cgrid_range(u32 cgrid, u32 count)
+{
+	return process_reserve(dpaa_id_cgrid, cgrid, count);
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
new file mode 100644
index 0000000..9b1630b
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -0,0 +1,2402 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define DQRR_MAXFILL	15
+#define EQCR_ITHRESH	4	/* if EQCR congests, interrupt threshold */
+#define IRQNAME		"QMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "QMan portal %d" */
+/* maximum number of DQRR entries to process in qman_poll() */
+#define FSL_QMAN_POLL_LIMIT 8
+
+/* Lock/unlock frame queues, subject to the "LOCKED" flag. This is about
+ * inter-processor locking only. Note, FQLOCK() is always called either under a
+ * local_irq_save() or from interrupt context - hence there's no need for irq
+ * protection (and indeed, attempting to nest irq-protection doesn't work, as
+ * the "irq en/disable" machinery isn't recursive...).
+ */
+#define FQLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_lock(&__fq478->fqlock); \
+	} while (0)
+#define FQUNLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_unlock(&__fq478->fqlock); \
+	} while (0)
+
+static inline void fq_set(struct qman_fq *fq, u32 mask)
+{
+	dpaa_set_bits(mask, &fq->flags);
+}
+
+static inline void fq_clear(struct qman_fq *fq, u32 mask)
+{
+	dpaa_clear_bits(mask, &fq->flags);
+}
+
+static inline int fq_isset(struct qman_fq *fq, u32 mask)
+{
+	return fq->flags & mask;
+}
+
+static inline int fq_isclear(struct qman_fq *fq, u32 mask)
+{
+	return !(fq->flags & mask);
+}
+
+struct qman_portal {
+	struct qm_portal p;
+	/* PORTAL_BITS_*** - dynamic, strictly internal */
+	unsigned long bits;
+	/* interrupt sources processed by portal_isr(), configurable */
+	unsigned long irq_sources;
+	u32 use_eqcr_ci_stashing;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* only 1 volatile dequeue at a time */
+	struct qman_fq *vdqcr_owned;
+	u32 sdqcr;
+	int dqrr_disable_ref;
+	/* A portal-specific handler for DCP ERNs. If this is NULL, the global
+	 * handler is called instead.
+	 */
+	qman_cb_dc_ern cb_dc_ern;
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct qm_portal_config *config;
+	struct dpa_rbtree retire_table;
+	char irqname[MAX_IRQNAME];
+	/* 2-element array. cgrs[0] is mask, cgrs[1] is snapshot. */
+	struct qman_cgrs *cgrs;
+	/* linked-list of CSCN handlers. */
+	struct list_head cgr_cbs;
+	/* list lock */
+	spinlock_t cgr_lock;
+	/* track if memory was allocated by the driver */
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	/* Keep a shadow copy of the DQRR on LE systems as the SW needs to
+	 * do byte swaps of DQRR read-only memory. The first entry must be
+	 * aligned to 2 ** 10 so that DQRR index calculations can be based on
+	 * the shadow-copy address (6 bits for the address shift + 4 bits for
+	 * the DQRR size).
+	 */
+	struct qm_dqrr_entry shadow_dqrr[QM_DQRR_SIZE]
+		    __attribute__((aligned(1024)));
+#endif
+};
+
+/* Global handler for DCP ERNs. Used when the portal receiving the message does
+ * not have a portal-specific handler.
+ */
+static qman_cb_dc_ern cb_dc_ern;
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static u16 affine_channels[NR_CPUS];
+static RTE_DEFINE_PER_LCORE(struct qman_portal, qman_affine_portal);
+
+static inline struct qman_portal *get_affine_portal(void)
+{
+	return &RTE_PER_LCORE(qman_affine_portal);
+}
+
+/* This gives a FQID->FQ lookup to cover the fact that we can't directly demux
+ * retirement notifications (the fact they are sometimes h/w-consumed means that
+ * contextB isn't always a s/w demux - and as we can't know which case it is
+ * when looking at the notification, we have to use the slow lookup for all of
+ * them). NB, it's possible to have multiple FQ objects refer to the same FQID
+ * (though at most one of them should be the consumer), so this table isn't for
+ * all FQs - FQs are added when retirement commands are issued, and removed when
+ * they complete, which also massively reduces the size of this table.
+ */
+IMPLEMENT_DPAA_RBTREE(fqtree, struct qman_fq, node, fqid);
+/*
+ * This is what everything can wait on, even if it migrates to a different cpu
+ * to the one whose affine portal it is waiting on.
+ */
+static DECLARE_WAIT_QUEUE_HEAD(affine_queue);
+
+static inline int table_push_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	int ret = fqtree_push(&p->retire_table, fq);
+
+	if (ret)
+		pr_err("ERROR: double FQ-retirement %d\n", fq->fqid);
+	return ret;
+}
+
+static inline void table_del_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	fqtree_del(&p->retire_table, fq);
+}
+
+static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
+{
+	return fqtree_find(&p->retire_table, fqid);
+}
+
+static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to HW format */
+	fqd->fq_ctrl = cpu_to_be16(fqd->fq_ctrl);
+	fqd->dest_wq = cpu_to_be16(fqd->dest_wq);
+	fqd->ics_cred = cpu_to_be16(fqd->ics_cred);
+	fqd->context_b = cpu_to_be32(fqd->context_b);
+	fqd->context_a.opaque = cpu_to_be64(fqd->context_a.opaque);
+	fqd->opaque_td = cpu_to_be16(fqd->opaque_td);
+}
+
+static inline void hw_fqd_to_cpu(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to CPU format */
+	fqd->fq_ctrl = be16_to_cpu(fqd->fq_ctrl);
+	fqd->dest_wq = be16_to_cpu(fqd->dest_wq);
+	fqd->ics_cred = be16_to_cpu(fqd->ics_cred);
+	fqd->context_b = be32_to_cpu(fqd->context_b);
+	fqd->context_a.opaque = be64_to_cpu(fqd->context_a.opaque);
+}
+
+static inline void cpu_to_hw_fd(struct qm_fd *fd)
+{
+	fd->addr = cpu_to_be40(fd->addr);
+	fd->status = cpu_to_be32(fd->status);
+	fd->opaque = cpu_to_be32(fd->opaque);
+}
+
+static inline void hw_fd_to_cpu(struct qm_fd *fd)
+{
+	fd->addr = be40_to_cpu(fd->addr);
+	fd->status = be32_to_cpu(fd->status);
+	fd->opaque = be32_to_cpu(fd->opaque);
+}
+
+/* In the case that slow- and fast-path handling are both done by qman_poll()
+ * (ie. because there is no interrupt handling), we ought to balance how often
+ * we do the fast-path poll versus the slow-path poll. We'll use two decrementer
+ * sources, so we call the fast poll 'n' times before calling the slow poll
+ * once. The idle decrementer constant is used when the last slow-poll detected
+ * no work to do, and the busy decrementer constant when the last slow-poll had
+ * work to do.
+ */
+#define SLOW_POLL_IDLE   1000
+#define SLOW_POLL_BUSY   10
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is);
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit);
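+
+/*
+ * Balancing sketch (illustrative of how the two constants are meant to be
+ * used by a poll loop; not the literal implementation):
+ *   if (!(--p->slowpoll)) {
+ *	   u32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;
+ *	   u32 active = __poll_portal_slow(p, is);
+ *	   p->slowpoll = active ? SLOW_POLL_BUSY : SLOW_POLL_IDLE;
+ *   } else
+ *	   __poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+ */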
+
+/* Portal interrupt handler */
+static irqreturn_t portal_isr(__always_unused int irq, void *ptr)
+{
+	struct qman_portal *p = ptr;
+	/*
+	 * The CSCI/CCSCI source is cleared inside __poll_portal_slow(), because
+	 * it could race against a Query Congestion State command also given
+	 * as part of the handling of this interrupt source. We mustn't
+	 * clear it a second time in this top-level function.
+	 */
+	u32 clear = QM_DQAVAIL_MASK | (p->irq_sources &
+		~(QM_PIRQ_CSCI | QM_PIRQ_CCSCI));
+	u32 is = qm_isr_status_read(&p->p) & p->irq_sources;
+	/* DQRR-handling if it's interrupt-driven */
+	if (is & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+	/* Handling of anything else that's interrupt-driven */
+	clear |= __poll_portal_slow(p, is);
+	qm_isr_status_clear(&p->p, clear);
+	return IRQ_HANDLED;
+}
+
+/* This inner version is used privately by qman_create_affine_portal(), as well
+ * as by the exported qman_stop_dequeues().
+ */
+static inline void qman_stop_dequeues_ex(struct qman_portal *p)
+{
+	if (!(p->dqrr_disable_ref++))
+		qm_dqrr_set_maxfill(&p->p, 0);
+}
+
+static int drain_mr_fqrni(struct qm_portal *p)
+{
+	const struct qm_mr_entry *msg;
+loop:
+	msg = qm_mr_current(p);
+	if (!msg) {
+		/*
+		 * if MR was full and h/w had other FQRNI entries to produce, we
+		 * need to allow it time to produce those entries once the
+		 * existing entries are consumed. A worst-case situation
+		 * (fully-loaded system) means h/w sequencers may have to do 3-4
+		 * other things before servicing the portal's MR pump, each of
+		 * which (if slow) may take ~50 qman cycles (which is ~200
+		 * processor cycles). So rounding up and then multiplying this
+		 * worst-case estimate by a factor of 10, just to be
+		 * ultra-paranoid, goes as high as 10,000 cycles. NB, we consume
+		 * one entry at a time, so h/w has an opportunity to produce new
+		 * entries well before the ring has been fully consumed, so
+		 * we're being *really* paranoid here.
+		 */
+		u64 now, then = mfatb();
+
+		do {
+			now = mfatb();
+		} while ((then + 10000) > now);
+		msg = qm_mr_current(p);
+		if (!msg)
+			return 0;
+	}
+	if ((msg->verb & QM_MR_VERB_TYPE_MASK) != QM_MR_VERB_FQRNI) {
+		/* We aren't draining anything but FQRNIs */
+		pr_err("Found verb 0x%x in MR\n", msg->verb);
+		return -1;
+	}
+	qm_mr_next(p);
+	qm_mr_cci_consume(p, 1);
+	goto loop;
+}
+
+static inline int qm_eqcr_init(struct qm_portal *portal,
+			       enum qm_eqcr_pmode pmode,
+			       unsigned int eq_stash_thresh,
+			       int eq_stash_prio)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u32 cfg;
+	u8 pi;
+
+	eqcr->ring = portal->addr.ce + QM_CL_EQCR;
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	eqcr->cursor = eqcr->ring + pi;
+	eqcr->vbit = (qm_in(EQCR_PI_CINH) & QM_EQCR_SIZE) ?
+			QM_EQCR_VERB_VBIT : 0;
+	eqcr->available = QM_EQCR_SIZE - 1 -
+			qm_cyc_diff(QM_EQCR_SIZE, eqcr->ci, pi);
+	eqcr->ithresh = qm_in(EQCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 0;
+	eqcr->pmode = pmode;
+#endif
+	cfg = (qm_in(CFG) & 0x00ffffff) |
+		(eq_stash_thresh << 28) | /* QCSP_CFG: EST */
+		(eq_stash_prio << 26)	| /* QCSP_CFG: EP */
+		((pmode & 0x3) << 24);	/* QCSP_CFG::EPM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_eqcr_finish(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 pi, ci;
+	u32 cfg;
+
+	/*
+	 * Disable EQCI stashing because the QMan only
+	 * presents the value it previously stashed to
+	 * maintain coherency.  Setting the stash threshold
+	 * to 1 then 0 ensures that QMan has resynchronized
+	 * its internal copy so that the portal is clean
+	 * when it is reinitialized in the future
+	 */
+	cfg = (qm_in(CFG) & 0x0fffffff) |
+		(1 << 28); /* QCSP_CFG: EST */
+	qm_out(CFG, cfg);
+	cfg &= 0x0fffffff; /* stash threshold = 0 */
+	qm_out(CFG, cfg);
+
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+
+	/* Refresh EQCR CI cache value */
+	qm_cl_invalidate(EQCR_CI);
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (pi != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("losing uncommitted EQCR entries\n");
+	if (ci != eqcr->ci)
+		pr_crit("missing existing EQCR completions\n");
+	if (eqcr->ci != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("EQCR destroyed unquiesced\n");
+}
+
+static inline int qm_dqrr_init(struct qm_portal *portal,
+			__maybe_unused const struct qm_portal_config *config,
+			enum qm_dqrr_dmode dmode,
+			__maybe_unused enum qm_dqrr_pmode pmode,
+			enum qm_dqrr_cmode cmode, u8 max_fill)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u32 cfg;
+
+	/* Make sure the DQRR will be idle when we enable */
+	qm_out(DQRR_SDQCR, 0);
+	qm_out(DQRR_VDQCR, 0);
+	qm_out(DQRR_PDQCR, 0);
+	dqrr->ring = portal->addr.ce + QM_CL_DQRR;
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->cursor = dqrr->ring + dqrr->ci;
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+	dqrr->vbit = (qm_in(DQRR_PI_CINH) & QM_DQRR_SIZE) ?
+			QM_DQRR_VERB_VBIT : 0;
+	dqrr->ithresh = qm_in(DQRR_ITR);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	dqrr->dmode = dmode;
+	dqrr->pmode = pmode;
+	dqrr->cmode = cmode;
+#endif
+	/* Invalidate every ring entry before beginning */
+	for (cfg = 0; cfg < QM_DQRR_SIZE; cfg++)
+		dccivac(qm_cl(dqrr->ring, cfg));
+	cfg = (qm_in(CFG) & 0xff000f00) |
+		((max_fill & (QM_DQRR_SIZE - 1)) << 20) | /* DQRR_MF */
+		((dmode & 1) << 18) |			/* DP */
+		((cmode & 3) << 16) |			/* DCM */
+		0xa0 |					/* RE+SE */
+		(0 ? 0x40 : 0) |			/* Ignore RP */
+		(0 ? 0x10 : 0);				/* Ignore SP */
+	qm_out(CFG, cfg);
+	qm_dqrr_set_maxfill(portal, max_fill);
+	return 0;
+}
+
+static inline void qm_dqrr_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if ((dqrr->cmode != qm_dqrr_cdc) &&
+	    (dqrr->ci != DQRR_PTR2IDX(dqrr->cursor)))
+		pr_crit("Ignoring completed DQRR entries\n");
+#endif
+}
+
+static inline int qm_mr_init(struct qm_portal *portal,
+			     __maybe_unused enum qm_mr_pmode pmode,
+			     enum qm_mr_cmode cmode)
+{
+	register struct qm_mr *mr = &portal->mr;
+	u32 cfg;
+
+	mr->ring = portal->addr.ce + QM_CL_MR;
+	mr->pi = qm_in(MR_PI_CINH) & (QM_MR_SIZE - 1);
+	mr->ci = qm_in(MR_CI_CINH) & (QM_MR_SIZE - 1);
+	mr->cursor = mr->ring + mr->ci;
+	mr->fill = qm_cyc_diff(QM_MR_SIZE, mr->ci, mr->pi);
+	mr->vbit = (qm_in(MR_PI_CINH) & QM_MR_SIZE) ? QM_MR_VERB_VBIT : 0;
+	mr->ithresh = qm_in(MR_ITR);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mr->pmode = pmode;
+	mr->cmode = cmode;
+#endif
+	cfg = (qm_in(CFG) & 0xfffff0ff) |
+		((cmode & 1) << 8);		/* QCSP_CFG:MM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+	const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+	DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+		mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+		if (!mr->pi)
+			mr->vbit ^= QM_MR_VERB_VBIT;
+		mr->fill++;
+		res = MR_INC(res);
+	}
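+	/* Invalidate and prefetch the entry to be checked on the next pass */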
+	dcbit_ro(res);
+}
+
+static inline
+struct qman_portal *qman_create_portal(
+			struct qman_portal *portal,
+			      const struct qm_portal_config *c,
+			      const struct qman_cgrs *cgrs)
+{
+	struct qm_portal *p;
+	char buf[16];
+	int ret;
+	u32 isdr;
+
+	p = &portal->p;
+
+	portal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+	/*
+	 * prep the low-level portal struct with the mapped addresses from the
+	 * config; everything that follows depends on it, and "config" is
+	 * kept mostly for (de)referencing later
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	/*
+	 * If CI-stashing is used, the current defaults use a threshold of 3,
+	 * and stash with higher-than-DQRR priority.
+	 */
+	if (qm_eqcr_init(p, qm_eqcr_pvb,
+			 portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {
+		pr_err("Qman EQCR initialisation failed\n");
+		goto fail_eqcr;
+	}
+	if (qm_dqrr_init(p, c, qm_dqrr_dpush, qm_dqrr_pvb,
+			 qm_dqrr_cdc, DQRR_MAXFILL)) {
+		pr_err("Qman DQRR initialisation failed\n");
+		goto fail_dqrr;
+	}
+	if (qm_mr_init(p, qm_mr_pvb, qm_mr_cci)) {
+		pr_err("Qman MR initialisation failed\n");
+		goto fail_mr;
+	}
+	if (qm_mc_init(p)) {
+		pr_err("Qman MC initialisation failed\n");
+		goto fail_mc;
+	}
+
+	/* static interrupt-gating controls */
+	qm_dqrr_set_ithresh(p, 0);
+	qm_mr_set_ithresh(p, 0);
+	qm_isr_set_iperiod(p, 0);
+	portal->cgrs = kmalloc(2 * sizeof(*cgrs), GFP_KERNEL);
+	if (!portal->cgrs)
+		goto fail_cgrs;
+	/* initial snapshot is no-depletion */
+	qman_cgrs_init(&portal->cgrs[1]);
+	if (cgrs)
+		portal->cgrs[0] = *cgrs;
+	else
+		/* if the given mask is NULL, assume all CGRs can be seen */
+		qman_cgrs_fill(&portal->cgrs[0]);
+	INIT_LIST_HEAD(&portal->cgr_cbs);
+	spin_lock_init(&portal->cgr_lock);
+	portal->bits = 0;
+	portal->slowpoll = 0;
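+	/* Default SDQCR: up to 3 frames per dequeue command, from the
+	 * dedicated channel, with dedicated precedence and priority/QoS type
+	 */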
+	portal->sdqcr = QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |
+			QM_SDQCR_DEDICATED_PRECEDENCE | QM_SDQCR_TYPE_PRIO_QOS |
+			QM_SDQCR_TOKEN_SET(0xab) | QM_SDQCR_CHANNELS_DEDICATED;
+	portal->dqrr_disable_ref = 0;
+	portal->cb_dc_ern = NULL;
+	sprintf(buf, "qportal-%d", c->channel);
+	dpa_rbtree_init(&portal->retire_table);
+	isdr = 0xffffffff;
+	qm_isr_disable_write(p, isdr);
+	portal->irq_sources = 0;
+	qm_isr_enable_write(p, portal->irq_sources);
+	qm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, portal_isr, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need EQCR to be empty before continuing */
+	isdr &= ~QM_PIRQ_EQCI;
+	qm_isr_disable_write(p, isdr);
+	ret = qm_eqcr_get_fill(p);
+	if (ret) {
+		pr_err("Qman EQCR unclean\n");
+		goto fail_eqcr_empty;
+	}
+	isdr &= ~(QM_PIRQ_DQRI | QM_PIRQ_MRI);
+	qm_isr_disable_write(p, isdr);
+	if (qm_dqrr_current(p)) {
+		pr_err("Qman DQRR unclean\n");
+		qm_dqrr_cdc_consume_n(p, 0xffff);
+	}
+	if (qm_mr_current(p) && drain_mr_fqrni(p)) {
+		/* special handling: drain again in case more FQRNIs arrived meanwhile */
+		if (drain_mr_fqrni(p))
+			goto fail_dqrr_mr_empty;
+	}
+	/* Success */
+	portal->config = c;
+	qm_isr_disable_write(p, 0);
+	qm_isr_uninhibit(p);
+	/* Write a sane SDQCR */
+	qm_dqrr_sdqcr_set(p, portal->sdqcr);
+	return portal;
+fail_dqrr_mr_empty:
+fail_eqcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->cgrs);
+	spin_lock_destroy(&portal->cgr_lock);
+fail_cgrs:
+	qm_mc_finish(p);
+fail_mc:
+	qm_mr_finish(p);
+fail_mr:
+	qm_dqrr_finish(p);
+fail_dqrr:
+	qm_eqcr_finish(p);
+fail_eqcr:
+	return NULL;
+}
+
+struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
+					      const struct qman_cgrs *cgrs)
+{
+	struct qman_portal *res;
+	struct qman_portal *portal = get_affine_portal();
+	/* A criterion for calling this function (from qman_driver.c) is that
+	 * we're already affine to the cpu and won't schedule onto another cpu.
+	 */
+
+	res = qman_create_portal(portal, c, cgrs);
+	if (res) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		affine_channels[c->cpu] = c->channel;
+		spin_unlock(&affine_mask_lock);
+	}
+	return res;
+}
+
+static inline
+void qman_destroy_portal(struct qman_portal *qm)
+{
+	const struct qm_portal_config *pcfg;
+
+	/* Stop dequeues on the portal */
+	qm_dqrr_sdqcr_set(&qm->p, 0);
+
+	/*
+	 * NB we do this to "quiesce" EQCR. If we add enqueue-completions or
+	 * something related to QM_PIRQ_EQCI, this may need fixing.
+	 * Also, due to the prefetching model used for CI updates in the enqueue
+	 * path, this update will only invalidate the CI cacheline *after*
+	 * working on it, so we need to call this twice to ensure a full update
+	 * irrespective of where the enqueue processing was at when the teardown
+	 * began.
+	 */
+	qm_eqcr_cce_update(&qm->p);
+	qm_eqcr_cce_update(&qm->p);
+	pcfg = qm->config;
+
+	free_irq(pcfg->irq, qm);
+
+	kfree(qm->cgrs);
+	qm_mc_finish(&qm->p);
+	qm_mr_finish(&qm->p);
+	qm_dqrr_finish(&qm->p);
+	qm_eqcr_finish(&qm->p);
+
+	qm->config = NULL;
+
+	spin_lock_destroy(&qm->cgr_lock);
+}
+
+const struct qm_portal_config *qman_destroy_affine_portal(void)
+{
+	/* We don't want to redirect if we're a slave, use "raw" */
+	struct qman_portal *qm = get_affine_portal();
+	const struct qm_portal_config *pcfg;
+	int cpu;
+
+	pcfg = qm->config;
+	cpu = pcfg->cpu;
+
+	qman_destroy_portal(qm);
+
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int qman_get_portal_index(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+/* Inline helper to reduce nesting in __poll_portal_slow() */
+static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
+				   const struct qm_mr_entry *msg, u8 verb)
+{
+	FQLOCK(fq);
+	switch (verb) {
+	case QM_MR_VERB_FQRL:
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_ORL));
+		fq_clear(fq, QMAN_FQ_STATE_ORL);
+		table_del_fq(p, fq);
+		break;
+	case QM_MR_VERB_FQRN:
+		DPAA_ASSERT((fq->state == qman_fq_state_parked) ||
+			    (fq->state == qman_fq_state_sched));
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_CHANGING));
+		fq_clear(fq, QMAN_FQ_STATE_CHANGING);
+		if (msg->fq.fqs & QM_MR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (msg->fq.fqs & QM_MR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		fq->state = qman_fq_state_retired;
+		break;
+	case QM_MR_VERB_FQPN:
+		DPAA_ASSERT(fq->state == qman_fq_state_sched);
+		DPAA_ASSERT(fq_isclear(fq, QMAN_FQ_STATE_CHANGING));
+		fq->state = qman_fq_state_parked;
+	}
+	FQUNLOCK(fq);
+}
+
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
+{
+	const struct qm_mr_entry *msg;
+	struct qm_mr_entry swapped_msg;
+
+	if (is & QM_PIRQ_CSCI) {
+		struct qman_cgrs rr, c;
+		struct qm_mc_result *mcr;
+		struct qman_cgr *cgr;
+
+		spin_lock(&p->cgr_lock);
+		/*
+		 * The CSCI bit must be cleared _before_ issuing the
+		 * Query Congestion State command, to ensure that a long
+		 * CGR State Change callback cannot miss an intervening
+		 * state change.
+		 */
+		qm_isr_status_clear(&p->p, QM_PIRQ_CSCI);
+		qm_mc_start(&p->p);
+		qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+		while (!(mcr = qm_mc_result(&p->p)))
+			cpu_relax();
+		/* mask out the ones I'm not interested in */
+		qman_cgrs_and(&rr, (const struct qman_cgrs *)
+			&mcr->querycongestion.state, &p->cgrs[0]);
+		/* check previous snapshot for delta, enter/exit congestion */
+		qman_cgrs_xor(&c, &rr, &p->cgrs[1]);
+		/* update snapshot */
+		qman_cgrs_cp(&p->cgrs[1], &rr);
+		/* Invoke callback */
+		list_for_each_entry(cgr, &p->cgr_cbs, node)
+			if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid))
+				cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));
+		spin_unlock(&p->cgr_lock);
+	}
+
+	if (is & QM_PIRQ_EQRI) {
+		qm_eqcr_cce_update(&p->p);
+		qm_eqcr_set_ithresh(&p->p, 0);
+		wake_up(&affine_queue);
+	}
+
+	if (is & QM_PIRQ_MRI) {
+		struct qman_fq *fq;
+		u8 verb, num = 0;
+mr_loop:
+		qm_mr_pvb_update(&p->p);
+		msg = qm_mr_current(&p->p);
+		if (!msg)
+			goto mr_done;
+		swapped_msg = *msg;
+		hw_fd_to_cpu(&swapped_msg.ern.fd);
+		verb = msg->verb & QM_MR_VERB_TYPE_MASK;
+		/* The message is a software ERN iff the 0x20 bit is clear */
+		if (verb & 0x20) {
+			switch (verb) {
+			case QM_MR_VERB_FQRNI:
+				/* nada, we drop FQRNIs on the floor */
+				break;
+			case QM_MR_VERB_FQRN:
+			case QM_MR_VERB_FQRL:
+				/* Lookup in the retirement table */
+				fq = table_find_fq(p,
+						   be32_to_cpu(msg->fq.fqid));
+				DPAA_BUG_ON(!fq);
+				fq_state_change(p, fq, &swapped_msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_FQPN:
+				/* Parked */
+				fq = (void *)(uintptr_t)
+					be32_to_cpu(msg->fq.contextB);
+				fq_state_change(p, fq, msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_DC_ERN:
+				/* DCP ERN */
+				if (p->cb_dc_ern)
+					p->cb_dc_ern(p, msg);
+				else if (cb_dc_ern)
+					cb_dc_ern(p, msg);
+				else {
+					static int warn_once;
+
+					if (!warn_once) {
+						pr_crit("Leaking DCP ERNs!\n");
+						warn_once = 1;
+					}
+				}
+				break;
+			default:
+				pr_crit("Invalid MR verb 0x%02x\n", verb);
+			}
+		} else {
+			/* It's a software ERN */
+			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+			fq->cb.ern(p, fq, &swapped_msg);
+		}
+		num++;
+		qm_mr_next(&p->p);
+		goto mr_loop;
+mr_done:
+		qm_mr_cci_consume(&p->p, num);
+	}
+	/*
+	 * QM_PIRQ_CSCI/CCSCI has already been cleared, as part of its specific
+	 * processing. If that interrupt source has meanwhile been re-asserted,
+	 * we mustn't clear it here (or in the top-level interrupt handler).
+	 */
+	return is & (QM_PIRQ_EQCI | QM_PIRQ_EQRI | QM_PIRQ_MRI);
+}
+
+/*
+ * remove some slowish-path stuff from the "fast path" and make sure it isn't
+ * inlined.
+ */
+static noinline void clear_vdqcr(struct qman_portal *p, struct qman_fq *fq)
+{
+	p->vdqcr_owned = NULL;
+	FQLOCK(fq);
+	fq_clear(fq, QMAN_FQ_STATE_VDQCR);
+	FQUNLOCK(fq);
+	wake_up(&affine_queue);
+}
+
+/*
+ * The only states that would conflict with other things if they ran at the
+ * same time on the same cpu are:
+ *
+ *   (i) setting/clearing vdqcr_owned, and
+ *  (ii) clearing the NE (Not Empty) flag.
+ *
+ * Both are safe, because:
+ *
+ *   (i) this clearing can only occur after qman_set_vdq() has set the
+ *	 vdqcr_owned field (which it does before setting VDQCR), and
+ *	 qman_volatile_dequeue() blocks interrupts and preemption while this is
+ *	 done so that we can't interfere.
+ *  (ii) the NE flag is only cleared after qman_retire_fq() has set it, and as
+ *	 with (i) that API prevents us from interfering until it's safe.
+ *
+ * The good thing is that qman_set_vdq() and qman_retire_fq() run far
+ * less frequently (ie. per-FQ) than __poll_portal_fast() does, so the net
+ * advantage comes from this function not having to "lock" anything at all.
+ *
+ * Note also that the callbacks are invoked at points which are safe against the
+ * above potential conflicts, but that this function itself is not re-entrant
+ * (this is because the function tracks one end of each FIFO in the portal and
+ * we do *not* want to lock that). So the consequence is that it is safe for
+ * user callbacks to call into any QMan API.
+ */
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit)
+{
+	const struct qm_dqrr_entry *dq;
+	struct qman_fq *fq;
+	enum qman_cb_dqrr_result res;
+	unsigned int limit = 0;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+	do {
+		qm_dqrr_pvb_update(&p->p);
+		dq = qm_dqrr_current(&p->p);
+		if (!dq)
+			break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+		/* If running on an LE system, the fields of the dequeue entry
+		 * must be byte-swapped. Because the QMan HW will ignore
+		 * writes, the DQRR entry is copied and the index stored
+		 * within the copy.
+		 */
+		shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+		*shadow = *dq;
+		dq = shadow;
+		shadow->fqid = be32_to_cpu(shadow->fqid);
+		shadow->contextB = be32_to_cpu(shadow->contextB);
+		shadow->seqnum = be16_to_cpu(shadow->seqnum);
+		hw_fd_to_cpu(&shadow->fd);
+#endif
+
+		if (dq->stat & QM_DQRR_STAT_UNSCHEDULED) {
+			/*
+			 * VDQCR: don't trust context_b as the FQ may have
+			 * been configured for h/w consumption and we're
+			 * draining it post-retirement.
+			 */
+			fq = p->vdqcr_owned;
+			/*
+			 * We only set QMAN_FQ_STATE_NE when retiring, so we
+			 * only need to check for clearing it when doing
+			 * volatile dequeues.  It's one less thing to check
+			 * in the critical path (SDQCR).
+			 */
+			if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+				fq_clear(fq, QMAN_FQ_STATE_NE);
+			/*
+			 * This is duplicated from the SDQCR code, but we
+			 * have stuff to do before *and* after this callback,
+			 * and we don't want multiple if()s in the critical
+			 * path (SDQCR).
+			 */
+			res = fq->cb.dqrr(p, fq, dq);
+			if (res == qman_cb_dqrr_stop)
+				break;
+			/* Check for VDQCR completion */
+			if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+				clear_vdqcr(p, fq);
+		} else {
+			/* SDQCR: context_b points to the FQ */
+			fq = (void *)(uintptr_t)dq->contextB;
+			/* Now let the callback do its stuff */
+			res = fq->cb.dqrr(p, fq, dq);
+			/*
+			 * The callback can request that we exit without
+		 * consuming this entry or advancing.
+			 */
+			if (res == qman_cb_dqrr_stop)
+				break;
+		}
+		/* Interpret 'dq' from a driver perspective. */
+		/*
+		 * Parking isn't possible unless HELDACTIVE was set. NB,
+		 * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+		 * check for HELDACTIVE to cover both.
+		 */
+		DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+			    (res != qman_cb_dqrr_park));
+		/* just means "skip it, I'll consume it myself later on" */
+		if (res != qman_cb_dqrr_defer)
+			qm_dqrr_cdc_consume_1ptr(&p->p, dq,
+						 res == qman_cb_dqrr_park);
+		/* Move forward */
+		qm_dqrr_next(&p->p);
+		/*
+		 * Entry processed and consumed, increment our counter.  The
+		 * callback can request that we exit after consuming the
+		 * entry, and we also exit if we reach our processing limit,
+		 * so loop back only if neither of these conditions is met.
+		 */
+	} while (++limit < poll_limit && res != qman_cb_dqrr_consume_stop);
+
+	return limit;
+}
+
+u16 qman_affine_channel(int cpu)
+{
+	if (cpu < 0) {
+		struct qman_portal *portal = get_affine_portal();
+
+		cpu = portal->config->cpu;
+	}
+	DPAA_BUG_ON(!CPU_ISSET(cpu, &affine_mask));
+	return affine_channels[cpu];
+}
+
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
+{
+	struct qman_portal *p = get_affine_portal();
+	const struct qm_dqrr_entry *dq;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+
+	qm_dqrr_pvb_update(&p->p);
+	dq = qm_dqrr_current(&p->p);
+	if (!dq)
+		return NULL;
+
+	if (!(dq->stat & QM_DQRR_STAT_FD_VALID)) {
+		/* Invalid DQRR - put the portal and consume the DQRR.
+		 * Return NULL to user as no packet is seen.
+		 */
+		qman_dqrr_consume(fq, (struct qm_dqrr_entry *)dq);
+		return NULL;
+	}
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+	*shadow = *dq;
+	dq = shadow;
+	shadow->fqid = be32_to_cpu(shadow->fqid);
+	shadow->contextB = be32_to_cpu(shadow->contextB);
+	shadow->seqnum = be16_to_cpu(shadow->seqnum);
+	hw_fd_to_cpu(&shadow->fd);
+#endif
+
+	if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+		fq_clear(fq, QMAN_FQ_STATE_NE);
+
+	return (struct qm_dqrr_entry *)dq;
+}
+
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+		clear_vdqcr(p, fq);
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, 0);
+	qm_dqrr_next(&p->p);
+}
+
+int qman_poll_dqrr(unsigned int limit)
+{
+	struct qman_portal *p = get_affine_portal();
+	int ret;
+
+	ret = __poll_portal_fast(p, limit);
+	return ret;
+}
+
+void qman_poll(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
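+	/* Slow-path interrupt sources are only inspected on a countdown, so
+	 * the common (fast-path only) case stays cheap
+	 */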
+	if ((~p->irq_sources) & QM_PIRQ_SLOW) {
+		if (!(p->slowpoll--)) {
+			u32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;
+			u32 active = __poll_portal_slow(p, is);
+
+			if (active) {
+				qm_isr_status_clear(&p->p, active);
+				p->slowpoll = SLOW_POLL_BUSY;
+			} else
+				p->slowpoll = SLOW_POLL_IDLE;
+		}
+	}
+	if ((~p->irq_sources) & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+}
+
+void qman_stop_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qman_stop_dequeues_ex(p);
+}
+
+void qman_start_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	DPAA_ASSERT(p->dqrr_disable_ref > 0);
+	if (!(--p->dqrr_disable_ref))
+		qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
+}
+
+void qman_static_dequeue_add(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr |= pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+void qman_static_dequeue_del(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr &= ~pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+u32 qman_static_dequeue_get(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->sdqcr;
+}
+
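+/* DCA (Discrete Consumption Acknowledgment): have hardware consume 'dq',
+ * optionally parking the FQ it was dequeued from
+ */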
+void qman_dca(struct qm_dqrr_entry *dq, int park_request)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);
+}
+
+/* Frame queue API */
+static const char *mcr_result_str(u8 result)
+{
+	switch (result) {
+	case QM_MCR_RESULT_NULL:
+		return "QM_MCR_RESULT_NULL";
+	case QM_MCR_RESULT_OK:
+		return "QM_MCR_RESULT_OK";
+	case QM_MCR_RESULT_ERR_FQID:
+		return "QM_MCR_RESULT_ERR_FQID";
+	case QM_MCR_RESULT_ERR_FQSTATE:
+		return "QM_MCR_RESULT_ERR_FQSTATE";
+	case QM_MCR_RESULT_ERR_NOTEMPTY:
+		return "QM_MCR_RESULT_ERR_NOTEMPTY";
+	case QM_MCR_RESULT_PENDING:
+		return "QM_MCR_RESULT_PENDING";
+	case QM_MCR_RESULT_ERR_BADCOMMAND:
+		return "QM_MCR_RESULT_ERR_BADCOMMAND";
+	}
+	return "<unknown MCR result>";
+}
+
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
+{
+	struct qm_fqd fqd;
+	struct qm_mcr_queryfq_np np;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID) {
+		int ret = qman_alloc_fqid(&fqid);
+
+		if (ret)
+			return ret;
+	}
+	spin_lock_init(&fq->fqlock);
+	fq->fqid = fqid;
+	fq->flags = flags;
+	fq->state = qman_fq_state_oos;
+	fq->cgr_groupid = 0;
+
+	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
+		return 0;
+	/* Everything else is AS_IS support */
+	p = get_affine_portal();
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(&fqd);
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ_NP);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ_NP failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	np = mcr->queryfq_np;
+	/* Phew, have queryfq and queryfq_np results, stitch together
+	 * the FQ object from those.
+	 */
+	fq->cgr_groupid = fqd.cgid;
+	switch (np.state & QM_MCR_NP_STATE_MASK) {
+	case QM_MCR_NP_STATE_OOS:
+		break;
+	case QM_MCR_NP_STATE_RETIRED:
+		fq->state = qman_fq_state_retired;
+		if (np.frm_cnt)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		break;
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+		fq->state = qman_fq_state_sched;
+		if (np.state & QM_MCR_NP_STATE_R)
+			fq_set(fq, QMAN_FQ_STATE_CHANGING);
+		break;
+	case QM_MCR_NP_STATE_PARKED:
+		fq->state = qman_fq_state_parked;
+		break;
+	default:
+		DPAA_ASSERT(NULL == "invalid FQ state");
+	}
+	if (fqd.fq_ctrl & QM_FQCTRL_CGE)
+		fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+	return 0;
+err:
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID)
+		qman_release_fqid(fqid);
+	return -EIO;
+}
+
+void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
+{
+	/*
+	 * We don't need to lock the FQ as it is a pre-condition that the FQ be
+	 * quiesced. Instead, run some checks.
+	 */
+	switch (fq->state) {
+	case qman_fq_state_parked:
+		DPAA_ASSERT(flags & QMAN_FQ_DESTROY_PARKED);
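+		/* Fall through: a parked FQ is released the same way as OOS */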
+	case qman_fq_state_oos:
+		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
+			qman_release_fqid(fq->fqid);
+
+		return;
+	default:
+		break;
+	}
+	DPAA_ASSERT(NULL == "qman_free_fq() on unquiesced FQ!");
+}
+
+u32 qman_fq_fqid(struct qman_fq *fq)
+{
+	return fq->fqid;
+}
+
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags)
+{
+	if (state)
+		*state = fq->state;
+	if (flags)
+		*flags = fq->flags;
+}
+
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	u8 res, myverb = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		QM_MCC_VERB_INITFQ_SCHED : QM_MCC_VERB_INITFQ_PARKED;
+
+	if ((fq->state != qman_fq_state_oos) &&
+	    (fq->state != qman_fq_state_parked))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	if (opts && (opts->we_mask & QM_INITFQ_WE_OAC)) {
+		/* And can't be set at the same time as TDTHRESH */
+		if (opts->we_mask & QM_INITFQ_WE_TDTHRESH)
+			return -EINVAL;
+	}
+	/* Issue an INITFQ_[PARKED|SCHED] management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     ((fq->state != qman_fq_state_oos) &&
+				(fq->state != qman_fq_state_parked)))) {
+		FQUNLOCK(fq);
+		return -EBUSY;
+	}
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initfq = *opts;
+	mcc->initfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->initfq.count = 0;
+	/*
+	 * If the FQ does *not* have the TO_DCPORTAL flag, context_b is set as a
+	 * demux pointer. Otherwise, the caller-provided value is allowed to
+	 * stand, don't overwrite it.
+	 */
+	if (fq_isclear(fq, QMAN_FQ_FLAG_TO_DCPORTAL)) {
+		dma_addr_t phys_fq;
+
+		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+		/*
+		 * and the physical address. NB: if the user wasn't trying to
+		 * set CONTEXTA, clear the stashing settings.
+		 */
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_CONTEXTA)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			memset(&mcc->initfq.fqd.context_a, 0,
+			       sizeof(mcc->initfq.fqd.context_a));
+		} else {
+			phys_fq = rte_mem_virt2phy(fq);
+			qm_fqd_stashing_set64(&mcc->initfq.fqd, phys_fq);
+		}
+	}
+	if (flags & QMAN_INITFQ_FLAG_LOCAL) {
+		mcc->initfq.fqd.dest.channel = p->config->channel;
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_DESTWQ)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_DESTWQ;
+			mcc->initfq.fqd.dest.wq = 4;
+		}
+	}
+	mcc->initfq.we_mask = cpu_to_be16(mcc->initfq.we_mask);
+	cpu_to_hw_fqd(&mcc->initfq.fqd);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		FQUNLOCK(fq);
+		return -EIO;
+	}
+	if (opts) {
+		if (opts->we_mask & QM_INITFQ_WE_FQCTRL) {
+			if (opts->fqd.fq_ctrl & QM_FQCTRL_CGE)
+				fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+			else
+				fq_clear(fq, QMAN_FQ_STATE_CGR_EN);
+		}
+		if (opts->we_mask & QM_INITFQ_WE_CGID)
+			fq->cgr_groupid = opts->fqd.cgid;
+	}
+	fq->state = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		qman_fq_state_sched : qman_fq_state_parked;
+	FQUNLOCK(fq);
+	return 0;
+}
+
+int qman_schedule_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_parked)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue a ALTERFQ_SCHED management command */
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state != qman_fq_state_parked))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_sched;
+out:
+	FQUNLOCK(fq);
+
+	return ret;
+}
+
+int qman_retire_fq(struct qman_fq *fq, u32 *flags)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int rval;
+	u8 res;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_sched))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_retired) ||
+				(fq->state == qman_fq_state_oos))) {
+		rval = -EBUSY;
+		goto out;
+	}
+	rval = table_push_fq(p, fq);
+	if (rval)
+		goto out;
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_RETIRE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_RETIRE);
+	res = mcr->result;
+	/*
+	 * "Elegant" would be to treat OK/PENDING the same way; set CHANGING,
+	 * and defer the flags until FQRNI or FQRN (respectively) show up. But
+	 * "Friendly" is to process OK immediately, and not set CHANGING. We do
+	 * friendly, otherwise the caller doesn't necessarily have a fully
+	 * "retired" FQ on return even if the retirement was immediate. However
+	 * this does mean some code duplication between here and
+	 * fq_state_change().
+	 */
+	if (likely(res == QM_MCR_RESULT_OK)) {
+		rval = 0;
+		/* Process 'fq' right away, we'll ignore FQRNI */
+		if (mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		if (flags)
+			*flags = fq->flags;
+		fq->state = qman_fq_state_retired;
+		if (fq->cb.fqs) {
+			/*
+			 * Another issue with supporting "immediate" retirement
+			 * is that we're forced to drop FQRNIs, because by the
+			 * time they're seen it may already be "too late" (the
+			 * fq may have been OOS'd and free()'d already). But if
+			 * the upper layer wants a callback whether it's
+			 * immediate or not, we have to fake a "MR" entry to
+			 * look like an FQRNI...
+			 */
+			struct qm_mr_entry msg;
+
+			msg.verb = QM_MR_VERB_FQRNI;
+			msg.fq.fqs = mcr->alterfq.fqs;
+			msg.fq.fqid = fq->fqid;
+			msg.fq.contextB = (u32)(uintptr_t)fq;
+			fq->cb.fqs(p, fq, &msg);
+		}
+	} else if (res == QM_MCR_RESULT_PENDING) {
+		rval = 1;
+		fq_set(fq, QMAN_FQ_STATE_CHANGING);
+	} else {
+		rval = -EIO;
+		table_del_fq(p, fq);
+	}
+out:
+	FQUNLOCK(fq);
+	return rval;
+}
+
+int qman_oos_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_retired)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_BLOCKOOS)) ||
+		     (fq->state != qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_OOS);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_OOS);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_oos;
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_fq_flow_control(struct qman_fq *fq, int xon)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+	u8 myverb;
+
+	if ((fq->state == qman_fq_state_oos) ||
+	    (fq->state == qman_fq_state_retired) ||
+		(fq->state == qman_fq_state_parked))
+		return -EINVAL;
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue a ALTER_FQXON or ALTER_FQXOFF management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_parked) ||
+			(fq->state == qman_fq_state_oos) ||
+			(fq->state == qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->alterfq.count = 0;
+	myverb = xon ? QM_MCC_VERB_ALTER_FQXON : QM_MCC_VERB_ALTER_FQXOFF;
+
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	*fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(fqd);
+	return 0;
+}
+
+int qman_query_fq_has_pkts(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	int ret = 0;
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		ret = !!mcr->queryfq_np.frm_cnt;
+	return ret;
+}
+
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		*np = mcr->queryfq_np;
+		np->fqd_link = be24_to_cpu(np->fqd_link);
+		np->odp_seq = be16_to_cpu(np->odp_seq);
+		np->orp_nesn = be16_to_cpu(np->orp_nesn);
+		np->orp_ea_hseq  = be16_to_cpu(np->orp_ea_hseq);
+		np->orp_ea_tseq  = be16_to_cpu(np->orp_ea_tseq);
+		np->orp_ea_hptr = be24_to_cpu(np->orp_ea_hptr);
+		np->orp_ea_tptr = be24_to_cpu(np->orp_ea_tptr);
+		np->pfdr_hptr = be24_to_cpu(np->pfdr_hptr);
+		np->pfdr_tptr = be24_to_cpu(np->pfdr_tptr);
+		np->ics_surp = be16_to_cpu(np->ics_surp);
+		np->byte_cnt = be32_to_cpu(np->byte_cnt);
+		np->frm_cnt = be24_to_cpu(np->frm_cnt);
+		np->ra1_sfdr = be16_to_cpu(np->ra1_sfdr);
+		np->ra2_sfdr = be16_to_cpu(np->ra2_sfdr);
+		np->od1_sfdr = be16_to_cpu(np->od1_sfdr);
+		np->od2_sfdr = be16_to_cpu(np->od2_sfdr);
+		np->od3_sfdr = be16_to_cpu(np->od3_sfdr);
+	}
+	if (res == QM_MCR_RESULT_ERR_FQID)
+		return -ERANGE;
+	else if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	return 0;
+}
+
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res, myverb;
+
+	myverb = (query_dedicated) ? QM_MCR_VERB_QUERYWQ_DEDICATED :
+				 QM_MCR_VERB_QUERYWQ;
+	mcc = qm_mc_start(&p->p);
+	mcc->querywq.channel.id = cpu_to_be16(wq->channel.id);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		int i, array_len;
+
+		wq->channel.id = be16_to_cpu(mcr->querywq.channel.id);
+		array_len = ARRAY_SIZE(mcr->querywq.wq_len);
+		for (i = 0; i < array_len; i++)
+			wq->wq_len[i] = be32_to_cpu(mcr->querywq.wq_len[i]);
+	}
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERYWQ failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->cgrtestwrite.cgid = cgr->cgrid;
+	mcc->cgrtestwrite.i_bcnt_hi = (u8)(i_bcnt >> 32);
+	mcc->cgrtestwrite.i_bcnt_lo = (u32)i_bcnt;
+	qm_mc_commit(&p->p, QM_MCC_VERB_CGRTESTWRITE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_CGRTESTWRITE);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*result = mcr->cgrtestwrite;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("CGR TEST WRITE failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->querycgr.cgid = cgr->cgrid;
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCGR);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYCGR);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*cgrd = mcr->querycgr;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CGR failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	cgrd->cgr.wr_parm_g.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_g.word);
+	cgrd->cgr.wr_parm_y.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_y.word);
+	cgrd->cgr.wr_parm_r.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_r.word);
+	cgrd->cgr.cscn_targ =  be32_to_cpu(cgrd->cgr.cscn_targ);
+	cgrd->cgr.__cs_thres = be16_to_cpu(cgrd->cgr.__cs_thres);
+	for (i = 0; i < ARRAY_SIZE(cgrd->cscn_targ_swp); i++)
+		cgrd->cscn_targ_swp[i] =
+			be32_to_cpu(cgrd->cscn_targ_swp[i]);
+	return 0;
+}
+
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion)
+{
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	qm_mc_start(&p->p);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			QM_MCC_VERB_QUERYCONGESTION);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*congestion = mcr->querycongestion;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CONGESTION failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	for (i = 0; i < ARRAY_SIZE(congestion->state.state); i++)
+		congestion->state.state[i] =
+			be32_to_cpu(congestion->state.state[i]);
+	return 0;
+}
+
+int qman_set_vdq(struct qman_fq *fq, u16 num)
+{
+	struct qman_portal *p = get_affine_portal();
+	uint32_t vdqcr;
+	int ret = -EBUSY;
+
+	vdqcr = QM_VDQCR_EXACT;
+	vdqcr |= QM_VDQCR_NUMFRAMES_SET(num);
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired)) {
+		ret = -EINVAL;
+		goto out;
+	}
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+		ret = -EBUSY;
+		goto out;
+	}
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (!ret)
+		qm_dqrr_vdqcr_set(&p->p, vdqcr);
+
+out:
+	return ret;
+}
+
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
+			  u32 vdqcr)
+{
+	struct qman_portal *p;
+	int ret = -EBUSY;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired))
+		return -EINVAL;
+	if (vdqcr & QM_VDQCR_FQID_MASK)
+		return -EINVAL;
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+		return -EBUSY;
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	p = get_affine_portal();
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (ret)
+		return ret;
+
+	/* VDQCR is set */
+	qm_dqrr_vdqcr_set(&p->p, vdqcr);
+	return 0;
+}
+
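+/* If entries are known to be available, a read-only prefetch of the CI
+ * cacheline is enough; otherwise force a fresh read of the cache-enabled CI
+ */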
+static noinline void update_eqcr_ci(struct qman_portal *p, u8 avail)
+{
+	if (avail)
+		qm_eqcr_cce_prefetch(&p->p);
+	else
+		qm_eqcr_cce_update(&p->p);
+}
+
+int qman_eqcr_is_empty(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	u8 avail;
+
+	update_eqcr_ci(p, 0);
+	avail = qm_eqcr_get_fill(&p->p);
+	return (avail == 0);
+}
+
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine)
+{
+	if (affine) {
+		struct qman_portal *p = get_affine_portal();
+
+		p->cb_dc_ern = handler;
+	} else
+		cb_dc_ern = handler;
+}
+
+static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
+					struct qman_fq *fq,
+					const struct qm_fd *fd,
+					u32 flags)
+{
+	struct qm_eqcr_entry *eq;
+	u8 avail;
+
+	if (p->use_eqcr_ci_stashing) {
+		/*
+		 * The stashing case is easy, only update if we need to in
+		 * order to try and liberate ring entries.
+		 */
+		eq = qm_eqcr_start_stash(&p->p);
+	} else {
+		/*
+		 * The non-stashing case is harder, need to prefetch ahead of
+		 * time.
+		 */
+		avail = qm_eqcr_get_avail(&p->p);
+		if (avail < 2)
+			update_eqcr_ci(p, avail);
+		eq = qm_eqcr_start_no_stash(&p->p);
+	}
+
+	if (unlikely(!eq))
+		return NULL;
+
+	if (flags & QMAN_ENQUEUE_FLAG_DCA)
+		eq->dca = QM_EQCR_DCA_ENABLE |
+			((flags & QMAN_ENQUEUE_FLAG_DCA_PARK) ?
+					QM_EQCR_DCA_PARK : 0) |
+			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
+	eq->fqid = cpu_to_be32(fq->fqid);
+	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+	eq->fd = *fd;
+	cpu_to_hw_fd(&eq->fd);
+	return eq;
+}
+
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_CMD_ENQUEUE |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+	/* Factor the below out, it's used from qman_enqueue_orp() too */
+	return 0;
+}
+
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_portal *portal = &p->p;
+
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+
+	u8 i, diff, old_ci, sent = 0;
+
+	/* Update the available entries if no entry is free */
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return 0;
+	}
+
+	/* try to send as many frames as possible */
+	while (eqcr->available && frames_to_send--) {
+		eq->fqid = cpu_to_be32(fq->fqid);
+		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+		eq->fd.opaque_addr = fd->opaque_addr;
+		eq->fd.addr = cpu_to_be40(fd->addr);
+		eq->fd.status = cpu_to_be32(fd->status);
+		eq->fd.opaque = cpu_to_be32(fd->opaque);
+
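+		/* Entries are 64B, so the ring spans QM_EQCR_SIZE << 6 bytes;
+		 * clearing that bit wraps the cursor back to the ring base
+		 */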
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		eqcr->available--;
+		sent++;
+		fd++;
+	}
+	lwsync();
+
+	/* In order for the flushes to complete faster, the verb byte of every
+	 * entry is written first; the cache lines are then flushed together
+	 * below.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		eq->__dont_write_directly__verb =
+			QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
+		prev_eq = eq;
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		if (unlikely((prev_eq + 1) != eq))
+			eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+	}
+
+	/* We need to flush all the lines, but without load/store operations
+	 * between them.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		dcbf(eq);
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+	}
+	/* Update cursor for the next call */
+	eqcr->cursor = eq;
+	return sent;
+}
+
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum)
+{
+	struct qman_portal *p  = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Process ORP-specifics here */
+	if (flags & QMAN_ENQUEUE_FLAG_NLIS)
+		orp_seqnum |= QM_EQCR_SEQNUM_NLIS;
+	else {
+		orp_seqnum &= ~QM_EQCR_SEQNUM_NLIS;
+		if (flags & QMAN_ENQUEUE_FLAG_NESN)
+			orp_seqnum |= QM_EQCR_SEQNUM_NESN;
+		else
+			/* No need to check for QMAN_ENQUEUE_FLAG_HOLE */
+			orp_seqnum &= ~QM_EQCR_SEQNUM_NESN;
+	}
+	eq->seqnum = cpu_to_be16(orp_seqnum);
+	eq->orp = cpu_to_be32(orp->fqid);
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_ORP |
+		((flags & (QMAN_ENQUEUE_FLAG_HOLE | QMAN_ENQUEUE_FLAG_NESN)) ?
+				0 : QM_EQCR_VERB_CMD_ENQUEUE) |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+
+	return 0;
+}
+
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+	u8 verb = QM_MCC_VERB_MODIFYCGR;
+
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initcgr = *opts;
+	mcc->initcgr.we_mask = cpu_to_be16(mcc->initcgr.we_mask);
+	mcc->initcgr.cgr.wr_parm_g.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_g.word);
+	mcc->initcgr.cgr.wr_parm_y.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_y.word);
+	mcc->initcgr.cgr.wr_parm_r.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_r.word);
+	mcc->initcgr.cgr.cscn_targ =  cpu_to_be32(mcc->initcgr.cgr.cscn_targ);
+	mcc->initcgr.cgr.__cs_thres = cpu_to_be16(mcc->initcgr.cgr.__cs_thres);
+
+	mcc->initcgr.cgid = cgr->cgrid;
+	if (flags & QMAN_CGR_FLAG_USE_INIT)
+		verb = QM_MCC_VERB_INITCGR;
+	qm_mc_commit(&p->p, verb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == verb);
+	res = mcr->result;
+	return (res == QM_MCR_RESULT_OK) ? 0 : -EIO;
+}
+
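+/* CSCN_TARG is a bitmask: software portal n maps to bit (31 - n), DCP
+ * portal n to bit (21 - n)
+ */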
+#define TARG_MASK(n) (0x80000000 >> (n->config->channel - \
+					QM_CHANNEL_SWPORTAL0))
+#define TARG_DCP_MASK(n) (0x80000000 >> (10 + n))
+#define PORTAL_IDX(n) (n->config->channel - QM_CHANNEL_SWPORTAL0)
+
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret;
+	struct qman_portal *p;
+
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	p = get_affine_portal();
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	cgr->chan = p->config->channel;
+	spin_lock(&p->cgr_lock);
+
+	/* if no opts specified, just add it to the list */
+	if (!opts)
+		goto add_list;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		goto release_lock;
+	if (opts)
+		local_opts = *opts;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+			QM_CGR_TARG_UDP_CTRL_WRITE_BIT | PORTAL_IDX(p);
+	else
+		/* Overwrite TARG */
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+							TARG_MASK(p);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT, &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		goto release_lock;
+add_list:
+	list_add(&cgr->node, &p->cgr_cbs);
+
+	/* Determine if newly added object requires its callback to be called */
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret) {
+		/* we can't go back, so proceed and return success, but scream
+		 * and wail to the log file.
+		 */
+		pr_crit("CGR HW state partially modified\n");
+		ret = 0;
+		goto release_lock;
+	}
+	if (cgr->cb && cgr_state.cgr.cscn_en && qman_cgrs_get(&p->cgrs[1],
+							      cgr->cgrid))
+		cgr->cb(p, cgr, 1);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+	return ret;
+}
+
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcc_initcgr local_opts;
+	struct qm_mcr_querycgr cgr_state;
+	int ret;
+
+	if ((qman_ip_rev & 0xFF00) < QMAN_REV30) {
+		pr_warn("QMan version doesn't support CSCN => DCP portal\n");
+		return -EINVAL;
+	}
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		return ret;
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	if (opts)
+		local_opts = *opts;
+
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+				QM_CGR_TARG_UDP_CTRL_WRITE_BIT |
+				QM_CGR_TARG_UDP_CTRL_DCP | dcp_portal;
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+					TARG_DCP_MASK(dcp_portal);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT,
+				      &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+
+	return ret;
+}
+
+int qman_delete_cgr(struct qman_cgr *cgr)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret = 0;
+	struct qman_cgr *i;
+	struct qman_portal *p = get_affine_portal();
+
+	if (cgr->chan != p->config->channel) {
+		pr_crit("Attempting to delete cgr from different portal than"
+			" it was create: create 0x%x, delete 0x%x\n",
+			cgr->chan, p->config->channel);
+		ret = -EINVAL;
+		goto put_portal;
+	}
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	spin_lock(&p->cgr_lock);
+	list_del(&cgr->node);
+	/*
+	 * If there are no other CGR objects for this CGRID in the list,
+	 * update CSCN_TARG accordingly
+	 */
+	list_for_each_entry(i, &p->cgr_cbs, node)
+		if ((i->cgrid == cgr->cgrid) && i->cb)
+			goto release_lock;
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)  {
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+		goto release_lock;
+	}
+	/* Overwrite TARG */
+	local_opts.we_mask = QM_CGR_WE_CSCN_TARG;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl = PORTAL_IDX(p);
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ &
+							 ~(TARG_MASK(p));
+	ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+put_portal:
+	return ret;
+}
+
+int qman_shutdown_fq(u32 fqid)
+{
+	struct qman_portal *p;
+	struct qm_portal *low_p;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	u8 state;
+	int orl_empty, fq_empty, drain = 0;
+	u32 result;
+	u32 channel, wq;
+	u16 dest_wq;
+
+	p = get_affine_portal();
+	low_p = &p->p;
+
+	/* Determine the state of the FQID */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	state = mcr->queryfq_np.state & QM_MCR_NP_STATE_MASK;
+	if (state == QM_MCR_NP_STATE_OOS)
+		return 0; /* Already OOS, no need to do any more checks */
+
+	/* Query which channel the FQ is using */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+
+	/* Need to store these since the MCR gets reused */
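+	/* FQD DEST field: channel in the upper 13 bits, work queue in low 3 */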
+	dest_wq = be16_to_cpu(mcr->queryfq.fqd.dest_wq);
+	channel = dest_wq & 0x7;
+	wq = dest_wq >> 3;
+
+	switch (state) {
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+	case QM_MCR_NP_STATE_PARKED:
+		orl_empty = 0;
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_RETIRE);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_RETIRE);
+		result = mcr->result; /* Make a copy as we reuse MCR below */
+
+		if (result == QM_MCR_RESULT_PENDING) {
+			/* Need to wait for the FQRN in the message ring, which
+			 * will only occur once the FQ has been drained. In
+			 * order for the FQ to drain, the portal needs to be
+			 * set to dequeue from the channel the FQ is scheduled
+			 * on.
+			 */
+			const struct qm_mr_entry *msg;
+			const struct qm_dqrr_entry *dqrr = NULL;
+			int found_fqrn = 0;
+			__maybe_unused u16 dequeue_wq = 0;
+
+			/* Flag that we need to drain FQ */
+			drain = 1;
+
+			if (channel >= qm_channel_pool1 &&
+			    channel < (u16)(qm_channel_pool1 + 15)) {
+				/* Pool channel, enable the bit in the portal */
+				dequeue_wq = (channel -
+					      qm_channel_pool1 + 1) << 4 | wq;
+			} else if (channel < qm_channel_pool1) {
+				/* Dedicated channel */
+				dequeue_wq = wq;
+			} else {
+				pr_info("Cannot recover FQ 0x%x,"
+					" it is scheduled on channel 0x%x",
+					fqid, channel);
+				return -EBUSY;
+			}
+			/* Set the sdqcr to drain this channel */
+			if (channel < qm_channel_pool1)
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+					  QM_SDQCR_CHANNELS_DEDICATED);
+			else
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+						  QM_SDQCR_CHANNELS_POOL_CONV
+						  (channel));
+			while (!found_fqrn) {
+				/* Keep draining DQRR while checking the MR */
+				qm_dqrr_pvb_update(low_p);
+				dqrr = qm_dqrr_current(low_p);
+				while (dqrr) {
+					qm_dqrr_cdc_consume_1ptr(
+						low_p, dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+				/* Process message ring too */
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+				while (msg) {
+					if ((msg->verb &
+					     QM_MR_VERB_TYPE_MASK)
+					    == QM_MR_VERB_FQRN)
+						found_fqrn = 1;
+					qm_mr_next(low_p);
+					qm_mr_cci_consume_to_current(low_p);
+					qm_mr_pvb_update(low_p);
+					msg = qm_mr_current(low_p);
+				}
+				cpu_relax();
+			}
+		}
+		if (result != QM_MCR_RESULT_OK &&
+		    result !=  QM_MCR_RESULT_PENDING) {
+			/* error */
+			pr_err("qman_retire_fq failed on FQ 0x%x,"
+			       " result=0x%x\n", fqid, result);
+			return -1;
+		}
+		if (!(mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)) {
+			/* ORL had no entries, no need to wait until the
+			 * ERNs come in.
+			 */
+			orl_empty = 1;
+		}
+		/* Retirement succeeded, check to see if FQ needs
+		 * to be drained.
+		 */
+		if (drain || mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY) {
+			/* FQ is Not Empty, drain using volatile DQ commands */
+			fq_empty = 0;
+			do {
+				const struct qm_dqrr_entry *dqrr = NULL;
+				u32 vdqcr = fqid | QM_VDQCR_NUMFRAMES_SET(3);
+
+				qm_dqrr_vdqcr_set(low_p, vdqcr);
+
+				/* Wait for a dequeue to occur */
+				while (dqrr == NULL) {
+					qm_dqrr_pvb_update(low_p);
+					dqrr = qm_dqrr_current(low_p);
+					if (!dqrr)
+						cpu_relax();
+				}
+				/* Process the dequeues, making sure to
+				 * empty the ring completely.
+				 */
+				while (dqrr) {
+					if (dqrr->fqid == fqid &&
+					    dqrr->stat & QM_DQRR_STAT_FQ_EMPTY)
+						fq_empty = 1;
+					qm_dqrr_cdc_consume_1ptr(low_p,
+								 dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+			} while (fq_empty == 0);
+		}
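+		/* Draining done (or not needed): stop further static dequeues
+		 * on this portal
+		 */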
+		qm_dqrr_sdqcr_set(low_p, 0);
+
+		/* Wait for the ORL to have been completely drained */
+		while (orl_empty == 0) {
+			const struct qm_mr_entry *msg;
+
+			qm_mr_pvb_update(low_p);
+			msg = qm_mr_current(low_p);
+			while (msg) {
+				if ((msg->verb & QM_MR_VERB_TYPE_MASK) ==
+				    QM_MR_VERB_FQRL)
+					orl_empty = 1;
+				qm_mr_next(low_p);
+				qm_mr_cci_consume_to_current(low_p);
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+			}
+			cpu_relax();
+		}
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result != QM_MCR_RESULT_OK) {
+			pr_err(
+			"OOS after drain Failed on FQID 0x%x, result 0x%x\n",
+			       fqid, mcr->result);
+			return -1;
+		}
+		return 0;
+
+	case QM_MCR_NP_STATE_RETIRED:
+		/* Send OOS Command */
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result) {
+			pr_err("OOS Failed on FQID 0x%x\n", fqid);
+			return -1;
+		}
+		return 0;
+
+	}
+	return -1;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h
new file mode 100644
index 0000000..7c645f4
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.h
@@ -0,0 +1,888 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman_priv.h"
+
+/***************************/
+/* Portal register assists */
+/***************************/
+#define QM_REG_EQCR_PI_CINH	0x3000
+#define QM_REG_EQCR_CI_CINH	0x3040
+#define QM_REG_EQCR_ITR		0x3080
+#define QM_REG_DQRR_PI_CINH	0x3100
+#define QM_REG_DQRR_CI_CINH	0x3140
+#define QM_REG_DQRR_ITR		0x3180
+#define QM_REG_DQRR_DCAP	0x31C0
+#define QM_REG_DQRR_SDQCR	0x3200
+#define QM_REG_DQRR_VDQCR	0x3240
+#define QM_REG_DQRR_PDQCR	0x3280
+#define QM_REG_MR_PI_CINH	0x3300
+#define QM_REG_MR_CI_CINH	0x3340
+#define QM_REG_MR_ITR		0x3380
+#define QM_REG_CFG		0x3500
+#define QM_REG_ISR		0x3600
+#define QM_REG_IIR              0x36C0
+#define QM_REG_ITPR		0x3740
+
+/* Cache-enabled register offsets */
+#define QM_CL_EQCR		0x0000
+#define QM_CL_DQRR		0x1000
+#define QM_CL_MR		0x2000
+#define QM_CL_EQCR_PI_CENA	0x3000
+#define QM_CL_EQCR_CI_CENA	0x3040
+#define QM_CL_DQRR_PI_CENA	0x3100
+#define QM_CL_DQRR_CI_CENA	0x3140
+#define QM_CL_MR_PI_CENA	0x3300
+#define QM_CL_MR_CI_CENA	0x3340
+#define QM_CL_CR		0x3800
+#define QM_CL_RR0		0x3900
+#define QM_CL_RR1		0x3940
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrades performance. Hence the use of the __raw_*() interfaces, which
+ * simply ensure that the compiler treats the portal registers as volatile (ie.
+ * non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __qm_in(qm, o)		be32_to_cpu(__raw_readl((qm)->ci  + (o)))
+#define __qm_out(qm, o, val)	__raw_writel((cpu_to_be32(val)), \
+					     (qm)->ci + (o))
+#define qm_in(reg)		__qm_in(&portal->addr, QM_REG_##reg)
+#define qm_out(reg, val)	__qm_out(&portal->addr, QM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __qm_cl_touch_ro(qm, o) dcbt_ro((qm)->ce + (o))
+#define __qm_cl_touch_rw(qm, o) dcbt_rw((qm)->ce + (o))
+#define __qm_cl_in(qm, o)	be32_to_cpu(__raw_readl((qm)->ce + (o)))
+#define __qm_cl_out(qm, o, val) \
+	do { \
+		u32 *__tmpclout = (qm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __qm_cl_invalidate(qm, o) dccivac((qm)->ce + (o))
+#define qm_cl_touch_ro(reg) __qm_cl_touch_ro(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_touch_rw(reg) __qm_cl_touch_rw(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_in(reg)	    __qm_cl_in(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_out(reg, val) __qm_cl_out(&portal->addr, QM_CL_##reg##_CENA, val)
+#define qm_cl_invalidate(reg)\
+	__qm_cl_invalidate(&portal->addr, QM_CL_##reg##_CENA)
+
+/* Cache-enabled ring access */
+#define qm_cl(base, idx)	((void *)base + ((idx) << 6))
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 qm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
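
For intuition, a worked example of the wrap-around case (illustrative only):

	/* An 8-entry ring with the consumer at index 6 and the producer
	 * wrapped around to index 2: first <= last fails, so the helper
	 * returns 8 + 2 - 6 = 4 outstanding entries.
	 */
	u8 n = qm_cyc_diff(QM_EQCR_SIZE, 6, 2);	/* n == 4 */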
+
+/* Portal modes.
+ *   Enum types;
+ *     pmode == production mode
+ *     cmode == consumption mode,
+ *     dmode == h/w dequeue mode.
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate;
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ *     dc == DCA (Discrete Consumption Acknowledgment), DQRR-only
+ *   As for "enum qm_dqrr_dmode", it should be self-explanatory.
+ */
+enum qm_eqcr_pmode {		/* matches QCSP_CFG::EPM */
+	qm_eqcr_pci = 0,	/* PI index, cache-inhibited */
+	qm_eqcr_pce = 1,	/* PI index, cache-enabled */
+	qm_eqcr_pvb = 2		/* valid-bit */
+};
+
+enum qm_dqrr_dmode {		/* matches QCSP_CFG::DP */
+	qm_dqrr_dpush = 0,	/* SDQCR  + VDQCR */
+	qm_dqrr_dpull = 1	/* PDQCR */
+};
+
+enum qm_dqrr_pmode {		/* s/w-only */
+	qm_dqrr_pci,		/* reads DQRR_PI_CINH */
+	qm_dqrr_pce,		/* reads DQRR_PI_CENA */
+	qm_dqrr_pvb		/* reads valid-bit */
+};
+
+enum qm_dqrr_cmode {		/* matches QCSP_CFG::DCM */
+	qm_dqrr_cci = 0,	/* CI index, cache-inhibited */
+	qm_dqrr_cce = 1,	/* CI index, cache-enabled */
+	qm_dqrr_cdc = 2		/* Discrete Consumption Acknowledgment */
+};
+
+enum qm_mr_pmode {		/* s/w-only */
+	qm_mr_pci,		/* reads MR_PI_CINH */
+	qm_mr_pce,		/* reads MR_PI_CENA */
+	qm_mr_pvb		/* reads valid-bit */
+};
+
+enum qm_mr_cmode {		/* matches QCSP_CFG::MM */
+	qm_mr_cci = 0,		/* CI index, cache-inhibited */
+	qm_mr_cce = 1		/* CI index, cache-enabled */
+};
+
+/* ------------------------- */
+/* --- Portal structures --- */
+
+#define QM_EQCR_SIZE		8
+#define QM_DQRR_SIZE		16
+#define QM_MR_SIZE		8
+
+struct qm_eqcr {
+	struct qm_eqcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	u32 busy;
+	enum qm_eqcr_pmode pmode;
+#endif
+};
+
+struct qm_dqrr {
+	const struct qm_dqrr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	enum qm_dqrr_dmode dmode;
+	enum qm_dqrr_pmode pmode;
+	enum qm_dqrr_cmode cmode;
+#endif
+};
+
+struct qm_mr {
+	const struct qm_mr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	enum qm_mr_pmode pmode;
+	enum qm_mr_cmode cmode;
+#endif
+};
+
+struct qm_mc {
+	struct qm_mc_command *cr;
+	struct qm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	enum {
+		/* Can be _mc_start()ed */
+		qman_mc_idle,
+		/* Can be _mc_commit()ed or _mc_abort()ed */
+		qman_mc_user,
+		/* Can only be _mc_retry()ed */
+		qman_mc_hw
+	} state;
+#endif
+};
+
+#define QM_PORTAL_ALIGNMENT ____cacheline_aligned
+
+struct qm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct qm_portal {
+	struct qm_addr addr;
+	struct qm_eqcr eqcr;
+	struct qm_dqrr dqrr;
+	struct qm_mr mr;
+	struct qm_mc mc;
+} QM_PORTAL_ALIGNMENT;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define EQCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_EQCR_SIZE << 6)))
+
+extern dma_addr_t rte_mem_virt2phy(const void *addr);
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 EQCR_PTR2IDX(struct qm_eqcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_EQCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void EQCR_INC(struct qm_eqcr *eqcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates fast
+	 * code with essentially no branching overheads. We increment to the
+	 * next EQCR pointer and handle overflow and 'vbit'.
+	 */
+	struct qm_eqcr_entry *partial = eqcr->cursor + 1;
+
+	eqcr->cursor = EQCR_CARRYCLEAR(partial);
+	if (partial != eqcr->cursor)
+		eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+}
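
The carry-clear trick works because each EQCR entry is 64 bytes, so the
8-entry ring spans 512 bytes and a one-past-the-end pointer differs from the
ring base only in bit 9, i.e. QM_EQCR_SIZE << 6. A sketch (with eqcr being
&portal->eqcr, and relying on the ring base being 512-byte aligned):

	struct qm_eqcr_entry *base = eqcr->ring;
	struct qm_eqcr_entry *past = base + QM_EQCR_SIZE; /* one past end */

	/* Masking off the carry bit folds the overflow back to the base;
	 * this is exactly the case in which EQCR_INC() toggles the vbit.
	 */
	DPAA_ASSERT(EQCR_CARRYCLEAR(past) == (void *)base);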
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_no_stash(struct qm_portal
+								 *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available)
+		return NULL;
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 1;
+#endif
+
+	return eqcr->cursor;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_stash(struct qm_portal
+								*portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return NULL;
+	}
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 1;
+#endif
+	return eqcr->cursor;
+}
+
+static inline void qm_eqcr_abort(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 0;
+#endif
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_pend_and_next(
+					struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+	DPAA_ASSERT(eqcr->pmode != qm_eqcr_pvb);
+	if (eqcr->available == 1)
+		return NULL;
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcr->cursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	return eqcr->cursor;
+}
+
+#define EQCR_COMMIT_CHECKS(eqcr) \
+do { \
+	DPAA_ASSERT(eqcr->busy); \
+	DPAA_ASSERT(eqcr->cursor->orp == (eqcr->cursor->orp & 0x00ffffff)); \
+	DPAA_ASSERT(eqcr->cursor->fqid == (eqcr->cursor->fqid & 0x00ffffff)); \
+} while (0)
+
+static inline void qm_eqcr_pci_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pci);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	hwsync();
+	qm_out(EQCR_PI_CINH, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	qm_cl_invalidate(EQCR_PI);
+	qm_cl_touch_rw(EQCR_PI);
+}
+
+static inline void qm_eqcr_pce_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	lwsync();
+	qm_cl_out(EQCR_PI, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pvb_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eqcursor;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pvb);
+	lwsync();
+	eqcursor = eqcr->cursor;
+	eqcursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 0;
+#endif
+}
+
+static inline u8 qm_eqcr_cci_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline void qm_eqcr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	qm_cl_touch_ro(EQCR_CI);
+}
+
+static inline u8 qm_eqcr_cce_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline u8 qm_eqcr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->ithresh;
+}
+
+static inline void qm_eqcr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	eqcr->ithresh = ithresh;
+	qm_out(EQCR_ITR, ithresh);
+}
+
+static inline u8 qm_eqcr_get_avail(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->available;
+}
+
+static inline u8 qm_eqcr_get_fill(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return QM_EQCR_SIZE - 1 - eqcr->available;
+}
+
+#define DQRR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_DQRR_SIZE << 6)))
+
+static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);
+}
+
+static inline const struct qm_dqrr_entry *DQRR_INC(
+						const struct qm_dqrr_entry *e)
+{
+	return DQRR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_dqrr_set_maxfill(struct qm_portal *portal, u8 mf)
+{
+	qm_out(CFG, (qm_in(CFG) & 0xff0fffff) |
+		((mf & (QM_DQRR_SIZE - 1)) << 20));
+}
+
+static inline const struct qm_dqrr_entry *qm_dqrr_current(
+						struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	if (!dqrr->fill)
+		return NULL;
+	return dqrr->cursor;
+}
+
+static inline u8 qm_dqrr_cursor(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return DQRR_PTR2IDX(dqrr->cursor);
+}
+
+static inline u8 qm_dqrr_next(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->fill);
+	dqrr->cursor = DQRR_INC(dqrr->cursor);
+	return --dqrr->fill;
+}
+
+static inline u8 qm_dqrr_pci_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pci);
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	qm_cl_invalidate(DQRR_PI);
+	qm_cl_touch_ro(DQRR_PI);
+}
+
+static inline u8 qm_dqrr_pce_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	dqrr->pi = qm_cl_in(DQRR_PI) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	const struct qm_dqrr_entry *res = qm_cl(dqrr->ring, dqrr->pi);
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
+		dqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1);
+		if (!dqrr->pi)
+			dqrr->vbit ^= QM_DQRR_VERB_VBIT;
+		dqrr->fill++;
+	}
+}
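
Composed with the cursor helpers, the pvb production update yields a
bare-bones DQRR poll loop along these lines (a sketch for a
struct qm_portal *portal, assuming the cdc consumption mode):

	const struct qm_dqrr_entry *dq;

	qm_dqrr_pvb_update(portal);
	while ((dq = qm_dqrr_current(portal)) != NULL) {
		/* ... demux 'dq' to the owning FQ's callback ... */
		qm_dqrr_cdc_consume_1ptr(portal, dq, 0);	/* no park */
		qm_dqrr_next(portal);
		qm_dqrr_pvb_update(portal);
	}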
+
+static inline void qm_dqrr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_rw(DQRR_CI);
+}
+
+static inline void qm_dqrr_cce_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cdc_consume_1(struct qm_portal *portal, u8 idx,
+					 int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |	/* S */
+		((park ? 1 : 0) << 6) |	/* PK */
+		idx);			/* DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_1ptr(struct qm_portal *portal,
+					    const struct qm_dqrr_entry *dq,
+					int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 idx = DQRR_PTR2IDX(dq);
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* DQRR_DCAP::S */
+		((park ? 1 : 0) << 6) |		/* DQRR_DCAP::PK */
+		idx);				/* DQRR_DCAP::DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_n(struct qm_portal *portal, u16 bitmask)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (1 << 8) |		/* DQRR_DCAP::S */
+		((u32)bitmask << 16));		/* DQRR_DCAP::DCAP_CI */
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+}
+
+static inline u8 qm_dqrr_cdc_cci(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+}
+
+static inline void qm_dqrr_cdc_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_ro(DQRR_CI);
+}
+
+static inline u8 qm_dqrr_cdc_cce(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_cl_in(DQRR_CI) & (QM_DQRR_SIZE - 1);
+}
+
+static inline u8 qm_dqrr_get_ci(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	return dqrr->ci;
+}
+
+static inline void qm_dqrr_park(struct qm_portal *portal, u8 idx)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		(idx & (QM_DQRR_SIZE - 1)));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_park_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		DQRR_PTR2IDX(dqrr->cursor));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_sdqcr_set(struct qm_portal *portal, u32 sdqcr)
+{
+	qm_out(DQRR_SDQCR, sdqcr);
+}
+
+static inline u32 qm_dqrr_sdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_SDQCR);
+}
+
+static inline void qm_dqrr_vdqcr_set(struct qm_portal *portal, u32 vdqcr)
+{
+	qm_out(DQRR_VDQCR, vdqcr);
+}
+
+static inline u32 qm_dqrr_vdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_VDQCR);
+}
+
+static inline u8 qm_dqrr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return dqrr->ithresh;
+}
+
+static inline void qm_dqrr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(DQRR_ITR, ithresh);
+}
+
+static inline u8 qm_dqrr_get_maxfill(struct qm_portal *portal)
+{
+	return (qm_in(CFG) & 0x00f00000) >> 20;
+}
+
+/* -------------- */
+/* --- MR API --- */
+
+#define MR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_MR_SIZE << 6)))
+
+static inline u8 MR_PTR2IDX(const struct qm_mr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_MR_SIZE - 1);
+}
+
+static inline const struct qm_mr_entry *MR_INC(const struct qm_mr_entry *e)
+{
+	return MR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_mr_finish(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (mr->ci != MR_PTR2IDX(mr->cursor))
+		pr_crit("Ignoring completed MR entries\n");
+}
+
+static inline const struct qm_mr_entry *qm_mr_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (!mr->fill)
+		return NULL;
+	return mr->cursor;
+}
+
+static inline u8 qm_mr_next(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->fill);
+	mr->cursor = MR_INC(mr->cursor);
+	return --mr->fill;
+}
+
+static inline void qm_mr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = (mr->ci + num) & (QM_MR_SIZE - 1);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = MR_PTR2IDX(mr->cursor);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(MR_ITR, ithresh);
+}
+
+/* ------------------------------ */
+/* --- Management command API --- */
+static inline int qm_mc_init(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + QM_CL_CR;
+	mc->rr = portal->addr.ce + QM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			QM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? QM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = qman_mc_idle;
+#endif
+	return 0;
+}
+
+static inline void qm_mc_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (mc->state != qman_mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct qm_mc_command *qm_mc_start(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = qman_mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void qm_mc_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = qman_mc_hw;
+#endif
+}
+
+static inline struct qm_mc_result *qm_mc_result(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= QM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = qman_mc_idle;
+#endif
+	return rr;
+}
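
Every management command follows the same start/commit/poll sequence already
visible in the drain-and-OOS path at the top of this patch; a minimal sketch
(with 'portal' and 'fqid' in scope):

	struct qm_mc_command *mcc;
	struct qm_mc_result *mcr;

	mcc = qm_mc_start(portal);		/* claim and zero the slot */
	mcc->alterfq.fqid = cpu_to_be32(fqid);	/* command-specific fields */
	qm_mc_commit(portal, QM_MCC_VERB_ALTER_OOS);
	while ((mcr = qm_mc_result(portal)) == NULL)
		cpu_relax();			/* h/w flips the verb byte */
	if (mcr->result != QM_MCR_RESULT_OK)
		pr_err("MC command failed: 0x%x\n", mcr->result);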
+
+/* Portal interrupt register API */
+static inline void qm_isr_set_iperiod(struct qm_portal *portal, u16 iperiod)
+{
+	qm_out(ITPR, iperiod);
+}
+
+static inline u32 __qm_isr_read(struct qm_portal *portal, enum qm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 6));
+#else
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __qm_isr_write(struct qm_portal *portal, enum qm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 6), val);
+#else
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 2), val);
+#endif
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 80dde20..90fb130 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -66,6 +66,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_qman_portal_init(uint32_t index, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct qman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -116,6 +117,14 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 	pcfg.node = NULL;
 	pcfg.irq = fd;
 
+	portal = qman_create_affine_portal(&pcfg, NULL);
+	if (!portal) {
+		pr_err("Qman portal initialisation failed (%d)\n",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	irq_map.type = dpaa_portal_qman;
 	irq_map.portal_cinh = map.addr.cinh;
 	process_portal_irq_map(fd, &irq_map);
@@ -124,10 +133,13 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 
 static int fsl_qman_portal_finish(void)
 {
+	__maybe_unused const struct qm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = qman_destroy_affine_portal();
+	DPAA_BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 784fe60..85ae13b 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1246,6 +1246,761 @@ struct qman_cgr {
 	struct list_head node;
 };
 
+/* Flags to qman_create_fq() */
+#define QMAN_FQ_FLAG_NO_ENQUEUE      0x00000001 /* can't enqueue */
+#define QMAN_FQ_FLAG_NO_MODIFY       0x00000002 /* can only enqueue */
+#define QMAN_FQ_FLAG_TO_DCPORTAL     0x00000004 /* consumed by CAAM/PME/Fman */
+#define QMAN_FQ_FLAG_LOCKED          0x00000008 /* multi-core locking */
+#define QMAN_FQ_FLAG_AS_IS           0x00000010 /* query h/w state */
+#define QMAN_FQ_FLAG_DYNAMIC_FQID    0x00000020 /* (de)allocate fqid */
+
+/* Flags to qman_destroy_fq() */
+#define QMAN_FQ_DESTROY_PARKED       0x00000001 /* FQ can be parked or OOS */
+
+/* Flags from qman_fq_state() */
+#define QMAN_FQ_STATE_CHANGING       0x80000000 /* 'state' is changing */
+#define QMAN_FQ_STATE_NE             0x40000000 /* retired FQ isn't empty */
+#define QMAN_FQ_STATE_ORL            0x20000000 /* retired FQ has ORL */
+#define QMAN_FQ_STATE_BLOCKOOS       0xe0000000 /* if any are set, no OOS */
+#define QMAN_FQ_STATE_CGR_EN         0x10000000 /* CGR enabled */
+#define QMAN_FQ_STATE_VDQCR          0x08000000 /* being volatile dequeued */
+
+/* Flags to qman_init_fq() */
+#define QMAN_INITFQ_FLAG_SCHED       0x00000001 /* schedule rather than park */
+#define QMAN_INITFQ_FLAG_LOCAL       0x00000004 /* set dest portal */
+
+/* Flags to qman_enqueue(). NB, the strange numbering is to align with hardware,
+ * bit-wise. (NB: the PME API is sensitive to these precise numberings too, so
+ * any change here should be audited in PME.)
+ */
+#define QMAN_ENQUEUE_FLAG_WATCH_CGR  0x00080000 /* watch congestion state */
+#define QMAN_ENQUEUE_FLAG_DCA        0x00008000 /* perform enqueue-DCA */
+#define QMAN_ENQUEUE_FLAG_DCA_PARK   0x00004000 /* If DCA, requests park */
+#define QMAN_ENQUEUE_FLAG_DCA_PTR(p)		/* If DCA, p is DQRR entry */ \
+		(((u32)(p) << 2) & 0x00000f00)
+#define QMAN_ENQUEUE_FLAG_C_GREEN    0x00000000 /* choose one C_*** flag */
+#define QMAN_ENQUEUE_FLAG_C_YELLOW   0x00000008
+#define QMAN_ENQUEUE_FLAG_C_RED      0x00000010
+#define QMAN_ENQUEUE_FLAG_C_OVERRIDE 0x00000018
+/* For the ORP-specific qman_enqueue_orp() variant;
+ * - this flag indicates "Not Last In Sequence", ie. all but the final fragment
+ *   of a frame.
+ */
+#define QMAN_ENQUEUE_FLAG_NLIS       0x01000000
+/* - this flag performs no enqueue but fills in an ORP sequence number that
+ *   would otherwise block it (eg. if a frame has been dropped).
+ */
+#define QMAN_ENQUEUE_FLAG_HOLE       0x02000000
+/* - this flag performs no enqueue but advances NESN to the given sequence
+ *   number.
+ */
+#define QMAN_ENQUEUE_FLAG_NESN       0x04000000
+
+/* Flags to qman_modify_cgr() */
+#define QMAN_CGR_FLAG_USE_INIT       0x00000001
+#define QMAN_CGR_MODE_FRAME          0x00000001
+
+/**
+ * qman_get_portal_index - get portal configuration index
+ */
+int qman_get_portal_index(void);
+
+/**
+ * qman_affine_channel - return the channel ID of a portal
+ * @cpu: the cpu whose affine portal is the subject of the query
+ *
+ * If @cpu is -1, the affine portal for the current CPU will be used. It is a
+ * bug to call this function for any value of @cpu (other than -1) that is not a
+ * member of the cpu mask.
+ */
+u16 qman_affine_channel(int cpu);
+
+/**
+ * qman_set_vdq - Issue a volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @num: Number of Frames requested for volatile dequeue
+ *
+ * This function will issue a volatile dequeue command to the QMAN.
+ */
+int qman_set_vdq(struct qman_fq *fq, u16 num);
+
+/**
+ * qman_dequeue - Get the DQRR entry after volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ *
+ * This function will return the DQRR entry after a volatile dequeue command
+ * is issued. It will keep returning NULL until there is no packet available on
+ * the DQRR.
+ */
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
+
+/**
+ * qman_dqrr_consume - Consume the DQRR entry after volatile dequeue
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @dq: DQRR entry to consume. This is the one which is provided by the
+ *    'qman_dequeue' command.
+ *
+ * This will consume the DQRR entry and make it available for the next
+ * volatile dequeue.
+ */
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq);
+
+/**
+ * qman_poll_dqrr - process DQRR (fast-path) entries
+ * @limit: the maximum number of DQRR entries to process
+ *
+ * Use of this function requires that DQRR processing not be interrupt-driven.
+ * Ie. the value returned by qman_irqsource_get() should not include
+ * QM_PIRQ_DQRI. If the current CPU is sharing a portal hosted on another CPU,
+ * this function will return -EINVAL, otherwise the return value is >=0 and
+ * represents the number of DQRR entries processed.
+ */
+int qman_poll_dqrr(unsigned int limit);
+
+/**
+ * qman_poll
+ *
+ * Dispatcher logic on a cpu can use this to trigger any maintenance of the
+ * affine portal. There are two classes of portal processing in question;
+ * fast-path (which involves demuxing dequeue ring (DQRR) entries and tracking
+ * enqueue ring (EQCR) consumption), and slow-path (which involves EQCR
+ * thresholds, congestion state changes, etc). This function does whatever
+ * processing is not triggered by interrupts.
+ *
+ * Note, if DQRR and some slow-path processing are poll-driven (rather than
+ * interrupt-driven) then this function uses a heuristic to determine how often
+ * to run slow-path processing - as slow-path processing introduces at least a
+ * minimum latency each time it is run, whereas fast-path (DQRR) processing is
+ * close to zero-cost if there is no work to be done.
+ */
+void qman_poll(void);
+
+/**
+ * qman_stop_dequeues - Stop h/w dequeuing to the s/w portal
+ *
+ * Disables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_stop_dequeues(void);
+
+/**
+ * qman_start_dequeues - (Re)start h/w dequeuing to the s/w portal
+ *
+ * Enables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_start_dequeues(void);
+
+/**
+ * qman_static_dequeue_add - Add pool channels to the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Adds a set of pool channels to the portal's static dequeue command register
+ * (SDQCR). The requested pools are limited to those the portal has dequeue
+ * access to.
+ */
+void qman_static_dequeue_add(u32 pools);
+
+/**
+ * qman_static_dequeue_del - Remove pool channels from the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Removes a set of pool channels from the portal's static dequeue command
+ * register (SDQCR). The requested pools are limited to those the portal has
+ * dequeue access to.
+ */
+void qman_static_dequeue_del(u32 pools);
+
+/**
+ * qman_static_dequeue_get - return the portal's current SDQCR
+ *
+ * Returns the portal's current static dequeue command register (SDQCR). The
+ * entire register is returned, so if only the currently-enabled pool channels
+ * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
+ */
+u32 qman_static_dequeue_get(void);
+
+/**
+ * qman_dca - Perform a Discrete Consumption Acknowledgment
+ * @dq: the DQRR entry to be consumed
+ * @park_request: indicates whether the held-active @fq should be parked
+ *
+ * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had
+ * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this
+ * does not take a 'portal' argument but implies the core affine portal from the
+ * cpu that is currently executing the function. For reasons of locking, this
+ * function must be called from the same CPU as that which processed the DQRR
+ * entry in the first place.
+ */
+void qman_dca(struct qm_dqrr_entry *dq, int park_request);
+
+/**
+ * qman_eqcr_is_empty - Determine if portal's EQCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * enqueues for the local portal have been processed by Qman but can't use the
+ * QMAN_ENQUEUE_FLAG_WAIT_SYNC flag to do this from the final qman_enqueue().
+ * The function forces tracking of EQCR consumption (which normally doesn't
+ * happen until enqueue processing needs to find space to put new enqueue
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int qman_eqcr_is_empty(void);
+
+/**
+ * qman_set_dc_ern - Set the handler for DCP enqueue rejection notifications
+ * @handler: callback for processing DCP ERNs
+ * @affine: whether this handler is specific to the locally affine portal
+ *
+ * If a hardware block's interface to Qman (ie. its direct-connect portal, or
+ * DCP) is configured not to receive enqueue rejections, then any enqueues
+ * through that DCP that are rejected will be sent to a given software portal.
+ * If @affine is non-zero, then this handler will only be used for DCP ERNs
+ * received on the portal affine to the current CPU. If multiple CPUs share a
+ * portal and they all call this function, they will be setting the handler for
+ * the same portal! If @affine is zero, then this handler will be global to all
+ * portals handled by this instance of the driver. Only those portals that do
+ * not have their own affine handler will use the global handler.
+ */
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
+
+	/* FQ management */
+	/* ------------- */
+/**
+ * qman_create_fq - Allocates a FQ
+ * @fqid: the index of the FQD to encapsulate, must be "Out of Service"
+ * @flags: bit-mask of QMAN_FQ_FLAG_*** options
+ * @fq: memory for storing the 'fq', with callbacks filled in
+ *
+ * Creates a frame queue object for the given @fqid, unless the
+ * QMAN_FQ_FLAG_DYNAMIC_FQID flag is set in @flags, in which case a FQID is
+ * dynamically allocated (or the function fails if none are available). Once
+ * created, the caller should not touch the memory at 'fq' except as extended to
+ * adjacent memory for user-defined fields (see the definition of "struct
+ * qman_fq" for more info). NO_MODIFY is only intended for enqueuing to
+ * pre-existing frame-queues that aren't to be otherwise interfered with, it
+ * prevents all other modifications to the frame queue. The TO_DCPORTAL flag
+ * causes the driver to honour any contextB modifications requested in the
+ * qm_init_fq() API, as this indicates the frame queue will be consumed by a
+ * direct-connect portal (PME, CAAM, or Fman). When frame queues are consumed by
+ * software portals, the contextB field is controlled by the driver and can't be
+ * modified by the caller. If the AS_IS flag is specified, management commands
+ * will be used on portal @p to query state for frame queue @fqid and construct
+ * a frame queue object based on that, rather than assuming/requiring that it be
+ * Out of Service.
+ */
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
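
As a usage sketch (the callback name is hypothetical, and the qman_fq_cb
layout is the one defined earlier in this header), a consumer would typically
create, initialise and schedule a dynamically-allocated FQ like so:

	struct qman_fq fq = {
		.cb.dqrr = my_dqrr_cb,	/* hypothetical dequeue handler */
	};
	struct qm_mcc_initfq opts;

	memset(&opts, 0, sizeof(opts));
	if (qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, &fq))
		return;				/* no FQID available */
	if (qman_init_fq(&fq, QMAN_INITFQ_FLAG_SCHED, &opts))
		qman_destroy_fq(&fq, 0);	/* give the memory back */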
+
+/**
+ * qman_destroy_fq - Deallocates a FQ
+ * @fq: the frame queue object to release
+ * @flags: bit-mask of QMAN_FQ_FREE_*** options
+ *
+ * The memory for this frame queue object ('fq' provided in qman_create_fq()) is
+ * not deallocated but the caller regains ownership, to do with as desired. The
+ * FQ must be in the 'out-of-service' state unless the QMAN_FQ_FREE_PARKED flag
+ * is specified, in which case it may also be in the 'parked' state.
+ */
+void qman_destroy_fq(struct qman_fq *fq, u32 flags);
+
+/**
+ * qman_fq_fqid - Queries the frame queue ID of a FQ object
+ * @fq: the frame queue object to query
+ */
+u32 qman_fq_fqid(struct qman_fq *fq);
+
+/**
+ * qman_fq_state - Queries the state of a FQ object
+ * @fq: the frame queue object to query
+ * @state: pointer to state enum to return the FQ scheduling state
+ * @flags: pointer to state flags to receive QMAN_FQ_STATE_*** bitmask
+ *
+ * Queries the state of the FQ object, without performing any h/w commands.
+ * This captures the state, as seen by the driver, at the time the function
+ * executes.
+ */
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
+
+/**
+ * qman_init_fq - Initialises FQ fields, leaves the FQ "parked" or "scheduled"
+ * @fq: the frame queue object to modify, must be 'parked' or new.
+ * @flags: bit-mask of QMAN_INITFQ_FLAG_*** options
+ * @opts: the FQ-modification settings, as defined in the low-level API
+ *
+ * The @opts parameter comes from the low-level portal API. Select
+ * QMAN_INITFQ_FLAG_SCHED in @flags to cause the frame queue to be scheduled
+ * rather than parked. NB, @opts can be NULL.
+ *
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver;
+ * 1. the 'count' and 'fqid' fields are always ignored (this operation only
+ * affects one frame queue: @fq).
+ * 2. the QM_INITFQ_WE_CONTEXTB option of the 'we_mask' field and the associated
+ * 'fqd' structure's 'context_b' field are sometimes overwritten;
+ *   - if @fq was not created with QMAN_FQ_FLAG_TO_DCPORTAL, then context_b is
+ *     initialised to a value used by the driver for demux.
+ *   - if context_b is initialised for demux, so is context_a in case stashing
+ *     is requested (see item 4).
+ * (So caller control of context_b is only possible for TO_DCPORTAL frame queue
+ * objects.)
+ * 3. if @flags contains QMAN_INITFQ_FLAG_LOCAL, the 'fqd' structure's
+ * 'dest::channel' field will be overwritten to match the portal used to issue
+ * the command. If the WE_DESTWQ write-enable bit had already been set by the
+ * caller, the channel workqueue will be left as-is, otherwise the write-enable
+ * bit is set and the workqueue is set to a default of 4. If the "LOCAL" flag
+ * isn't set, the destination channel/workqueue fields and the write-enable bit
+ * are left as-is.
+ * 4. if the driver overwrites context_a/b for demux, then if
+ * QM_INITFQ_WE_CONTEXTA is set, the driver will only overwrite
+ * context_a.address fields and will leave the stashing fields provided by the
+ * user alone, otherwise it will zero out the context_a.stashing fields.
+ */
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
+
+/**
+ * qman_schedule_fq - Schedules a FQ
+ * @fq: the frame queue object to schedule, must be 'parked'
+ *
+ * Schedules the frame queue, which must be Parked, which takes it to
+ * Tentatively-Scheduled or Truly-Scheduled depending on its fill-level.
+ */
+int qman_schedule_fq(struct qman_fq *fq);
+
+/**
+ * qman_retire_fq - Retires a FQ
+ * @fq: the frame queue object to retire
+ * @flags: FQ flags (as per qman_fq_state) if retirement completes immediately
+ *
+ * Retires the frame queue. This returns zero if it succeeds immediately, +1 if
+ * the retirement was started asynchronously, otherwise it returns negative for
+ * failure. When this function returns zero, @flags is set to indicate whether
+ * the retired FQ is empty and/or whether it has any ORL fragments (to show up
+ * as ERNs). Otherwise the corresponding flags will be known when a subsequent
+ * FQRN message shows up on the portal's message ring.
+ *
+ * NB, if the retirement is asynchronous (the FQ was in the Truly Scheduled or
+ * Active state), the completion will be via the message ring as a FQRN - but
+ * the corresponding callback may occur before this function returns!! Ie. the
+ * caller should be prepared to accept the callback as the function is called,
+ * not only once it has returned.
+ */
+int qman_retire_fq(struct qman_fq *fq, u32 *flags);
+
+/**
+ * qman_oos_fq - Puts a FQ "out of service"
+ * @fq: the frame queue object to be put out-of-service, must be 'retired'
+ *
+ * The frame queue must be retired and empty, and if any order restoration list
+ * was released as ERNs at the time of retirement, they must all be consumed.
+ */
+int qman_oos_fq(struct qman_fq *fq);
+
+/**
+ * qman_fq_flow_control - Set the XON/XOFF state of a FQ
+ * @fq: the frame queue object to be set to XON/XOFF state; must not be in the
+ * 'oos', 'retired' or 'parked' state
+ * @xon: boolean to set fq in XON or XOFF state
+ *
+ * The frame queue should be in the Tentatively Scheduled or Truly Scheduled
+ * state, otherwise the IFSI interrupt will be asserted.
+ */
+int qman_fq_flow_control(struct qman_fq *fq, int xon);
+
+/**
+ * qman_query_fq - Queries FQD fields (via h/w query command)
+ * @fq: the frame queue object to be queried
+ * @fqd: storage for the queried FQD fields
+ */
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd);
+
+/**
+ * qman_query_fq_has_pkts - Queries non-programmable FQD fields and returns '1'
+ * if packets are in the frame queue. If there are no packets on the frame
+ * queue, '0' is returned.
+ * @fq: the frame queue object to be queried
+ */
+int qman_query_fq_has_pkts(struct qman_fq *fq);
+
+/**
+ * qman_query_fq_np - Queries non-programmable FQD fields
+ * @fq: the frame queue object to be queried
+ * @np: storage for the queried FQD fields
+ */
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
+
+/**
+ * qman_query_wq - Queries work queue lengths
+ * @query_dedicated: If non-zero, query length of WQs in the channel dedicated
+ *		to this software portal. Otherwise, query length of WQs in a
+ *		channel specified in wq.
+ * @wq: storage for the queried WQ lengths. Also specifies the channel
+ *	to query if query_dedicated is zero.
+ */
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
+
+/**
+ * qman_volatile_dequeue - Issue a volatile dequeue command
+ * @fq: the frame queue object to dequeue from
+ * @flags: a bit-mask of QMAN_VOLATILE_FLAG_*** options
+ * @vdqcr: bit mask of QM_VDQCR_*** options, as per qm_dqrr_vdqcr_set()
+ *
+ * Attempts to lock access to the portal's VDQCR volatile dequeue functionality.
+ * The function will block and sleep if QMAN_VOLATILE_FLAG_WAIT is specified and
+ * the VDQCR is already in use, otherwise returns non-zero for failure. If
+ * QMAN_VOLATILE_FLAG_FINISH is specified, the function will only return once
+ * the VDQCR command has finished executing (ie. once the callback for the last
+ * DQRR entry resulting from the VDQCR command has been called). If not using
+ * the FINISH flag, completion can be determined either by detecting the
+ * presence of the QM_DQRR_STAT_UNSCHEDULED and QM_DQRR_STAT_DQCR_EXPIRED bits
+ * in the "stat" field of the "struct qm_dqrr_entry" passed to the FQ's dequeue
+ * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
+ * "flags" retrieved from qman_fq_state().
+ */
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
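
Without the FINISH flag, completion of the volatile dequeue can be polled via
the FQ state bits, roughly as follows (sketch, for an 'fq' on which the
command was issued):

	enum qman_fq_state state;
	u32 flags;

	do {
		qman_poll_dqrr(16);	/* run the fast-path demux */
		qman_fq_state(fq, &state, &flags);
	} while (flags & QMAN_FQ_STATE_VDQCR);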
+
+/**
+ * qman_enqueue - Enqueue a frame to a frame queue
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ *
+ * Fills an entry in the EQCR of portal @qm to enqueue the frame described by
+ * @fd. The descriptor details are copied from @fd to the EQCR entry, the 'pid'
+ * field is ignored. The return value is non-zero on error, such as ring full
+ * (and FLAG_WAIT not specified), congestion avoidance (FLAG_WATCH_CGR
+ * specified), etc. If the ring is full and FLAG_WAIT is specified, this
+ * function will block. If FLAG_INTERRUPT is set, the EQCI bit of the portal
+ * interrupt will assert when Qman consumes the EQCR entry (subject to "status
+ * disable", "enable", and "inhibit" registers). If FLAG_DCA is set, Qman will
+ * perform an implied "discrete consumption acknowledgment" on the dequeue
+ * ring's (DQRR) entry, at the ring index specified by the FLAG_DCA_IDX(x)
+ * macro. (As an alternative to issuing explicit DCA actions on DQRR entries,
+ * this implicit DCA can delay the release of a "held active" frame queue
+ * corresponding to a DQRR entry until Qman consumes the EQCR entry - providing
+ * order-preservation semantics in packet-forwarding scenarios.) If FLAG_DCA is
+ * set, then FLAG_DCA_PARK can also be set to imply that the DQRR consumption
+ * acknowledgment should "park request" the "held active" frame queue. Ie.
+ * when the portal eventually releases that frame queue, it will be left in the
+ * Parked state rather than Tentatively Scheduled or Truly Scheduled. If the
+ * portal is watching congestion groups, the QMAN_ENQUEUE_FLAG_WATCH_CGR flag
+ * is requested, and the FQ is a member of a congestion group, then this
+ * function returns -EAGAIN if the congestion group is currently congested.
+ * Note, this does not eliminate ERNs, as the async interface means we can be
+ * sending enqueue commands to an un-congested FQ that becomes congested before
+ * the enqueue commands are processed, but it does minimise needless thrashing
+ * of an already busy hardware resource by throttling many of the to-be-dropped
+ * enqueues "at the source".
+ */
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
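
Since the call is non-blocking unless FLAG_WAIT is used, callers that expect
only ring-full failures typically spin on it (sketch; a CGR-watching caller
would also have to handle -EAGAIN):

	while (qman_enqueue(fq, fd, 0) != 0)
		cpu_relax();	/* EQCR full; retry until a slot frees */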
+
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send);
+
+typedef int (*qman_cb_precommit) (void *arg);
+
+/**
+ * qman_enqueue_orp - Enqueue a frame to a frame queue using an ORP
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ * @orp: the frame queue object used as an order restoration point.
+ * @orp_seqnum: the sequence number of this frame in the order restoration path
+ *
+ * Similar to qman_enqueue(), but with the addition of an Order Restoration
+ * Point (@orp) and corresponding sequence number (@orp_seqnum) for this
+ * enqueue operation to employ order restoration. Each frame queue object acts
+ * as an Order Definition Point (ODP) by providing each frame dequeued from it
+ * with an incrementing sequence number, this value is generally ignored unless
+ * that sequence of dequeued frames will need order restoration later. Each
+ * frame queue object also encapsulates an Order Restoration Point (ORP), which
+ * is a re-assembly context for re-ordering frames relative to their sequence
+ * numbers as they are enqueued. The ORP does not have to be within the frame
+ * queue that receives the enqueued frame, in fact it is usually the frame
+ * queue from which the frames were originally dequeued. For the purposes of
+ * order restoration, multiple frames (or "fragments") can be enqueued for a
+ * single sequence number by setting the QMAN_ENQUEUE_FLAG_NLIS flag for all
+ * enqueues except the final fragment of a given sequence number. Ordering
+ * between sequence numbers is guaranteed, even if fragments of different
+ * sequence numbers are interlaced with one another. Fragments of the same
+ * sequence number will retain the order in which they are enqueued. If no
+ * enqueue is to be performed, QMAN_ENQUEUE_FLAG_HOLE indicates that the given
+ * sequence number is to be "skipped" by the ORP logic (eg. if a frame has been
+ * dropped from a sequence), or QMAN_ENQUEUE_FLAG_NESN indicates that the given
+ * sequence number should become the ORP's "Next Expected Sequence Number".
+ *
+ * Side note: a frame queue object can be used purely as an ORP, without
+ * carrying any frames at all. Care should be taken not to deallocate a frame
+ * queue object that is being actively used as an ORP, as a future allocation
+ * of the frame queue object may start using the internal ORP before the
+ * previous use has finished.
+ */
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum);
+
+/**
+ * qman_alloc_fqid_range - Allocate a contiguous range of FQIDs
+ * @result: is set by the API to the base FQID of the allocated range
+ * @count: the number of FQIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count FQIDs
+ *
+ * Returns the number of frame queues allocated, or a negative error code. If
+ * @partial is non-zero, the allocation request may return a smaller range of
+ * FQs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_fqid(u32 *result)
+{
+	int ret = qman_alloc_fqid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
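
The allocators pair with the corresponding release calls declared below;
e.g. (sketch):

	u32 fqid;

	if (qman_alloc_fqid(&fqid) == 0) {
		/* ... create/init an FQ on 'fqid' ... */
		qman_release_fqid(fqid);
	}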
+
+/**
+ * qman_release_fqid_range - Release the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to deallocate
+ * @count: the number of FQIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of FQIDs
+ * that it can subsequently allocate from.
+ */
+void qman_release_fqid_range(u32 fqid, unsigned int count);
+static inline void qman_release_fqid(u32 fqid)
+{
+	qman_release_fqid_range(fqid, 1);
+}
+
+void qman_seed_fqid_range(u32 fqid, unsigned int count);
+
+int qman_shutdown_fq(u32 fqid);
+
+/**
+ * qman_reserve_fqid_range - Reserve the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to deallocate
+ * @count: the number of FQIDs in the range
+ */
+int qman_reserve_fqid_range(u32 fqid, unsigned int count);
+static inline int qman_reserve_fqid(u32 fqid)
+{
+	return qman_reserve_fqid_range(fqid, 1);
+}
+
+/* Pool-channel management */
+/**
+ * qman_alloc_pool_range - Allocate a contiguous range of pool-channel IDs
+ * @result: is set by the API to the base pool-channel ID of the allocated range
+ * @count: the number of pool-channel IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of pool-channel IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_pool(u32 *result)
+{
+	int ret = qman_alloc_pool_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_pool_range - Release the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to deallocate
+ * @count: the number of pool-channel IDs in the range
+ */
+void qman_release_pool_range(u32 id, unsigned int count);
+static inline void qman_release_pool(u32 id)
+{
+	qman_release_pool_range(id, 1);
+}
+
+/**
+ * qman_reserve_pool_range - Reserve the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to reserve
+ * @count: the number of pool-channel IDs in the range
+ */
+int qman_reserve_pool_range(u32 id, unsigned int count);
+static inline int qman_reserve_pool(u32 id)
+{
+	return qman_reserve_pool_range(id, 1);
+}
+
+void qman_seed_pool_range(u32 id, unsigned int count);
+
+	/* CGR management */
+	/* -------------- */
+/**
+ * qman_create_cgr - Register a congestion group object
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: optional state of CGR settings
+ *
+ * Registers this object to receive congestion entry/exit callbacks on the
+ * portal affine to the cpu on which this API is executed. If opts is
+ * NULL then only the callback (cgr->cb) function is registered. If @flags
+ * contains QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset
+ * any unspecified parameters) will be used rather than a modify hw hardware
+ * (which only modifies the specified parameters).
+ */
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_create_cgr_to_dcp - Register a congestion group object to DCP portal
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @dcp_portal: the DCP portal to which the cgr object is registered.
+ * @opts: optional state of CGR settings
+ *
+ */
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_delete_cgr - Deregisters a congestion group object
+ * @cgr: the 'cgr' object to deregister
+ *
+ * "Unplugs" this CGR object from the portal affine to the cpu on which this API
+ * is executed. This must be executed on the same affine portal on which it was
+ * created.
+ */
+int qman_delete_cgr(struct qman_cgr *cgr);
+
+/**
+ * qman_modify_cgr - Modify CGR fields
+ * @cgr: the 'cgr' object to modify
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: the CGR-modification settings
+ *
+ * The @opts parameter comes from the low-level portal API, and can be NULL.
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver, in particular the 'cgrid' field is ignored (this operation
+ * only affects the given CGR object). If @flags contains
+ * QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset any
+ * unspecified parameters) will be used rather than a modify hw command (which
+ * only modifies the specified parameters).
+ */
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_query_cgr - Queries CGR fields
+ * @cgr: the 'cgr' object to query
+ * @result: storage for the queried congestion group record
+ */
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *result);
+
+/**
+ * qman_query_congestion - Queries the state of all congestion groups
+ * @congestion: storage for the queried state of all congestion groups
+ */
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
+
+/**
+ * qman_alloc_cgrid_range - Allocate a contiguous range of CGR IDs
+ * @result: is set by the API to the base CGR ID of the allocated range
+ * @count: the number of CGR IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of CGR IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_cgrid(u32 *result)
+{
+	int ret = qman_alloc_cgrid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_cgrid_range - Release the specified range of CGR IDs
+ * @id: the base CGR ID of the range to deallocate
+ * @count: the number of CGR IDs in the range
+ */
+void qman_release_cgrid_range(u32 id, unsigned int count);
+static inline void qman_release_cgrid(u32 id)
+{
+	qman_release_cgrid_range(id, 1);
+}
+
+/**
+ * qman_reserve_cgrid_range - Reserve the specified range of CGR ID
+ * @id: the base CGR ID of the range to reserve
+ * @count: the number of CGR IDs in the range
+ */
+int qman_reserve_cgrid_range(u32 id, unsigned int count);
+static inline int qman_reserve_cgrid(u32 id)
+{
+	return qman_reserve_cgrid_range(id, 1);
+}
+
+void qman_seed_cgrid_range(u32 id, unsigned int count);
+
+	/* Helpers */
+	/* ------- */
+/**
+ * qman_poll_fq_for_init - Check if an FQ has been initialised from OOS
+ * @fqid: the FQID that will be initialised by other s/w
+ *
+ * In many situations, a FQID is provided for communication between s/w
+ * entities, and whilst the consumer is responsible for initialising and
+ * scheduling the FQ, the producer(s) generally create a wrapper FQ object and
+ * only call qman_enqueue() (no FQ initialisation, scheduling, etc). Ie;
+ *     qman_create_fq(..., QMAN_FQ_FLAG_NO_MODIFY, ...);
+ * However, data can not be enqueued to the FQ until it is initialised out of
+ * the OOS state - this function polls for that condition. It is particularly
+ * useful for users of IPC functions - each endpoint's Rx FQ is the other
+ * endpoint's Tx FQ, so each side can initialise and schedule their Rx FQ object
+ * and then use this API on the (NO_MODIFY) Tx FQ object in order to
+ * synchronise. The function returns zero for success, +1 if the FQ is still in
+ * the OOS state, or negative if there was an error.
+ */
+static inline int qman_poll_fq_for_init(struct qman_fq *fq)
+{
+	struct qm_mcr_queryfq_np np;
+	int err;
+
+	err = qman_query_fq_np(fq, &np);
+	if (err)
+		return err;
+	if ((np.state & QM_MCR_NP_STATE_MASK) == QM_MCR_NP_STATE_OOS)
+		return 1;
+	return 0;
+}
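+
+/*
+ * For example (hypothetical producer side): after
+ *	qman_create_fq(fqid, QMAN_FQ_FLAG_NO_MODIFY, &fq);
+ * spin until the consumer has initialised the FQ,
+ *	while (qman_poll_fq_for_init(&fq) == 1)
+ *		;
+ * and only then start calling qman_enqueue() on &fq.
+ */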
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define cpu_to_hw_sg(x) (x)
+#define hw_sg_to_cpu(x) (x)
+#else
+#define cpu_to_hw_sg(x)  __cpu_to_hw_sg(x)
+#define hw_sg_to_cpu(x)  __hw_sg_to_cpu(x)
+
+static inline void __cpu_to_hw_sg(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = cpu_to_be64(sgentry->opaque);
+	sgentry->val = cpu_to_be32(sgentry->val);
+	sgentry->val_off = cpu_to_be16(sgentry->val_off);
+}
+
+static inline void __hw_sg_to_cpu(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = be64_to_cpu(sgentry->opaque);
+	sgentry->val = be32_to_cpu(sgentry->val);
+	sgentry->val_off = be16_to_cpu(sgentry->val_off);
+}
+#endif
 
 #ifdef __cplusplus
 }
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index b0d953f..a4897b0 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -42,6 +42,7 @@
 #define __FSL_USD_H
 
 #include <compat.h>
+#include <fsl_qman.h>
 
 #ifdef __cplusplus
 extern "C" {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 12/40] bus/dpaa: add BMAN driver core
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (10 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 13/40] bus/dpaa: support FMAN frame queue lookup Shreyansh Jain
                           ` (28 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Buffer Manager (BMan) is a hardware buffer pool management block that
allows software and accelerators on the datapath to acquire and release
buffers in order to build frames.

This patch adds the core routines.
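
To make the expected usage concrete, a minimal and hypothetical bring-up
sequence for an I/O thread, based only on the hooks this patch adds to
fsl_usd.h (error handling trimmed), would be:

    #include <fsl_usd.h>

    int dpaa_io_thread_setup(void)           /* hypothetical helper */
    {
        int ret;

        /* Once per process: probe BMan via the device-tree */
        ret = bman_global_init();
        if (ret && ret != -EBUSY)             /* -EBUSY: already done */
            return ret;

        /* Once per thread: map a cpu-affine BMan portal; the calling
         * thread must be affine to exactly one CPU.
         */
        ret = bman_thread_init();
        if (ret)
            return ret;

        /* ... datapath work ... */

        return bman_thread_finish();          /* unmap on teardown */
    }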

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman_driver.c | 311 +++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_priv.h   | 125 ++++++++++
 drivers/bus/dpaa/include/fsl_bman.h       | 375 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   5 +
 5 files changed, 817 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 5957c15..e1415e4 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -63,6 +63,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
 	base/qbman/dpaa_alloc.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
new file mode 100644
index 0000000..fb3c50e
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -0,0 +1,311 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_branch_prediction.h>
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "bman_priv.h"
+#include <sys/ioctl.h>
+
+/*
+ * Global variables for the max portal/pool numbers this BMan version supports
+ */
+u16 bman_ip_rev;
+u16 bman_pool_max;
+void *bman_ccsr_map;
+
+/*****************/
+/* Portal driver */
+/*****************/
+
+static __thread int fd = -1;
+static __thread struct bm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_bman
+};
+
+static int fsl_bman_portal_init(uint32_t idx, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!");
+		return -EINVAL;
+	}
+	/* Allocate and map a bman portal */
+	map.index = idx;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+	pcfg.is_shared = is_shared;
+	pcfg.index = map.index;
+	bman_depletion_fill(&pcfg.mask);
+
+	fd = open(BMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("BMan irq init failed");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+	/* Use the IRQ FD as a unique IRQ number */
+	pcfg.irq = fd;
+
+	/* Set the IRQ number */
+	irq_map.type = dpaa_portal_bman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_bman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int bman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_bman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int bman_thread_finish(void)
+{
+	return fsl_bman_portal_finish();
+}
+
+void bman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int bman_init_ccsr(const struct device_node *node)
+{
+	static int ccsr_map_fd;
+	uint64_t phys_addr;
+	const uint32_t *bman_addr;
+	uint64_t regs_size;
+
+	bman_addr = of_get_address(node, 0, &regs_size, NULL);
+	if (!bman_addr) {
+		pr_err("of_get_address cannot return BMan address");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(node, bman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open(BMAN_CCSR_MAP, O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Cannot open /dev/mem for BMan CCSR map");
+		return ccsr_map_fd;
+	}
+
+	bman_ccsr_map = mmap(NULL, regs_size, PROT_READ |
+			     PROT_WRITE, MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (bman_ccsr_map == MAP_FAILED) {
+		pr_err("Cannot map BMan CCSR base: "
+		       "0x%x, phys: 0x%lx, size: 0x%lx",
+		       *bman_addr, phys_addr, regs_size);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int bman_global_init(void)
+{
+	const struct device_node *dt_node;
+	static int done;
+
+	if (done)
+		return -EBUSY;
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,bman-portal");
+	if (!dt_node) {
+		pr_err("No bman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,bman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,bman-portal-1.0.0")) {
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.0.8")) {
+		bman_ip_rev = BMAN_REV20;
+		bman_pool_max = 8;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.3")) {
+		bman_ip_rev = BMAN_REV21;
+		bman_pool_max = 64;
+	} else {
+		pr_warn("unknown BMan version in portal node, defaulting "
+			"to rev1.0");
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	}
+
+	if (!bman_ip_rev) {
+		pr_err("Unknown bman portal version\n");
+		return -ENODEV;
+	}
+	{
+		const struct device_node *dn = of_find_compatible_node(NULL,
+							NULL, "fsl,bman");
+		if (!dn)
+			pr_err("No bman device node available");
+
+		if (bman_init_ccsr(dn))
+			pr_err("BMan CCSR map failed.");
+	}
+
+	done = 1;
+	return 0;
+}
+
+#define BMAN_POOL_CONTENT(n) (0x0600 + ((n) * 0x04))
+u32 bm_pool_free_buffers(u32 bpid)
+{
+	return in_be32(bman_ccsr_map + BMAN_POOL_CONTENT(bpid));
+}
+
+static u32 __generate_thresh(u32 val, int roundup)
+{
+	u32 e = 0;      /* exponent; 'val' becomes the 8-bit mantissa */
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	DPAA_ASSERT(e < 0x10);
+	return (val | (e << 8));
+}
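+
+/*
+ * For example, __generate_thresh(1024, 0) shifts 1024 down to the 8-bit
+ * mantissa 0x80 with exponent 3 and returns 0x380 (ie. 0x80 << 3 == 1024),
+ * which is the mantissa/exponent encoding the threshold registers expect.
+ */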
+
+#define POOL_SWDET(n)       (0x0000 + ((n) * 0x04))
+#define POOL_HWDET(n)       (0x0100 + ((n) * 0x04))
+#define POOL_SWDXT(n)       (0x0200 + ((n) * 0x04))
+#define POOL_HWDXT(n)       (0x0300 + ((n) * 0x04))
+int bm_pool_set(u32 bpid, const u32 *thresholds)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	out_be32(bman_ccsr_map + POOL_SWDET(bpid),
+		 __generate_thresh(thresholds[0], 0));
+	out_be32(bman_ccsr_map + POOL_SWDXT(bpid),
+		 __generate_thresh(thresholds[1], 1));
+	out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+		 __generate_thresh(thresholds[2], 0));
+	out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+		 __generate_thresh(thresholds[3], 1));
+	return 0;
+}
+
+#define BMAN_LOW_DEFAULT_THRESH		0x40
+#define BMAN_HIGH_DEFAULT_THRESH		0x80
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	if (low_thresh && high_thresh) {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(low_thresh, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(high_thresh, 1));
+	} else {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(BMAN_LOW_DEFAULT_THRESH, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(BMAN_HIGH_DEFAULT_THRESH, 1));
+	}
+	return 0;
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman_priv.h b/drivers/bus/dpaa/base/qbman/bman_priv.h
new file mode 100644
index 0000000..07d9cec
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_priv.h
@@ -0,0 +1,125 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_PRIV_H
+#define __BMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_bman.h>
+
+/* Revision info (for errata and feature handling) */
+#define BMAN_REV10 0x0100
+#define BMAN_REV20 0x0200
+#define BMAN_REV21 0x0201
+
+#define BMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+#define BMAN_CCSR_MAP "/dev/mem"
+
+/* This mask contains all the "irqsource" bits visible to API users */
+#define BM_PIRQ_VISIBLE	(BM_PIRQ_RCRI | BM_PIRQ_BSCN)
+
+/* These are bm_<reg>_<verb>(). So for example, bm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define bm_isr_status_read(bm)		__bm_isr_read(bm, bm_isr_status)
+#define bm_isr_status_clear(bm, m)	__bm_isr_write(bm, bm_isr_status, m)
+#define bm_isr_enable_read(bm)		__bm_isr_read(bm, bm_isr_enable)
+#define bm_isr_enable_write(bm, v)	__bm_isr_write(bm, bm_isr_enable, v)
+#define bm_isr_disable_read(bm)		__bm_isr_read(bm, bm_isr_disable)
+#define bm_isr_disable_write(bm, v)	__bm_isr_write(bm, bm_isr_disable, v)
+#define bm_isr_inhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 1)
+#define bm_isr_uninhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 0)
+
+/*
+ * Global variables for the max portal/pool numbers this BMan version supports
+ */
+extern u16 bman_pool_max;
+
+/* used by CCSR and portal interrupt code */
+enum bm_isr_reg {
+	bm_isr_status = 0,
+	bm_isr_enable = 1,
+	bm_isr_disable = 2,
+	bm_isr_inhibit = 3
+};
+
+struct bm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* This is used for any "core-affine" portals, ie. default portals
+	 * associated with the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.)
+	 */
+	int is_shared;
+	/* These are the buffer pool IDs that may be used via this portal. */
+	struct bman_depletion mask;
+
+};
+
+int bman_init_ccsr(const struct device_node *node);
+
+struct bman_portal *bman_create_affine_portal(
+			const struct bm_portal_config *config);
+const struct bm_portal_config *bman_destroy_affine_portal(void);
+
+/* Set depletion thresholds associated with a buffer pool. Requires that the
+ * operating system have access to Bman CCSR (ie. compiled in support and
+ * run-time access courtesy of the device-tree).
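+ *
+ * The 4-element @thresholds array is ordered: s/w depletion entry, s/w
+ * depletion exit, h/w depletion entry, h/w depletion exit, matching the
+ * POOL_SWDET/SWDXT/HWDET/HWDXT register writes in bm_pool_set().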
+ */
+int bm_pool_set(u32 bpid, const u32 *thresholds);
+
+/* Read the free buffer count for a given buffer pool */
+u32 bm_pool_free_buffers(u32 bpid);
+
+#endif /* __BMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
new file mode 100644
index 0000000..383106b
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -0,0 +1,375 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_BMAN_H
+#define __FSL_BMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* This wrapper represents a bit-array for the depletion state of the 64 Bman
+ * buffer pools.
+ */
+struct bman_depletion {
+	u32 state[2];
+};
+
+static inline void bman_depletion_init(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = 0;
+}
+
+static inline void bman_depletion_fill(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = ~0;
+}
+
+/* --- Bman data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct bm_rcr_entry;	/* RCR (Release Command Ring) entries */
+struct bm_mc_command;	/* MC (Management Command) command */
+struct bm_mc_result;	/* MC result */
+
+/* Code reduction: define a wrapper for 48-bit buffers. In cases where a buffer
+ * pool id specific to this buffer is needed (BM_RCR_VERB_CMD_BPID_MULTI,
+ * BM_MCC_VERB_ACQUIRE), the 'bpid' field is used.
+ */
+struct bm_buffer {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1;
+			u8 bpid;
+			u16 hi; /* High 16-bits of 48-bit address */
+			u32 lo; /* Low 32-bits of 48-bit address */
+#else
+			u32 lo;
+			u16 hi;
+			u8 bpid;
+			u8 __reserved;
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:16;
+			u64 addr:48;
+#else
+			u64 addr:48;
+			u64 __notaddress:16;
+#endif
+		};
+		u64 opaque;
+	};
+} __attribute__((aligned(8)));
+static inline u64 bm_buffer_get64(const struct bm_buffer *buf)
+{
+	return buf->addr;
+}
+
+static inline dma_addr_t bm_buf_addr(const struct bm_buffer *buf)
+{
+	return (dma_addr_t)buf->addr;
+}
+
+#define bm_buffer_set64(buf, v) \
+	do { \
+		struct bm_buffer *__buf931 = (buf); \
+		__buf931->hi = upper_32_bits(v); \
+		__buf931->lo = lower_32_bits(v); \
+	} while (0)
+
+/* See 1.5.3.5.4: "Release Command" */
+struct bm_rcr_entry {
+	union {
+		struct {
+			u8 __dont_write_directly__verb;
+			u8 bpid; /* used with BM_RCR_VERB_CMD_BPID_SINGLE */
+			u8 __reserved1[62];
+		};
+		struct bm_buffer bufs[8];
+	};
+} __packed;
+#define BM_RCR_VERB_VBIT		0x80
+#define BM_RCR_VERB_CMD_MASK		0x70	/* one of two values; */
+#define BM_RCR_VERB_CMD_BPID_SINGLE	0x20
+#define BM_RCR_VERB_CMD_BPID_MULTI	0x30
+#define BM_RCR_VERB_BUFCOUNT_MASK	0x0f	/* values 1..8 */
+
+/* See 1.5.3.1: "Acquire Command" */
+/* See 1.5.3.2: "Query Command" */
+struct bm_mcc_acquire {
+	u8 bpid;
+	u8 __reserved1[62];
+} __packed;
+struct bm_mcc_query {
+	u8 __reserved2[63];
+} __packed;
+struct bm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct bm_mcc_acquire acquire;
+		struct bm_mcc_query query;
+	};
+} __packed;
+#define BM_MCC_VERB_VBIT		0x80
+#define BM_MCC_VERB_CMD_MASK		0x70	/* where the verb contains; */
+#define BM_MCC_VERB_CMD_ACQUIRE		0x10
+#define BM_MCC_VERB_CMD_QUERY		0x40
+#define BM_MCC_VERB_ACQUIRE_BUFCOUNT	0x0f	/* values 1..8 go here */
+
+/* See 1.5.3.3: "Acquire Response" */
+/* See 1.5.3.4: "Query Response" */
+struct bm_pool_state {
+	u8 __reserved1[32];
+	/* "availability state" and "depletion state" */
+	struct {
+		u8 __reserved1[8];
+		/* Access using bman_depletion_***() */
+		struct bman_depletion state;
+	} as, ds;
+};
+
+struct bm_mc_result {
+	union {
+		struct {
+			u8 verb;
+			u8 __reserved1[63];
+		};
+		union {
+			struct {
+				u8 __reserved1;
+				u8 bpid;
+				u8 __reserved2[62];
+			};
+			struct bm_buffer bufs[8];
+		} acquire;
+		struct bm_pool_state query;
+	};
+} __packed;
+#define BM_MCR_VERB_VBIT		0x80
+#define BM_MCR_VERB_CMD_MASK		BM_MCC_VERB_CMD_MASK
+#define BM_MCR_VERB_CMD_ACQUIRE		BM_MCC_VERB_CMD_ACQUIRE
+#define BM_MCR_VERB_CMD_QUERY		BM_MCC_VERB_CMD_QUERY
+#define BM_MCR_VERB_CMD_ERR_INVALID	0x60
+#define BM_MCR_VERB_CMD_ERR_ECC		0x70
+#define BM_MCR_VERB_ACQUIRE_BUFCOUNT	BM_MCC_VERB_ACQUIRE_BUFCOUNT /* 0..8 */
+
+/* Portal and Buffer Pools */
+/* Represents a managed portal */
+struct bman_portal;
+
+/* This object type represents Bman buffer pools. */
+struct bman_pool;
+
+/* This struct specifies parameters for a bman_pool object. */
+struct bman_pool_params {
+	/* index of the buffer pool to encapsulate (0-63), ignored if
+	 * BMAN_POOL_FLAG_DYNAMIC_BPID is set.
+	 */
+	u32 bpid;
+	/* bit-mask of BMAN_POOL_FLAG_*** options */
+	u32 flags;
+	/* depletion-entry/exit thresholds, if BMAN_POOL_FLAG_THRESH is set. NB:
+	 * this is only allowed if BMAN_POOL_FLAG_DYNAMIC_BPID is used *and*
+	 * when run in the control plane (which controls Bman CCSR). This array
+	 * matches the definition of bm_pool_set().
+	 */
+	u32 thresholds[4];
+};
+
+/* Flags to bman_new_pool() */
+#define BMAN_POOL_FLAG_NO_RELEASE    0x00000001 /* can't release to pool */
+#define BMAN_POOL_FLAG_ONLY_RELEASE  0x00000002 /* can only release to pool */
+#define BMAN_POOL_FLAG_DYNAMIC_BPID  0x00000008 /* (de)allocate bpid */
+#define BMAN_POOL_FLAG_THRESH        0x00000010 /* set depletion thresholds */
+
+/* Flags to bman_release() */
+#define BMAN_RELEASE_FLAG_NOW        0x00000008 /* issue immediate release */
+
+
+/**
+ * bman_get_portal_index - get portal configuration index
+ */
+int bman_get_portal_index(void);
+
+/**
+ * bman_rcr_is_empty - Determine if portal's RCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * releases for the local portal have been processed by Bman but can't use the
+ * BMAN_RELEASE_FLAG_WAIT_SYNC flag to do this from the final bman_release().
+ * The function forces tracking of RCR consumption (which normally doesn't
+ * happen until release processing needs to find space to put new release
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int bman_rcr_is_empty(void);
+
+/**
+ * bman_alloc_bpid_range - Allocate a contiguous range of BPIDs
+ * @result: is set by the API to the base BPID of the allocated range
+ * @count: the number of BPIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count BPIDs
+ *
+ * Returns the number of buffer pools allocated, or a negative error code. If
+ * @partial is non zero, the allocation request may return a smaller range of
+ * BPs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int bman_alloc_bpid(u32 *result)
+{
+	int ret = bman_alloc_bpid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * bman_release_bpid_range - Release the specified range of buffer pool IDs
+ * @bpid: the base BPID of the range to deallocate
+ * @count: the number of BPIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of BPIDs
+ * that it can subsequently allocate from.
+ */
+void bman_release_bpid_range(u32 bpid, unsigned int count);
+static inline void bman_release_bpid(u32 bpid)
+{
+	bman_release_bpid_range(bpid, 1);
+}
+
+int bman_reserve_bpid_range(u32 bpid, unsigned int count);
+static inline int bman_reserve_bpid(u32 bpid)
+{
+	return bman_reserve_bpid_range(bpid, 1);
+}
+
+void bman_seed_bpid_range(u32 bpid, unsigned int count);
+
+int bman_shutdown_pool(u32 bpid);
+
+/**
+ * bman_new_pool - Allocates a Buffer Pool object
+ * @params: parameters specifying the buffer pool ID and behaviour
+ *
+ * Creates a pool object for the given @params. NB, the fields from @params
+ * are copied into the new pool object, so the structure provided by the
+ * caller can be released or reused after the function returns.
+ */
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
+
+/**
+ * bman_free_pool - Deallocates a Buffer Pool object
+ * @pool: the pool object to release
+ */
+void bman_free_pool(struct bman_pool *pool);
+
+/**
+ * bman_get_params - Returns a pool object's parameters.
+ * @pool: the pool object
+ *
+ * The returned pointer refers to state within the pool object so must not be
+ * modified and can no longer be read once the pool object is destroyed.
+ */
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
+
+/**
+ * bman_release - Release buffer(s) to the buffer pool
+ * @pool: the buffer pool object to release to
+ * @bufs: an array of buffers to release
+ * @num: the number of buffers in @bufs (1-8)
+ * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
+ *
+ */
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
+/**
+ * bman_acquire - Acquire buffer(s) from a buffer pool
+ * @pool: the buffer pool object to acquire from
+ * @bufs: array for storing the acquired buffers
+ * @num: the number of buffers desired (@bufs is at least this big)
+ *
+ * Issues an "Acquire" command via the portal's management command interface.
+ * The return value will be the number of buffers obtained from the pool, or a
+ * negative error code if a h/w error or pool starvation was encountered.
+ */
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
+/**
+ * bman_query_pools - Query all buffer pool states
+ * @state: storage for the queried availability and depletion states
+ */
+int bman_query_pools(struct bm_pool_state *state);
+
+/**
+ * bman_query_free_buffers - Query how many free buffers are in buffer pool
+ * @pool: the buffer pool object to query
+ *
+ * Returns the number of free buffers in the pool
+ */
+u32 bman_query_free_buffers(struct bman_pool *pool);
+
+/**
+ * bman_update_pool_thresholds - Change the buffer pool's depletion thresholds
+ * @pool: the buffer pool object to which the thresholds will be set
+ * @thresholds: the new thresholds
+ */
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds);
+
+/**
+ * bm_pool_set_hw_threshold - Change the buffer pool's hardware thresholds
+ * @bpid: the buffer pool ID
+ * @low_thresh: low threshold
+ * @high_thresh: high threshold
+ */
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_BMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index a4897b0..a3243af 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -50,7 +50,9 @@ extern "C" {
 
 /* Thread-entry/exit hooks; */
 int qman_thread_init(void);
+int bman_thread_init(void);
 int qman_thread_finish(void);
+int bman_thread_finish(void);
 
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
@@ -92,9 +94,12 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal);
  * into another blocking read/select/poll.
  */
 void qman_thread_irq(void);
+void bman_thread_irq(void);
 
 /* Global setup */
 int qman_global_init(void);
+int bman_global_init(void);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 13/40] bus/dpaa: support FMAN frame queue lookup
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (11 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
                           ` (27 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

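On 64-bit user-space, an FQ object pointer no longer fits in the 32-bit
context_b/tag fields that the hardware echoes back in dequeue and message
ring entries. This patch therefore adds a lookup table mapping compact
32-bit keys to FQ objects: qman_create_fq() reserves a key, qman_init_fq()
programs it into context_b, and the poll paths recover the FQ object from
the key instead of casting the pointer.

A simplified, hypothetical sketch of the scheme (the real helpers below
add locking and skip index zero, which has special meaning in context_b):

    static struct qman_fq *table[CONFIG_FSL_QMAN_FQ_LOOKUP_MAX];

    static u32 fq_to_key(struct qman_fq *fq)       /* illustrative only */
    {
        u32 i;

        for (i = 1; i < CONFIG_FSL_QMAN_FQ_LOOKUP_MAX; i++)
            if (!table[i]) {
                table[i] = fq;
                return i;   /* programmed into the FQD's context_b */
            }
        return 0;           /* table full; 0 is never a valid key */
    }

    static struct qman_fq *key_to_fq(u32 key)      /* illustrative only */
    {
        /* what the dequeue path does with dq->contextB */
        return table[key];
    }
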
Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/qbman/qman.c        | 99 ++++++++++++++++++++++++++++++-
 drivers/bus/dpaa/base/qbman/qman_driver.c |  7 ++-
 drivers/bus/dpaa/base/qbman/qman_priv.h   |  7 +++
 drivers/bus/dpaa/include/fsl_qman.h       | 12 ++++
 4 files changed, 122 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 9b1630b..8c8d270 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -176,6 +176,65 @@ static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
 	return fqtree_find(&p->retire_table, fqid);
 }
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+static void **qman_fq_lookup_table;
+static size_t qman_fq_lookup_table_size;
+
+int qman_setup_fq_lookup_table(size_t num_entries)
+{
+	/* Allocate 1 more entry since the first entry is not used */
+	num_entries++;
+	qman_fq_lookup_table = vmalloc((num_entries * sizeof(void *)));
+	if (!qman_fq_lookup_table) {
+		pr_err("QMan: Could not allocate fq lookup table\n");
+		return -ENOMEM;
+	}
+	memset(qman_fq_lookup_table, 0, num_entries * sizeof(void *));
+	qman_fq_lookup_table_size = num_entries;
+	pr_debug("QMan: Allocated lookup table at %p, entry count %lu\n",
+		qman_fq_lookup_table,
+			(unsigned long)qman_fq_lookup_table_size);
+	return 0;
+}
+
+/* global structure that maintains fq object mapping */
+static DEFINE_SPINLOCK(fq_hash_table_lock);
+
+static int find_empty_fq_table_entry(u32 *entry, struct qman_fq *fq)
+{
+	u32 i;
+
+	spin_lock(&fq_hash_table_lock);
+	/* Can't use index zero because it has a special meaning in the
+	 * context_b field.
+	 */
+	for (i = 1; i < qman_fq_lookup_table_size; i++) {
+		if (qman_fq_lookup_table[i] == NULL) {
+			*entry = i;
+			qman_fq_lookup_table[i] = fq;
+			spin_unlock(&fq_hash_table_lock);
+			return 0;
+		}
+	}
+	spin_unlock(&fq_hash_table_lock);
+	return -ENOMEM;
+}
+
+static void clear_fq_table_entry(u32 entry)
+{
+	spin_lock(&fq_hash_table_lock);
+	DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
+	qman_fq_lookup_table[entry] = NULL;
+	spin_unlock(&fq_hash_table_lock);
+}
+
+static inline struct qman_fq *get_fq_table_entry(u32 entry)
+{
+	DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
+	return qman_fq_lookup_table[entry];
+}
+#endif
+
 static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
 {
 	/* Byteswap the FQD to HW format */
@@ -766,8 +825,13 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 				break;
 			case QM_MR_VERB_FQPN:
 				/* Parked */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+				fq = get_fq_table_entry(
+					be32_to_cpu(msg->fq.contextB));
+#else
 				fq = (void *)(uintptr_t)
 					be32_to_cpu(msg->fq.contextB);
+#endif
 				fq_state_change(p, fq, msg, verb);
 				if (fq->cb.fqs)
 					fq->cb.fqs(p, fq, &swapped_msg);
@@ -792,7 +856,11 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 			}
 		} else {
 			/* It's a software ERN */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(be32_to_cpu(msg->ern.tag));
+#else
 			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+#endif
 			fq->cb.ern(p, fq, &swapped_msg);
 		}
 		num++;
@@ -907,7 +975,11 @@ static inline unsigned int __poll_portal_fast(struct qman_portal *p,
 				clear_vdqcr(p, fq);
 		} else {
 			/* SDQCR: context_b points to the FQ */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(dq->contextB);
+#else
 			fq = (void *)(uintptr_t)dq->contextB;
+#endif
 			/* Now let the callback do its stuff */
 			res = fq->cb.dqrr(p, fq, dq);
 			/*
@@ -1119,7 +1191,12 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
 	fq->flags = flags;
 	fq->state = qman_fq_state_oos;
 	fq->cgr_groupid = 0;
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	if (unlikely(find_empty_fq_table_entry(&fq->key, fq))) {
+		pr_info("Find empty table entry failed\n");
+		return -ENOMEM;
+	}
+#endif
 	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
 		return 0;
 	/* Everything else is AS_IS support */
@@ -1193,7 +1270,9 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
 	case qman_fq_state_oos:
 		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
 			qman_release_fqid(fq->fqid);
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		clear_fq_table_entry(fq->key);
+#endif
 		return;
 	default:
 		break;
@@ -1258,7 +1337,11 @@ int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
 		dma_addr_t phys_fq;
 
 		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		mcc->initfq.fqd.context_b = fq->key;
+#else
 		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+#endif
 		/*
 		 *  and the physical address - NB, if the user wasn't trying to
 		 * set CONTEXTA, clear the stashing settings.
@@ -1419,7 +1502,11 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
 			msg.verb = QM_MR_VERB_FQRNI;
 			msg.fq.fqs = mcr->alterfq.fqs;
 			msg.fq.fqid = fq->fqid;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			msg.fq.contextB = fq->key;
+#else
 			msg.fq.contextB = (u32)(uintptr_t)fq;
+#endif
 			fq->cb.fqs(p, fq, &msg);
 		}
 	} else if (res == QM_MCR_RESULT_PENDING) {
@@ -1861,7 +1948,11 @@ static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
 					QM_EQCR_DCA_PARK : 0) |
 			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
 	eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	eq->tag = cpu_to_be32(fq->key);
+#else
 	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 	eq->fd = *fd;
 	cpu_to_hw_fd(&eq->fd);
 	return eq;
@@ -1907,7 +1998,11 @@ int qman_enqueue_multi(struct qman_fq *fq,
 	/* try to send as many frames as possible */
 	while (eqcr->available && frames_to_send--) {
 		eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		eq->tag = cpu_to_be32(fq->key);
+#else
 		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 		eq->fd.opaque_addr = fd->opaque_addr;
 		eq->fd.addr = cpu_to_be40(fd->addr);
 		eq->fd.status = cpu_to_be32(fd->status);
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 90fb130..7a68896 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -279,5 +279,10 @@ int qman_global_init(void)
 	else
 		qman_clk = be32_to_cpu(*clk);
 
-	return ret;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	ret = qman_setup_fq_lookup_table(CONFIG_FSL_QMAN_FQ_LOOKUP_MAX);
+	if (ret)
+		return ret;
+#endif
+	return 0;
 }
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index 4a11e40..3e1d7f9 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -197,6 +197,13 @@ void qm_set_liodns(struct qm_portal_config *pcfg);
 int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
 		       struct qm_mcr_cgrtestwrite *result);
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+/* If the fq object pointer is wider than the context_b field (e.g. on
+ * 64-bit user-space), then a lookup table is required.
+ */
+int qman_setup_fq_lookup_table(size_t num_entries);
+#endif
+
 /*   QMan s/w corenet portal, low-level i/face	 */
 
 /*
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 85ae13b..eedfd7e 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -46,6 +46,15 @@ extern "C" {
 
 #include <dpaa_rbtree.h>
 
+/* FQ lookups (turned on automatically for 64-bit user-space) */
+#if (__WORDSIZE == 64)
+#define CONFIG_FSL_QMAN_FQ_LOOKUP
+/* if FQ lookups are supported, this controls the number of initialised,
+ * s/w-consumed FQs that can be supported at any one time.
+ */
+#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
+#endif
+
 /* Last updated for v00.800 of the BG */
 
 /* Hardware constants */
@@ -1228,6 +1237,9 @@ struct qman_fq {
 	enum qman_fq_state state;
 	int cgr_groupid;
 	struct rb_node node;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	u32 key;
+#endif
 };
 
 /*
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 14/40] bus/dpaa: add BMan hardware interfaces
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (12 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 13/40] bus/dpaa: support FMAN frame queue lookup Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
                           ` (26 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

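This patch adds the low-level BMan portal interfaces: the RCR (Release
Command Ring) and management-command accessors behind bman_release() and
bman_acquire(), plus portal create/destroy and dynamic BPID allocation.

Once a thread's portal is up, a minimal and hypothetical user of these
interfaces (buf_phys_addr and use_buffer() are assumed, illustrative
names) looks like:

    #include <fsl_bman.h>

    struct bman_pool_params params = {
        .flags = BMAN_POOL_FLAG_DYNAMIC_BPID,  /* allocate a free BPID */
    };
    struct bman_pool *pool = bman_new_pool(&params);
    struct bm_buffer buf;

    bm_buffer_set64(&buf, buf_phys_addr);      /* 48-bit buffer address */
    bman_release(pool, &buf, 1, 0);            /* seed one buffer */
    if (bman_acquire(pool, &buf, 1, 0) == 1)   /* returns count acquired */
        use_buffer(bm_buf_addr(&buf));
    bman_free_pool(pool);
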
Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman.c        | 394 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman.h        | 550 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_driver.c |  12 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |  16 +
 5 files changed, 973 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index e1415e4..61b6432 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -63,6 +63,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman.c \
 	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman.c b/drivers/bus/dpaa/base/qbman/bman.c
new file mode 100644
index 0000000..0480caa
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.c
@@ -0,0 +1,394 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "bman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define RCR_THRESH	2	/* reread h/w CI when running out of space */
+#define IRQNAME		"BMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "BMan portal %d" */
+
+struct bman_portal {
+	struct bm_portal p;
+	/* 2-element array. pools[0] is mask, pools[1] is snapshot. */
+	struct bman_depletion *pools;
+	int thresh_set;
+	unsigned long irq_sources;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct bm_portal_config *config;
+	char irqname[MAX_IRQNAME];
+};
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static RTE_DEFINE_PER_LCORE(struct bman_portal, bman_affine_portal);
+
+static inline struct bman_portal *get_affine_portal(void)
+{
+	return &RTE_PER_LCORE(bman_affine_portal);
+}
+
+/*
+ * This object type refers to a pool; it isn't *the* pool. There may be
+ * more than one such object per BMan buffer pool, eg. if different users of
+ * the pool are operating via different portals.
+ */
+struct bman_pool {
+	struct bman_pool_params params;
+	/* Used for hash-table admin when using depletion notifications. */
+	struct bman_portal *portal;
+	struct bman_pool *next;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	atomic_t in_use;
+#endif
+};
+
+static inline
+struct bman_portal *bman_create_portal(struct bman_portal *portal,
+				       const struct bm_portal_config *c)
+{
+	struct bm_portal *p;
+	const struct bman_depletion *pools = &c->mask;
+	int ret;
+	u8 bpid = 0;
+
+	p = &portal->p;
+	/*
+	 * Prep the low-level portal struct with the mapped addresses from the
+	 * config; everything that follows depends on it, and "config" is kept
+	 * mostly for (de)reference.
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	if (bm_rcr_init(p, bm_rcr_pvb, bm_rcr_cce)) {
+		pr_err("Bman RCR initialisation failed\n");
+		return NULL;
+	}
+	if (bm_mc_init(p)) {
+		pr_err("Bman MC initialisation failed\n");
+		goto fail_mc;
+	}
+	portal->pools = kmalloc(2 * sizeof(*pools), GFP_KERNEL);
+	if (!portal->pools)
+		goto fail_pools;
+	portal->pools[0] = *pools;
+	bman_depletion_init(portal->pools + 1);
+	while (bpid < bman_pool_max) {
+		/*
+		 * Default to all BPIDs disabled, we enable as required at
+		 * run-time.
+		 */
+		bm_isr_bscn_mask(p, bpid, 0);
+		bpid++;
+	}
+	portal->slowpoll = 0;
+	/* Write-to-clear any stale interrupt status bits */
+	bm_isr_disable_write(p, 0xffffffff);
+	portal->irq_sources = 0;
+	bm_isr_enable_write(p, portal->irq_sources);
+	bm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, NULL, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need RCR to be empty before continuing */
+	ret = bm_rcr_get_fill(p);
+	if (ret) {
+		pr_err("Bman RCR unclean\n");
+		goto fail_rcr_empty;
+	}
+	/* Success */
+	portal->config = c;
+
+	bm_isr_disable_write(p, 0);
+	bm_isr_uninhibit(p);
+	return portal;
+fail_rcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->pools);
+fail_pools:
+	bm_mc_finish(p);
+fail_mc:
+	bm_rcr_finish(p);
+	return NULL;
+}
+
+struct bman_portal *
+bman_create_affine_portal(const struct bm_portal_config *c)
+{
+	struct bman_portal *portal = get_affine_portal();
+
+	/* This function is called from a context that is already affine to a
+	 * CPU; in other words, it is non-migratable to other CPUs.
+	 */
+	portal = bman_create_portal(portal, c);
+	if (portal) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		spin_unlock(&affine_mask_lock);
+	}
+	return portal;
+}
+
+static inline
+void bman_destroy_portal(struct bman_portal *bm)
+{
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bm_rcr_cce_update(&bm->p);
+	bm_rcr_cce_update(&bm->p);
+
+	free_irq(pcfg->irq, bm);
+
+	kfree(bm->pools);
+	bm_mc_finish(&bm->p);
+	bm_rcr_finish(&bm->p);
+	bm->config = NULL;
+}
+
+const struct
+bm_portal_config *bman_destroy_affine_portal(void)
+{
+	struct bman_portal *bm = get_affine_portal();
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bman_destroy_portal(bm);
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(pcfg->cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int
+bman_get_portal_index(void)
+{
+	struct bman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+static const u32 zero_thresholds[4] = {0, 0, 0, 0};
+
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params)
+{
+	struct bman_pool *pool = NULL;
+	u32 bpid;
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID) {
+		int ret = bman_alloc_bpid(&bpid);
+
+		if (ret)
+			return NULL;
+	} else {
+		if (params->bpid >= bman_pool_max)
+			return NULL;
+		bpid = params->bpid;
+	}
+	if (params->flags & BMAN_POOL_FLAG_THRESH) {
+		int ret = bm_pool_set(bpid, params->thresholds);
+
+		if (ret)
+			goto err;
+	}
+
+	pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		goto err;
+	pool->params = *params;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	atomic_set(&pool->in_use, 1);
+#endif
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		pool->params.bpid = bpid;
+
+	return pool;
+err:
+	if (params->flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(bpid, zero_thresholds);
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(bpid);
+	kfree(pool);
+
+	return NULL;
+}
+
+void bman_free_pool(struct bman_pool *pool)
+{
+	if (pool->params.flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(pool->params.bpid, zero_thresholds);
+	if (pool->params.flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(pool->params.bpid);
+	kfree(pool);
+}
+
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool)
+{
+	return &pool->params;
+}
+
+static void update_rcr_ci(struct bman_portal *p, int avail)
+{
+	if (avail)
+		bm_rcr_cce_prefetch(&p->p);
+	else
+		bm_rcr_cce_update(&p->p);
+}
+
+#define BMAN_BUF_MASK 0x0000fffffffffffful
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p;
+	struct bm_rcr_entry *r;
+	u32 i = num - 1;
+	u8 avail;
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_NO_RELEASE)
+		return -EINVAL;
+#endif
+
+	p = get_affine_portal();
+	avail = bm_rcr_get_avail(&p->p);
+	if (avail < 2)
+		update_rcr_ci(p, avail);
+	r = bm_rcr_start(&p->p);
+	if (unlikely(!r))
+		return -EBUSY;
+
+	/*
+	 * We can copy all but the first entry wholesale; writing the caller's
+	 * first entry as-is could clobber the verb/valid-bit byte.
+	 */
+	r->bufs[0].opaque =
+		cpu_to_be64(((u64)pool->params.bpid << 48) |
+			    (bufs[0].opaque & BMAN_BUF_MASK));
+	if (i) {
+		for (i = 1; i < num; i++)
+			r->bufs[i].opaque =
+				cpu_to_be64(bufs[i].opaque & BMAN_BUF_MASK);
+	}
+
+	bm_rcr_pvb_commit(&p->p, BM_RCR_VERB_CMD_BPID_SINGLE |
+			  (num & BM_RCR_VERB_BUFCOUNT_MASK));
+
+	return 0;
+}
+
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_command *mcc;
+	struct bm_mc_result *mcr;
+	int ret, i;
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_ONLY_RELEASE)
+		return -EINVAL;
+#endif
+
+	mcc = bm_mc_start(&p->p);
+	mcc->acquire.bpid = pool->params.bpid;
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_ACQUIRE |
+			(num & BM_MCC_VERB_ACQUIRE_BUFCOUNT));
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	ret = mcr->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT;
+	if (bufs) {
+		for (i = 0; i < num; i++)
+			bufs[i].opaque =
+				be64_to_cpu(mcr->acquire.bufs[i].opaque);
+	}
+	if (ret != num)
+		ret = -ENOMEM;
+	return ret;
+}
+
+int bman_query_pools(struct bm_pool_state *state)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_result *mcr;
+
+	bm_mc_start(&p->p);
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_QUERY);
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & BM_MCR_VERB_CMD_MASK) ==
+		    BM_MCR_VERB_CMD_QUERY);
+	*state = mcr->query;
+	state->as.state.state[0] = be32_to_cpu(state->as.state.state[0]);
+	state->as.state.state[1] = be32_to_cpu(state->as.state.state[1]);
+	state->ds.state.state[0] = be32_to_cpu(state->ds.state.state[0]);
+	state->ds.state.state[1] = be32_to_cpu(state->ds.state.state[1]);
+	return 0;
+}
+
+u32 bman_query_free_buffers(struct bman_pool *pool)
+{
+	return bm_pool_free_buffers(pool->params.bpid);
+}
+
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds)
+{
+	u32 bpid;
+
+	bpid = bman_get_params(pool)->bpid;
+
+	return bm_pool_set(bpid, thresholds);
+}
+
+int bman_shutdown_pool(u32 bpid)
+{
+	struct bman_portal *p = get_affine_portal();
+	return bm_shutdown_pool(&p->p, bpid);
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman.h b/drivers/bus/dpaa/base/qbman/bman.h
new file mode 100644
index 0000000..4b088da
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.h
@@ -0,0 +1,550 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_H
+#define __BMAN_H
+
+#include "bman_priv.h"
+
+/* Cache-inhibited register offsets */
+#define BM_REG_RCR_PI_CINH	0x3000
+#define BM_REG_RCR_CI_CINH	0x3100
+#define BM_REG_RCR_ITR		0x3200
+#define BM_REG_CFG		0x3300
+#define BM_REG_SCN(n)		(0x3400 + ((n) << 6))
+#define BM_REG_ISR		0x3e00
+#define BM_REG_IIR              0x3ec0
+
+/* Cache-enabled register offsets */
+#define BM_CL_CR		0x0000
+#define BM_CL_RR0		0x0100
+#define BM_CL_RR1		0x0140
+#define BM_CL_RCR		0x1000
+#define BM_CL_RCR_PI_CENA	0x3000
+#define BM_CL_RCR_CI_CENA	0x3100
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrade performance. Hence the use of the __raw_*() interfaces, which
+ * simply ensure that the compiler treats the portal registers as volatile (ie.
+ * non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __bm_in(bm, o)		be32_to_cpu(__raw_readl((bm)->ci + (o)))
+#define __bm_out(bm, o, val)    __raw_writel(cpu_to_be32(val), \
+					     (bm)->ci + (o))
+#define bm_in(reg)		__bm_in(&portal->addr, BM_REG_##reg)
+#define bm_out(reg, val)	__bm_out(&portal->addr, BM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __bm_cl_touch_ro(bm, o) dcbt_ro((bm)->ce + (o))
+#define __bm_cl_touch_rw(bm, o) dcbt_rw((bm)->ce + (o))
+#define __bm_cl_in(bm, o)	be32_to_cpu(__raw_readl((bm)->ce + (o)))
+#define __bm_cl_out(bm, o, val) \
+	do { \
+		u32 *__tmpclout = (bm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __bm_cl_invalidate(bm, o) dccivac((bm)->ce + (o))
+#define bm_cl_touch_ro(reg) __bm_cl_touch_ro(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_touch_rw(reg) __bm_cl_touch_rw(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_in(reg)	    __bm_cl_in(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_out(reg, val) __bm_cl_out(&portal->addr, BM_CL_##reg##_CENA, val)
+#define bm_cl_invalidate(reg)\
+	__bm_cl_invalidate(&portal->addr, BM_CL_##reg##_CENA)
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
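+/* For example, with ringsize=8, first=6 and last=2 the occupied span wraps:
+ * bm_cyc_diff() returns 8 + 2 - 6 = 4 entries (indices 6, 7, 0, 1).
+ */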
+static inline u8 bm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
+
+/* Portal modes.
+ *   Enum types:
+ *     pmode == production mode
+ *     cmode == consumption mode
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate:
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ */
+enum bm_rcr_pmode {		/* matches BCSP_CFG::RPM */
+	bm_rcr_pci = 0,		/* PI index, cache-inhibited */
+	bm_rcr_pce = 1,		/* PI index, cache-enabled */
+	bm_rcr_pvb = 2		/* valid-bit */
+};
+
+enum bm_rcr_cmode {		/* s/w-only */
+	bm_rcr_cci,		/* CI index, cache-inhibited */
+	bm_rcr_cce		/* CI index, cache-enabled */
+};
+
+/* --- Portal structures --- */
+
+#define BM_RCR_SIZE		8
+
+struct bm_rcr {
+	struct bm_rcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	u32 busy;
+	enum bm_rcr_pmode pmode;
+	enum bm_rcr_cmode cmode;
+#endif
+};
+
+struct bm_mc {
+	struct bm_mc_command *cr;
+	struct bm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	enum {
+		/* Can only be _mc_start()ed */
+		mc_idle,
+		/* Can only be _mc_commit()ed or _mc_abort()ed */
+		mc_user,
+		/* Can only be _mc_retry()ed */
+		mc_hw
+	} state;
+#endif
+};
+
+struct bm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct bm_portal {
+	struct bm_addr addr;
+	struct bm_rcr rcr;
+	struct bm_mc mc;
+	struct bm_portal_config config;
+} ____cacheline_aligned;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define RCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(BM_RCR_SIZE << 6)))
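+/* With BM_RCR_SIZE == 8 and 64-byte entries, the ring spans 8 << 6 == 0x200
+ * bytes; clearing that bit wraps an incremented cursor back to the ring base
+ * without a branch.
+ */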
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 RCR_PTR2IDX(struct bm_rcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (BM_RCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void RCR_INC(struct bm_rcr *rcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates
+	 * fast code with essentially no branching overheads. We increment to
+	 * the next RCR pointer and handle overflow and 'vbit'.
+	 */
+	struct bm_rcr_entry *partial = rcr->cursor + 1;
+
+	rcr->cursor = RCR_CARRYCLEAR(partial);
+	if (partial != rcr->cursor)
+		rcr->vbit ^= BM_RCR_VERB_VBIT;
+}
+
+static inline int bm_rcr_init(struct bm_portal *portal, enum bm_rcr_pmode pmode,
+			      __maybe_unused enum bm_rcr_cmode cmode)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct bm_rcr *rcr = &portal->rcr;
+	u32 cfg;
+	u8 pi;
+
+	rcr->ring = portal->addr.ce + BM_CL_RCR;
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	rcr->cursor = rcr->ring + pi;
+	rcr->vbit = (bm_in(RCR_PI_CINH) & BM_RCR_SIZE) ?  BM_RCR_VERB_VBIT : 0;
+	rcr->available = BM_RCR_SIZE - 1
+		- bm_cyc_diff(BM_RCR_SIZE, rcr->ci, pi);
+	rcr->ithresh = bm_in(RCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 0;
+	rcr->pmode = pmode;
+	rcr->cmode = cmode;
+#endif
+	cfg = (bm_in(CFG) & 0xffffffe0) | (pmode & 0x3); /* BCSP_CFG::RPM */
+	bm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void bm_rcr_finish(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	u8 ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	DPAA_ASSERT(!rcr->busy);
+	if (pi != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("losing uncommitted RCR entries\n");
+	if (ci != rcr->ci)
+		pr_crit("missing existing RCR completions\n");
+	if (rcr->ci != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("RCR destroyed unquiesced\n");
+}
+
+static inline struct bm_rcr_entry *bm_rcr_start(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(!rcr->busy);
+	if (!rcr->available)
+		return NULL;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 1;
+#endif
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 0;
+#endif
+}
+
+static inline struct bm_rcr_entry *bm_rcr_pend_and_next(
+					struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode != bm_rcr_pvb);
+	if (rcr->available == 1)
+		return NULL;
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcr->cursor);
+	RCR_INC(rcr);
+	rcr->available--;
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_pci_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pci);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	hwsync();
+	bm_out(RCR_PI_CINH, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	bm_cl_invalidate(RCR_PI);
+	bm_cl_touch_rw(RCR_PI);
+}
+
+static inline void bm_rcr_pce_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	lwsync();
+	bm_cl_out(RCR_PI, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pvb_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	struct bm_rcr_entry *rcursor;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pvb);
+	lwsync();
+	rcursor = rcr->cursor;
+	rcursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcursor);
+	RCR_INC(rcr);
+	rcr->available--;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 0;
+#endif
+}
+
+static inline u8 bm_rcr_cci_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cci);
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline void bm_rcr_cce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	bm_cl_touch_ro(RCR_CI);
+}
+
+static inline u8 bm_rcr_cce_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	rcr->ci = bm_cl_in(RCR_CI) & (BM_RCR_SIZE - 1);
+	bm_cl_invalidate(RCR_CI);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline u8 bm_rcr_get_ithresh(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->ithresh;
+}
+
+static inline void bm_rcr_set_ithresh(struct bm_portal *portal, u8 ithresh)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	rcr->ithresh = ithresh;
+	bm_out(RCR_ITR, ithresh);
+}
+
+static inline u8 bm_rcr_get_avail(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->available;
+}
+
+static inline u8 bm_rcr_get_fill(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return BM_RCR_SIZE - 1 - rcr->available;
+}
+
+/* --- Management command API --- */
+
+static inline int bm_mc_init(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + BM_CL_CR;
+	mc->rr = portal->addr.ce + BM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			BM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? BM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = mc_idle;
+#endif
+	return 0;
+}
+
+static inline void bm_mc_finish(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (mc->state != mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct bm_mc_command *bm_mc_start(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void bm_mc_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_user);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = mc_idle;
+#endif
+}
+
+static inline void bm_mc_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = mc_hw;
+#endif
+}
+
+static inline struct bm_mc_result *bm_mc_result(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= BM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = mc_idle;
+#endif
+	return rr;
+}
+
+#define SCN_REG(bpid) BM_REG_SCN((bpid) / 32)
+#define SCN_BIT(bpid) (0x80000000 >> ((bpid) & 31))
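+/* For example, bpid 40 maps to SCN_REG(40) == BM_REG_SCN(1) and
+ * SCN_BIT(40) == 0x80000000 >> 8 == 0x00800000.
+ */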
+static inline void bm_isr_bscn_mask(struct bm_portal *portal, u8 bpid,
+				    int enable)
+{
+	u32 val;
+
+	DPAA_ASSERT(bpid < bman_pool_max);
+	/* REG_SCN for bpid=0..31, REG_SCN+4 for bpid=32..63 */
+	val = __bm_in(&portal->addr, SCN_REG(bpid));
+	if (enable)
+		val |= SCN_BIT(bpid);
+	else
+		val &= ~SCN_BIT(bpid);
+	__bm_out(&portal->addr, SCN_REG(bpid), val);
+}
+
+static inline u32 __bm_isr_read(struct bm_portal *portal, enum bm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 6));
+#else
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __bm_isr_write(struct bm_portal *portal, enum bm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 6), val);
+#else
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 2), val);
+#endif
+}
+
+/* Buffer Pool Cleanup */
+static inline int bm_shutdown_pool(struct bm_portal *p, u32 bpid)
+{
+	struct bm_mc_command *bm_cmd;
+	struct bm_mc_result *bm_res;
+
+	int aq_count = 0;
+	bool stop = false;
+
+	while (!stop) {
+		/* Acquire buffers until empty */
+		bm_cmd = bm_mc_start(p);
+		bm_cmd->acquire.bpid = bpid;
+		bm_mc_commit(p, BM_MCC_VERB_CMD_ACQUIRE |  1);
+		while (!(bm_res = bm_mc_result(p)))
+			cpu_relax();
+		if (!(bm_res->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT)) {
+			/* Pool is empty */
+			stop = true;
+		} else {
+			++aq_count;
+		}
+	}
+	return 0;
+}
+
+#endif /* __BMAN_H */
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
index fb3c50e..5c13a80 100644
--- a/drivers/bus/dpaa/base/qbman/bman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -65,6 +65,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct bman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -111,6 +112,14 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 	/* Use the IRQ FD as a unique IRQ number */
 	pcfg.irq = fd;
 
+	portal = bman_create_affine_portal(&pcfg);
+	if (!portal) {
+		pr_err("Bman portal initialisation failed (%d)\n",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	/* Set the IRQ number */
 	irq_map.type = dpaa_portal_bman;
 	irq_map.portal_cinh = map.addr.cinh;
@@ -120,10 +129,13 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 
 static int fsl_bman_portal_finish(void)
 {
+	__maybe_unused const struct bm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = bman_destroy_affine_portal();
+	DPAA_BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
index 690576a..35dba7f 100644
--- a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -41,6 +41,22 @@
 #include "dpaa_sys.h"
 #include <process.h>
 #include <fsl_qman.h>
+#include <fsl_bman.h>
+
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_bpid, result, count, align, partial);
+}
+
+void bman_release_bpid_range(u32 bpid, u32 count)
+{
+	process_release(dpaa_id_bpid, bpid, count);
+}
+
+int bman_reserve_bpid_range(u32 bpid, u32 count)
+{
+	return process_reserve(dpaa_id_bpid, bpid, count);
+}
 
 int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
 {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 15/40] bus/dpaa: add fman flow control threshold setting
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (13 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
                           ` (25 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman_hw.c | 28 ++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_fman.h  |  7 +++++++
 2 files changed, 35 insertions(+)

diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index a7ca661..077c17c 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -37,6 +37,7 @@
  */
 #include <fsl_fman.h>
 #include <fsl_fman_crc64.h>
+#include <fsl_bman.h>
 
 /* Instantiate the global variable that the inline CRC64 implementation (in
  * <fsl_fman.h>) depends on.
@@ -393,6 +394,33 @@ fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
 }
 
 int
+fman_if_get_fc_threshold(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	return in_be32(fmbm_mpd);
+}
+
+int
+fman_if_set_fc_threshold(struct fman_if *fm_if, u32 high_water,
+			 u32 low_water, u32 bpid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	out_be32(fmbm_mpd, FMAN_ENABLE_BPOOL_DEPLETION);
+	return bm_pool_set_hw_threshold(bpid, low_water, high_water);
+}
+
+int
 fman_if_get_fc_quanta(struct fman_if *fm_if)
 {
 	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index ac38082..95aee67 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -112,6 +112,13 @@ void fman_if_loopback_disable(struct fman_if *p);
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
+/* Get Flow Control threshold parameters on specific interface */
+int fman_if_get_fc_threshold(struct fman_if *fm_if);
+
+/* Enable and Set Flow Control threshold parameters on specific interface */
+int fman_if_set_fc_threshold(struct fman_if *fm_if,
+			u32 high_water, u32 low_water, u32 bpid);
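+/* Example (illustrative values): enable depletion-based flow control on an
+ * interface and program the BMan pool thresholds (high = 0x10, low = 0x8):
+ *   fman_if_set_fc_threshold(fm_if, 0x10, 0x8, bpid);
+ */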
+
 /* Get Flow Control pause quanta on specific interface */
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (14 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
                           ` (24 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Now that the QBMAN (QMAN, BMAN) and FMAN drivers are available, this patch
integrates them with the DPAA Bus driver, which uses them for scanning
devices and for calling the probe callbacks registered by PMDs.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c               | 248 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  47 ++++++
 drivers/bus/dpaa/rte_dpaa_bus.h           |  25 +++
 3 files changed, 320 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index cc343b3..8017df3 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -63,9 +63,21 @@
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 int dpaa_logtype_bus;
 
 struct rte_dpaa_bus rte_dpaa_bus;
+struct netcfg_info *dpaa_netcfg;
+
+/* define a variable to hold the portal_key, once created.*/
+pthread_key_t dpaa_portal_key;
+
+RTE_DEFINE_PER_LCORE(bool, _dpaa_io);
 
 static inline void
 dpaa_add_to_device_list(struct rte_dpaa_device *dev)
@@ -79,11 +91,247 @@ dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
 	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
 }
 
+static void dpaa_clean_device_list(void);
+
+static int
+dpaa_create_device_list(void)
+{
+	int i;
+	int ret;
+	struct rte_dpaa_device *dev;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+
+	/* Creating Ethernet Devices */
+	for (i = 0; i < dpaa_netcfg->num_ethports; i++) {
+		dev = calloc(1, sizeof(struct rte_dpaa_device));
+		if (!dev) {
+			DPAA_BUS_LOG(ERR, "Failed to allocate ETH devices");
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+
+		cfg = &dpaa_netcfg->port_cfg[i];
+		fman_intf = cfg->fman_if;
+
+		/* Device identifiers */
+		dev->id.fman_id = fman_intf->fman_idx + 1;
+		dev->id.mac_id = fman_intf->mac_idx;
+		dev->device_type = FSL_DPAA_ETH;
+		dev->id.dev_id = i;
+
+		/* Create device name */
+		memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		sprintf(dev->name, "fm%d-mac%d", (fman_intf->fman_idx + 1),
+			fman_intf->mac_idx);
+		DPAA_BUS_LOG(DEBUG, "Device added: %s", dev->name);
+		dev->device.name = dev->name;
+
+		dpaa_add_to_device_list(dev);
+	}
+
+	rte_dpaa_bus.device_count = i;
+
+	return 0;
+
+cleanup:
+	dpaa_clean_device_list();
+	return ret;
+}
+
+static void
+dpaa_clean_device_list(void)
+{
+	struct rte_dpaa_device *dev = NULL;
+	struct rte_dpaa_device *tdev = NULL;
+
+	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+		free(dev);
+		dev = NULL;
+	}
+}
+
+/** XXX move this function into a separate file */
+static int
+_dpaa_portal_init(void *arg)
+{
+	cpu_set_t cpuset;
+	pthread_t id;
+	uint32_t cpu = rte_lcore_id();
+	int ret;
+	struct dpaa_portal *dpaa_io_portal;
+
+	BUS_INIT_FUNC_TRACE();
+
+	if ((uint64_t)arg == 1 || cpu == LCORE_ID_ANY)
+		cpu = rte_get_master_lcore();
+	else if (cpu >= RTE_MAX_LCORE)
+		/* the core id is not supported */
+		return -1;
+
+	/* Set CPU affinity for this thread */
+	CPU_ZERO(&cpuset);
+	CPU_SET(cpu, &cpuset);
+	id = pthread_self();
+	ret = pthread_setaffinity_np(id, sizeof(cpu_set_t), &cpuset);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "pthread_setaffinity_np failed on "
+			"core %d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	/* Initialise bman thread portals */
+	ret = bman_thread_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "bman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "BMAN thread initialized");
+
+	/* Initialise qman thread portals */
+	ret = qman_thread_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "qman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		bman_thread_finish();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
+
+	dpaa_io_portal = rte_malloc(NULL, sizeof(struct dpaa_portal),
+				    RTE_CACHE_LINE_SIZE);
+	if (!dpaa_io_portal) {
+		DPAA_BUS_LOG(ERR, "Unable to allocate memory");
+		bman_thread_finish();
+		qman_thread_finish();
+		return -ENOMEM;
+	}
+
+	dpaa_io_portal->qman_idx = qman_get_portal_index();
+	dpaa_io_portal->bman_idx = bman_get_portal_index();
+	dpaa_io_portal->tid = syscall(SYS_gettid);
+
+	ret = pthread_setspecific(dpaa_portal_key, (void *)dpaa_io_portal);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "pthread_setspecific failed on "
+			    "core %d with ret: %d", cpu, ret);
+		dpaa_portal_finish(NULL);
+
+		return ret;
+	}
+
+	RTE_PER_LCORE(_dpaa_io) = true;
+
+	DPAA_BUS_LOG(DEBUG, "DPAA portal initialized");
+
+	return 0;
+}
+
+/*
+ * rte_dpaa_portal_init - Wrapper over _dpaa_portal_init with a per-thread
+ * check, so that a portal is initialised at most once per thread. Calls on
+ * an already-initialised thread are no-ops.
+ */
+int
+rte_dpaa_portal_init(void *arg)
+{
+	if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
+		return _dpaa_portal_init(arg);
+
+	return 0;
+}
+
+void
+dpaa_portal_finish(void *arg)
+{
+	struct dpaa_portal *dpaa_io_portal = (struct dpaa_portal *)arg;
+
+	if (!dpaa_io_portal) {
+		DPAA_BUS_LOG(DEBUG, "Portal already cleaned");
+		return;
+	}
+
+	bman_thread_finish();
+	qman_thread_finish();
+
+	pthread_setspecific(dpaa_portal_key, NULL);
+
+	rte_free(dpaa_io_portal);
+	dpaa_io_portal = NULL;
+
+	RTE_PER_LCORE(_dpaa_io) = false;
+}
+
+#define DPAA_DEV_PATH1 "/sys/devices/platform/soc/soc:fsl,dpaa"
+#define DPAA_DEV_PATH2 "/sys/devices/platform/fsl,dpaa"
+
 static int
 rte_dpaa_bus_scan(void)
 {
+	int ret;
+
 	BUS_INIT_FUNC_TRACE();
 
+	if ((access(DPAA_DEV_PATH1, F_OK) != 0) &&
+	    (access(DPAA_DEV_PATH2, F_OK) != 0)) {
+		RTE_LOG(DEBUG, EAL, "DPAA Bus not present. Skipping.\n");
+		return 0;
+	}
+
+	/* Load the device-tree driver */
+	ret = of_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "of_init failed with ret: %d", ret);
+		return -1;
+	}
+
+	/* Get the interface configurations from device-tree */
+	dpaa_netcfg = netcfg_acquire();
+	if (!dpaa_netcfg) {
+		DPAA_BUS_LOG(ERR, "netcfg_acquire failed");
+		return -EINVAL;
+	}
+
+	RTE_LOG(NOTICE, EAL, "DPAA Bus Detected\n");
+
+	if (!dpaa_netcfg->num_ethports) {
+		DPAA_BUS_LOG(INFO, "no network interfaces available");
+		/* This is not an error */
+		return 0;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "Bus: Address of netcfg=%p, Ethports=%d",
+		     dpaa_netcfg, dpaa_netcfg->num_ethports);
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dump_netcfg(dpaa_netcfg);
+#endif
+
+	DPAA_BUS_LOG(DEBUG, "Number of devices = %d",
+		     dpaa_netcfg->num_ethports);
+	ret = dpaa_create_device_list();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to create device list. (%d)", ret);
+		return ret;
+	}
+
+	/* create the key, supplying a function that'll be invoked
+	 * when a portal affined thread will be deleted.
+	 */
+	ret = pthread_key_create(&dpaa_portal_key, dpaa_portal_finish);
+	if (ret) {
+		DPAA_BUS_LOG(DEBUG, "Unable to create pthread key. (%d)", ret);
+		dpaa_clean_device_list();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "dpaa_portal_key=%u, ret=%d",
+		     (unsigned int)dpaa_portal_key, ret);
+
 	return 0;
 }
 
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 9f41c77..853bc47 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,8 +1,55 @@
 DPDK_17.11 {
 	global:
 
+	bman_acquire;
+	bman_free_pool;
+	bman_get_params;
+	bman_global_init;
+	bman_new_pool;
+	bman_query_free_buffers;
+	bman_release;
+	dpaa_netcfg;
+	fman_ccsr_map_fd;
+	fman_dealloc_bufs_mask_hi;
+	fman_dealloc_bufs_mask_lo;
+	fman_if_add_mac_addr;
+	fman_if_clear_mac_addr;
+	fman_if_disable_rx;
+	fman_if_enable_rx;
+	fman_if_discard_rx_errors;
+	fman_if_get_fc_threshold;
+	fman_if_get_fc_quanta;
+	fman_if_get_fdoff;
+	fman_if_loopback_disable;
+	fman_if_loopback_enable;
+	fman_if_promiscuous_disable;
+	fman_if_promiscuous_enable;
+	fman_if_reset_mcast_filter_table;
+	fman_if_set_bp;
+	fman_if_set_fc_threshold;
+	fman_if_set_fc_quanta;
+	fman_if_set_fdoff;
+	fman_if_set_ic_params;
+	fman_if_set_maxfrm;
+	fman_if_set_mcast_filter_table;
+	fman_if_stats_get;
+	fman_if_stats_get_all;
+	fman_if_stats_reset;
+	fman_ip_rev;
+	netcfg_acquire;
+	netcfg_release;
+	qman_create_fq;
+	qman_dequeue;
+	qman_dqrr_consume;
+	qman_enqueue_multi;
+	qman_global_init;
+	qman_init_fq;
+	qman_set_vdq;
+	qman_reserve_fqid_range;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
+	rte_dpaa_mem_ptov;
+	rte_dpaa_portal_init;
 
 	local: *;
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 789882e..eafc944 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -35,6 +35,12 @@
 #include <rte_bus.h>
 #include <rte_mempool.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 #define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
 
 #define DEV_TO_DPAA_DEVICE(ptr)	\
@@ -47,6 +53,9 @@ struct rte_dpaa_driver;
 TAILQ_HEAD(rte_dpaa_device_list, rte_dpaa_device);
 TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
 
+/* Configuration variables exported from DPAA bus */
+extern struct netcfg_info *dpaa_netcfg;
+
 enum rte_dpaa_type {
 	FSL_DPAA_ETH = 1,
 	FSL_DPAA_CRYPTO,
@@ -131,6 +140,22 @@ void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
  */
 void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
 
+/**
+ * Initialize a DPAA portal
+ *
+ * @param arg
+ *	Per thread ID
+ *
+ * @return
+ *	0 in case of success, error otherwise
+ */
+int rte_dpaa_portal_init(void *arg);
+
+/**
+ * Cleanup a DPAA Portal
+ */
+void dpaa_portal_finish(void *arg);
+
 /** Helper for DPAA device registration from driver (eth, crypto) instance */
 #define RTE_PMD_REGISTER_DPAA(nm, dpaa_drv) \
 RTE_INIT(dpaainitfn_ ##nm); \
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 17/40] doc: add NXP DPAA PMD documentation
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (15 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
                           ` (23 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                       |   2 +
 doc/guides/nics/dpaa.rst          | 377 ++++++++++++++++++++++++++++++++++++++
 doc/guides/nics/features/dpaa.ini |   8 +
 doc/guides/nics/index.rst         |   1 +
 4 files changed, 388 insertions(+)
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index c566962..dad876f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -412,6 +412,8 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: doc/guides/nics/dpaa.rst
+F: doc/guides/nics/features/dpaa.ini
 
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
new file mode 100644
index 0000000..7d054d7
--- /dev/null
+++ b/doc/guides/nics/dpaa.rst
@@ -0,0 +1,377 @@
+..  BSD LICENSE
+    Copyright 2017 NXP.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+DPAA Poll Mode Driver
+=====================
+
+The DPAA NIC PMD (**librte_pmd_dpaa**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP DPAA** SoC family.
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+NXP DPAA (Data Path Acceleration Architecture - Gen 1)
+------------------------------------------------------
+
+This section provides an overview of the NXP DPAA architecture
+and how it is integrated into the DPDK.
+
+Contents summary
+
+- DPAA overview
+- DPAA driver architecture overview
+
+.. _dpaa_overview:
+
+DPAA Overview
+~~~~~~~~~~~~~
+
+Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.
+
+The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
+components on specific QorIQ series multicore processors. This architecture
+provides the infrastructure to support simplified sharing of networking
+interfaces and accelerators by multiple CPU cores, and the accelerators
+themselves.
+
+DPAA includes:
+
+- Cores
+- Network and packet I/O
+- Hardware offload accelerators
+- Infrastructure required to facilitate flow of packets between the components above
+
+Infrastructure components are:
+
+- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
+  It allows CPUs and other accelerators connected to the SoC datapath to
+  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
+  data exchange among CPUs and datapath accelerators.
+- The Buffer Manager (BMan) is a hardware buffer pool management block that
+  allows software and accelerators on the datapath to acquire and release
+  buffers in order to build frames.
+
+Hardware accelerators are:
+
+- SEC - Cryptographic accelerator
+- PME - Pattern matching engine
+
+The Network and packet I/O component:
+
+- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
+  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
+  distribution and policing. Each frame can be parsed and classified, and
+  the results may be attached to the frame. This metadata can be used to
+  select the particular QMan queue to which the packet is forwarded.
+
+
+DPAA DPDK - Poll Mode Driver Overview
+-------------------------------------
+
+This section provides an overview of the drivers for DPAA:
+
+* Bus driver and associated "DPAA infrastructure" drivers
+* Functional object drivers (such as Ethernet).
+
+A brief description of each driver is provided in the layout below, as well
+as in the following sections.
+
+.. code-block:: console
+
+                                       +------------+
+                                       | DPDK DPAA  |
+                                       |    PMD     |
+                                       +-----+------+
+                                             |
+                                       +-----+------+       +---------------+
+                                       :  Ethernet  :.......| DPDK DPAA     |
+                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
+                   .                   +---+---+----+       |  (BMAN)       |
+                  .                        ^   |            +-----+---------+
+                 .                         |   |<enqueue,         .
+                .                          |   | dequeue>         .
+               .                           |   |                  .
+              .                        +---+---V----+             .
+             .      . . . . . . . . . .: Portal drv :             .
+            .      .                   :            :             .
+           .      .                    +-----+------+             .
+          .      .                     :   QMAN     :             .
+         .      .                      :  Driver    :             .
+    +----+------+-------+              +-----+------+             .
+    |   DPDK DPAA Bus   |                    |                    .
+    |   driver          |....................|.....................
+    |   /bus/dpaa       |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|========================
+                                            PHY
+    =========================================|========================
+
+In the above representation, solid lines represent components which interface
+with DPDK RTE Framework and dotted lines represent DPAA internal components.
+
+DPAA Bus driver
+~~~~~~~~~~~~~~~
+
+The DPAA bus driver is a ``rte_bus`` driver which scans the platform for DPAA
+devices, much like a platform bus. Key functions include:
+
+- Scanning and parsing the various objects and adding them to their respective
+  device list.
+- Performing probe for available drivers against each scanned device.
+- Creating the necessary Ethernet instances before passing control to the PMD.
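+
+As an illustration (a minimal sketch, not the complete PMD), a driver hooks
+itself onto the DPAA bus using the ``RTE_PMD_REGISTER_DPAA`` helper from
+``rte_dpaa_bus.h``; the probe and remove callback names below are
+hypothetical placeholders:
+
+.. code-block:: c
+
+   static struct rte_dpaa_driver rte_dpaa_pmd = {
+           .drv_type = FSL_DPAA_ETH,
+           .probe = dpaa_dev_probe,    /* hypothetical probe callback */
+           .remove = dpaa_dev_remove,  /* hypothetical remove callback */
+   };
+
+   RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);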
+
+DPAA NIC Driver (PMD)
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
+between the RTE framework and DPAA internal components/drivers.
+
+- Once devices have been identified by DPAA Bus, each device is associated
+  with the PMD
+- The PMD is responsible for implementing the necessary glue layer between
+  RTE APIs and the lower level QMan and FMan blocks.
+  The Ethernet driver is bound to a FMAN port and implements the interfaces
+  needed to connect the DPAA network interface to the network stack.
+  Each FMAN Port corresponds to a DPDK network interface.
+
+
+Features
+^^^^^^^^
+
+Features of the DPAA PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+
+DPAA Mempool Driver
+~~~~~~~~~~~~~~~~~~~
+
+DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
+Manager.
+
+- Using the standard mempool operations RTE API, the mempool driver interfaces
+  with RTE to service each mempool creation, deletion, buffer allocation and
+  deallocation request.
+- Each FMAN instance has a BMan pool attached to it during initialization.
+  Each Tx frame can be automatically released by hardware, if allocated from
+  this pool.
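+
+A minimal usage sketch (pool name and sizes are illustrative): with the
+default mempool ops set to ``dpaa`` (see
+``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` below), an ordinary mbuf pool
+creation is transparently serviced by BMan:
+
+.. code-block:: c
+
+   struct rte_mempool *mp;
+
+   mp = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
+                                RTE_MBUF_DEFAULT_BUF_SIZE,
+                                rte_socket_id());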
+
+
+Supported DPAA SoCs
+-------------------
+
+- LS1043A/LS1023A
+- LS1046A/LS1026A
+
+Prerequisites
+-------------
+
+There are four main prerequisites for executing the DPAA PMD on a DPAA
+compatible board:
+
+1. **ARM 64 Tool Chain**
+
+   For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
+
+2. **Linux Kernel**
+
+   It can be obtained from `NXP's Github hosting <https://github.com/qoriq-open-source/linux>`_.
+
+3. **Root file system**
+
+   Any *aarch64* supporting root filesystem can be used. For example,
+   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
+   from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+4. **FMC Tool**
+
+   Before any DPDK application can be executed, the Frame Manager Configuration
+   Tool (FMC) needs to be executed to set the configuration of the queues. This
+   includes the queue state, RSS and other policies.
+   This tool can be obtained from `NXP (Freescale) Public Git Repository <http://git.freescale.com/git/cgit.cgi/ppc/sdk/fmc.git>`_.
+   This tool needs configuration files which are available in the
+   :ref:`DPDK Extra Scripts <extra_scripts>`, described below.
+
+Alternatively, the DPAA PMD can also be executed using images provided as
+part of the SDK from NXP. The SDK includes all of the above prerequisites
+necessary to bring up a DPAA board.
+
+The following dependencies are not part of DPDK and must be installed
+separately:
+
+- **NXP Linux SDK**
+
+  NXP Linux software development kit (SDK) includes support for family
+  of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, kernel and board specific modules.
+
+  SDK and related information can be obtained from:  `NXP QorIQ SDK  <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+
+.. _extra_scripts:
+
+- **DPDK Extra Scripts**
+
+  DPAA based resources can be configured easily with the help of ready scripts
+  as provided in the DPDK Extra repository.
+
+  `DPDK Extras Scripts <https://github.com/qoriq-open-source/dpdk-extras>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- Supported architectures: **arm64 LE**.
+
+Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
+to set up the basic DPDK environment.
+
+.. note::
+
+   Some part of dpaa bus code (qbman and fman - library) routines are
+   dual licensed (BSD & GPLv2).
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``n``)
+
+  Toggles compilation of the ``librte_bus_dpaa`` driver. It is enabled by
+  default only in the ``defconfig_arm64-dpaa-*`` configs.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``n``)
+
+  Toggles compilation of the ``librte_pmd_dpaa`` driver. It is enabled by
+  default only in the ``defconfig_arm64-dpaa-*`` configs.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)
+
+  Toggles display of bus configurations and enables a debugging queue
+  to fetch error (Rx/Tx) packets to the driver. By default, packets with
+  errors (like a wrong checksum) are dropped by the hardware.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_HWDEBUG`` (default ``n``)
+
+  Enables debugging of the Queue and Buffer Manager layer which interacts
+  with the DPAA hardware.
+
+- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (default ``dpaa``)
+
+  This is not a DPAA-specific configuration - it is a generic RTE config.
+  For optimal performance and hardware utilization, it is expected that the
+  DPAA mempool driver is used for mempools. For that, this option needs to
+  be set to ``dpaa``.
+
+Environment Variables
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA drivers use the following environment variables to configure their
+state during application initialization:
+
+- ``DPAA_NUM_RX_QUEUES`` (default 1)
+
+  This defines the number of Rx queues configured for an application, per
+  port (for example, ``export DPAA_NUM_RX_QUEUES=4``). The hardware
+  distributes incoming packets across this many queues. If the application
+  uses fewer queues than configured here, packets may be lost because of
+  this distribution.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+#. Running testpmd:
+
+   Follow instructions available in the document
+   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+   to run testpmd.
+
+   Example output:
+
+   .. code-block:: console
+
+      ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
+        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
+
+      .....
+      EAL: Registered [pci] bus.
+      EAL: Registered [dpaa] bus.
+      EAL: Detected 4 lcore(s)
+      .....
+      EAL: dpaa: Bus scan completed
+      .....
+      Configuring Port 0 (socket 0)
+      Port 0: 00:00:00:00:00:01
+      Configuring Port 1 (socket 0)
+      Port 1: 00:00:00:00:00:02
+      .....
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Port 1 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+Limitations
+-----------
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+DPAA drivers for DPDK can only work on NXP SoCs as listed in the
+``Supported DPAA SoCs``.
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA SoC family supports a maximum frame size of 10240 bytes (jumbo
+frames). This value is fixed and cannot be changed. So, even when the
+``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
+lower than 10240, frames up to 10240 bytes can still reach the host interface.
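+
+For example (a sketch; the value below is illustrative), the configured
+limit is a request, not a hardware-enforced cap:
+
+.. code-block:: c
+
+   struct rte_eth_conf port_conf = {
+           .rxmode = {
+                   /* frames larger than this can still arrive, up to 10240B */
+                   .max_rx_pkt_len = 1518,
+           },
+   };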
+
+Multiprocess Support
+~~~~~~~~~~~~~~~~~~~~
+
+The current version of the DPAA driver doesn't support multi-process
+applications where I/O is performed using secondary processes. This feature
+will be implemented in subsequent versions.
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
new file mode 100644
index 0000000..9e8befc
--- /dev/null
+++ b/doc/guides/nics/features/dpaa.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'dpaa' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+ARMv8                = Y
+Usage doc            = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 36f4f3f..4115141 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -43,6 +43,7 @@ Network Interface Controller Drivers
     bnx2x
     bnxt
     cxgbe
+    dpaa
     dpaa2
     e1000em
     ena
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 18/40] bus/dpaa: add DPAA mempool logging macros
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (16 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 19/40] mempool/dpaa: support NXP DPAA Mempool Shreyansh Jain
                           ` (22 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c               |  5 +++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  1 +
 drivers/bus/dpaa/rte_dpaa_logs.h          | 20 ++++++++++++++++++++
 3 files changed, 26 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 8017df3..dc2b3ad 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -70,6 +70,7 @@
 #include <netcfg.h>
 
 int dpaa_logtype_bus;
+int dpaa_logtype_mempool;
 
 struct rte_dpaa_bus rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
@@ -452,4 +453,8 @@ dpaa_init_log(void)
 	dpaa_logtype_bus = rte_log_register("bus.dpaa");
 	if (dpaa_logtype_bus >= 0)
 		rte_log_set_level(dpaa_logtype_bus, RTE_LOG_NOTICE);
+
+	dpaa_logtype_mempool = rte_log_register("mempool.dpaa");
+	if (dpaa_logtype_mempool >= 0)
+		rte_log_set_level(dpaa_logtype_mempool, RTE_LOG_NOTICE);
 }
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 853bc47..a2394b8 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -8,6 +8,7 @@ DPDK_17.11 {
 	bman_new_pool;
 	bman_query_free_buffers;
 	bman_release;
+	dpaa_logtype_mempool;
 	dpaa_netcfg;
 	fman_ccsr_map_fd;
 	fman_dealloc_bufs_mask_hi;
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index cc10937..5335fd8 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -36,6 +36,7 @@
 #include <rte_log.h>
 
 extern int dpaa_logtype_bus;
+extern int dpaa_logtype_mempool;
 
 #define DPAA_BUS_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -62,4 +63,23 @@ extern int dpaa_logtype_bus;
 #define DPAA_BUS_WARN(fmt, args...) \
 	DPAA_BUS_LOG(WARNING, fmt, ## args)
 
+/* Mempool related logs */
+
+#define DPAA_MEMPOOL_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_mempool, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define MEMPOOL_INIT_FUNC_TRACE() DPAA_MEMPOOL_LOG(DEBUG, " >>")
+
+#define DPAA_MEMPOOL_DPDEBUG(fmt, args...) \
+	RTE_LOG_DP(DEBUG, PMD, fmt, ## args)
+#define DPAA_MEMPOOL_DEBUG(fmt, args...) \
+	DPAA_MEMPOOL_LOG(DEBUG, fmt, ## args)
+#define DPAA_MEMPOOL_ERR(fmt, args...) \
+	DPAA_MEMPOOL_LOG(ERR, fmt, ## args)
+#define DPAA_MEMPOOL_INFO(fmt, args...) \
+	DPAA_MEMPOOL_LOG(INFO, fmt, ## args)
+#define DPAA_MEMPOOL_WARN(fmt, args...) \
+	DPAA_MEMPOOL_LOG(WARNING, fmt, ## args)
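+
+/* Example usage (as in the mempool driver):
+ *   DPAA_MEMPOOL_ERR("bman_new_pool() failed");
+ * logs on the "mempool.dpaa" logtype, prefixed with the calling function's
+ * name.
+ */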
+
 #endif /* _DPAA_LOGS_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 19/40] mempool/dpaa: support NXP DPAA Mempool
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (17 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 20/40] config: enable compilation of DPAA Mempool driver Shreyansh Jain
                           ` (21 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This Mempool driver works with the DPAA BMan hardware block. This block
manages data buffers in memory and provides an efficient interface to other
hardware and software components for buffer requests.

This patch adds support for BMan. Compilation will be enabled in
subsequent patches.

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                                       |   1 +
 drivers/mempool/Makefile                          |   2 +
 drivers/mempool/dpaa/Makefile                     |  58 +++++
 drivers/mempool/dpaa/dpaa_mempool.c               | 286 ++++++++++++++++++++++
 drivers/mempool/dpaa/dpaa_mempool.h               |  77 ++++++
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |   8 +
 6 files changed, 432 insertions(+)
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index dad876f..022715f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -412,6 +412,7 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: drivers/mempool/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index efd55f2..bfc5f00 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -32,6 +32,8 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mempool librte_ring
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
diff --git a/drivers/mempool/dpaa/Makefile b/drivers/mempool/dpaa/Makefile
new file mode 100644
index 0000000..25312a0
--- /dev/null
+++ b/drivers/mempool/dpaa/Makefile
@@ -0,0 +1,58 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_dpaa.a
+
+CFLAGS := -I$(SRCDIR) $(CFLAGS)
+CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -D _GNU_SOURCE
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
+CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
+
+# versioning export map
+EXPORT_MAP := rte_mempool_dpaa_version.map
+
+# Library version
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa_mempool.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
new file mode 100644
index 0000000..921c36b
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -0,0 +1,286 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <dpaa_mempool.h>
+
+struct dpaa_bp_info rte_dpaa_bpid_info[DPAA_MAX_BPOOLS];
+
+static int
+dpaa_mbuf_create_pool(struct rte_mempool *mp)
+{
+	struct bman_pool *bp;
+	struct bm_buffer bufs[8];
+	struct dpaa_bp_info *bp_info;
+	uint8_t bpid;
+	int num_bufs = 0, ret = 0;
+	struct bman_pool_params params = {
+		.flags = BMAN_POOL_FLAG_DYNAMIC_BPID
+	};
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	bp = bman_new_pool(&params);
+	if (!bp) {
+		DPAA_MEMPOOL_ERR("bman_new_pool() failed");
+		return -ENODEV;
+	}
+	bpid = bman_get_params(bp)->bpid;
+
+	/* Drain the pool of anything already in it. */
+	do {
+		/* Acquire is all-or-nothing, so we drain in 8s,
+		 * then in 1s for the remainder.
+		 */
+		if (ret != 1)
+			ret = bman_acquire(bp, bufs, 8, 0);
+		if (ret < 8)
+			ret = bman_acquire(bp, bufs, 1, 0);
+		if (ret > 0)
+			num_bufs += ret;
+	} while (ret > 0);
+	if (num_bufs)
+		DPAA_MEMPOOL_WARN("drained %u bufs from BPID %d",
+				  num_bufs, bpid);
+
+	rte_dpaa_bpid_info[bpid].mp = mp;
+	rte_dpaa_bpid_info[bpid].bpid = bpid;
+	rte_dpaa_bpid_info[bpid].size = mp->elt_size;
+	rte_dpaa_bpid_info[bpid].bp = bp;
+	rte_dpaa_bpid_info[bpid].meta_data_size =
+		sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp);
+	rte_dpaa_bpid_info[bpid].dpaa_ops_index = mp->ops_index;
+
+	bp_info = rte_malloc(NULL,
+			     sizeof(struct dpaa_bp_info),
+			     RTE_CACHE_LINE_SIZE);
+	if (!bp_info) {
+		DPAA_MEMPOOL_WARN("Memory allocation failed for bp_info");
+		bman_free_pool(bp);
+		return -ENOMEM;
+	}
+
+	rte_memcpy(bp_info, (void *)&rte_dpaa_bpid_info[bpid],
+		   sizeof(struct dpaa_bp_info));
+	mp->pool_data = (void *)bp_info;
+
+	DPAA_MEMPOOL_INFO("BMAN pool created for bpid = %d", bpid);
+	return 0;
+}
+
+static void
+dpaa_mbuf_free_pool(struct rte_mempool *mp)
+{
+	struct dpaa_bp_info *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	if (bp_info) {
+		bman_free_pool(bp_info->bp);
+		DPAA_MEMPOOL_INFO("BMAN pool freed for bpid = %d",
+				  bp_info->bpid);
+		rte_free(mp->pool_data);
+		mp->pool_data = NULL;
+	}
+}
+
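+/* Release one buffer (physical address) back to its BMan pool, spinning
+ * with a short backoff while the release command ring is busy.
+ */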
+static void
+dpaa_buf_free(struct dpaa_bp_info *bp_info, uint64_t addr)
+{
+	struct bm_buffer buf;
+	int ret;
+
+	DPAA_MEMPOOL_DEBUG("Free 0x%lx to bpid: %d", addr, bp_info->bpid);
+
+	bm_buffer_set64(&buf, addr);
+retry:
+	ret = bman_release(bp_info->bp, &buf, 1, 0);
+	if (ret) {
+		DPAA_MEMPOOL_DEBUG("BMAN busy. Retrying...");
+		cpu_spin(CPU_SPIN_BACKOFF_CYCLES);
+		goto retry;
+	}
+}
+
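+/* Mempool 'enqueue' op: hand buffers back to the hardware pool. The
+ * address given to BMan is the element's physical address advanced past
+ * the mbuf metadata (struct rte_mbuf plus private area).
+ */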
+static int
+dpaa_mbuf_free_bulk(struct rte_mempool *pool,
+		    void *const *obj_table,
+		    unsigned int n)
+{
+	struct dpaa_bp_info *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+	int ret;
+	unsigned int i = 0;
+
+	DPAA_MEMPOOL_DPDEBUG("Request to free %d buffers in bpid = %d",
+			     n, bp_info->bpid);
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
+				 ret);
+		return 0;
+	}
+
+	while (i < n) {
+		dpaa_buf_free(bp_info,
+			      (uint64_t)rte_mempool_virt2phy(pool,
+			      obj_table[i]) + bp_info->meta_data_size);
+		i = i + 1;
+	}
+
+	DPAA_MEMPOOL_DPDEBUG("Freed %d buffers in bpid = %d",
+			     n, bp_info->bpid);
+
+	return 0;
+}
+
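+/* Mempool 'dequeue' op: acquire buffers from the hardware pool in bursts
+ * of up to DPAA_MBUF_MAX_ACQ_REL and step each address back over the
+ * metadata area to recover the owning mbuf.
+ */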
+static int
+dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
+		     void **obj_table,
+		     unsigned int count)
+{
+	struct rte_mbuf **m = (struct rte_mbuf **)obj_table;
+	struct bm_buffer bufs[DPAA_MBUF_MAX_ACQ_REL];
+	struct dpaa_bp_info *bp_info;
+	void *bufaddr;
+	int i, ret;
+	unsigned int n = 0;
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+
+	DPAA_MEMPOOL_DPDEBUG("Request to alloc %d buffers in bpid = %d",
+			     count, bp_info->bpid);
+
+	if (unlikely(count >= (RTE_MEMPOOL_CACHE_MAX_SIZE * 2))) {
+		DPAA_MEMPOOL_ERR("Unable to allocate requested (%u) buffers",
+				 count);
+		return -1;
+	}
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
+				 ret);
+		return -1;
+	}
+
+	while (n < count) {
+		/* Acquire is all-or-nothing, so we acquire in bursts of
+		 * DPAA_MBUF_MAX_ACQ_REL, then the remainder.
+		 */
+		if ((count - n) > DPAA_MBUF_MAX_ACQ_REL) {
+			ret = bman_acquire(bp_info->bp, bufs,
+					   DPAA_MBUF_MAX_ACQ_REL, 0);
+		} else {
+			ret = bman_acquire(bp_info->bp, bufs, count - n, 0);
+		}
+		/* bman_acquire() is all-or-nothing: it returns 0 when fewer
+		 * buffers than requested are available in the pool.
+		 */
+		if (ret <= 0) {
+			DPAA_MEMPOOL_DPDEBUG("Buffer acquire failed (%d)",
+					     ret);
+			/* The API expects the exact number of requested
+			 * buffers; release everything acquired so far.
+			 */
+			dpaa_mbuf_free_bulk(pool, obj_table, n);
+			return -ENOBUFS;
+		}
+		/* assigning mbuf from the acquired objects */
+		for (i = 0; (i < ret) && bufs[i].addr; i++) {
+			/* TODO-errata - observed that bufs[i].addr may be
+			 * NULL, i.e. the first buffer is valid while the
+			 * remaining ones may be NULL.
+			 */
+			bufaddr = (void *)rte_dpaa_mem_ptov(bufs[i].addr);
+			m[n] = (struct rte_mbuf *)((char *)bufaddr
+						- bp_info->meta_data_size);
+			DPAA_MEMPOOL_DPDEBUG("Vaddr (%p), mbuf (%p) from BMAN",
+					     (void *)bufaddr, (void *)m[n]);
+			n++;
+		}
+	}
+
+	DPAA_MEMPOOL_DPDEBUG("Allocated %d buffers from bpid=%d",
+			     n, bp_info->bpid);
+	return 0;
+}
+
+static unsigned int
+dpaa_mbuf_get_count(const struct rte_mempool *mp)
+{
+	struct dpaa_bp_info *bp_info;
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	if (!mp || !mp->pool_data) {
+		DPAA_MEMPOOL_ERR("Invalid mempool provided");
+		return 0;
+	}
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	return bman_query_free_buffers(bp_info->bp);
+}
+
+struct rte_mempool_ops dpaa_mpool_ops = {
+	.name = "dpaa",
+	.alloc = dpaa_mbuf_create_pool,
+	.free = dpaa_mbuf_free_pool,
+	.enqueue = dpaa_mbuf_free_bulk,
+	.dequeue = dpaa_mbuf_alloc_bulk,
+	.get_count = dpaa_mbuf_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
diff --git a/drivers/mempool/dpaa/dpaa_mempool.h b/drivers/mempool/dpaa/dpaa_mempool.h
new file mode 100644
index 0000000..de33c0c
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.h
@@ -0,0 +1,77 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_MEMPOOL_H__
+#define __DPAA_MEMPOOL_H__
+
+/* System headers */
+#include <stdio.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <unistd.h>
+
+#include <rte_mempool.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <fsl_usd.h>
+#include <fsl_bman.h>
+
+#define CPU_SPIN_BACKOFF_CYCLES               512
+
+/* total number of bpools on SoC */
+#define DPAA_MAX_BPOOLS	256
+
+/* Maximum release/acquire from BMAN */
+#define DPAA_MBUF_MAX_ACQ_REL  8
+
+struct dpaa_bp_info {
+	struct rte_mempool *mp;
+	struct bman_pool *bp;
+	uint32_t bpid;
+	uint32_t size;
+	uint32_t meta_data_size;
+	int32_t dpaa_ops_index;
+};
+
+#define DPAA_MEMPOOL_TO_POOL_INFO(__mp) \
+	((struct dpaa_bp_info *)(__mp)->pool_data)
+
+#define DPAA_MEMPOOL_TO_BPID(__mp) \
+	(((struct dpaa_bp_info *)(__mp)->pool_data)->bpid)
+
+extern struct dpaa_bp_info rte_dpaa_bpid_info[DPAA_MAX_BPOOLS];
+
+#define DPAA_BPID_TO_POOL_INFO(__bpid) (&rte_dpaa_bpid_info[__bpid])
+
+#endif
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
new file mode 100644
index 0000000..cc635c7
--- /dev/null
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -0,0 +1,8 @@
+DPDK_17.11 {
+	global:
+
+	rte_dpaa_bpid_info;
+	rte_dpaa_pool_table;
+
+	local: *;
+};
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 20/40] config: enable compilation of DPAA Mempool driver
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (18 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 19/40] mempool/dpaa: support NXP DPAA Mempool Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 21/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
                           ` (20 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Add the configuration necessary for compiling the DPAA Mempool driver
to the DPAA-specific config file.
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=dpaa is also set so that
applications use the DPAA mempool by default.
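
With this default, an application that creates its packet pool through
the generic mbuf helper is transparently backed by BMan. A minimal
sketch using the standard API (pool name and sizes are illustrative):

    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    struct rte_mempool *mp;

    /* picks up the "dpaa" ops via CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS */
    mp = rte_pktmbuf_pool_create("pktmbuf_pool", 8192, 256, 0,
                                 RTE_MBUF_DEFAULT_BUF_SIZE,
                                 rte_socket_id());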

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       | 1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/config/common_base b/config/common_base
index fc1cdca..fe287b0 100644
--- a/config/common_base
+++ b/config/common_base
@@ -303,6 +303,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 4d6b046..3e11718 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -50,3 +50,7 @@ CONFIG_RTE_PKTMBUF_HEADROOM=128
 CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA_HWDEBUG=n
+
+# NXP DPAA Mempool
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 21/40] bus/dpaa: add DPAA PMD logging macros
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (19 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 20/40] config: enable compilation of DPAA Mempool driver Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 22/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
                           ` (19 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

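Add a separate log type and wrapper macros for the DPAA PMD, mirroring
the existing bus and mempool log macros. For illustration, a
hypothetical call site such as:

    DPAA_PMD_INFO("found %d DPAA interfaces", count);

expands to rte_log() on the "pmd.dpaa" log type, with the calling
function's name prefixed and a newline appended.
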
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c               |  5 +++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  1 +
 drivers/bus/dpaa/rte_dpaa_logs.h          | 22 ++++++++++++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index dc2b3ad..7ae5bfa 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -71,6 +71,7 @@
 
 int dpaa_logtype_bus;
 int dpaa_logtype_mempool;
+int dpaa_logtype_pmd;
 
 struct rte_dpaa_bus rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
@@ -457,4 +458,8 @@ dpaa_init_log(void)
 	dpaa_logtype_mempool = rte_log_register("mempool.dpaa");
 	if (dpaa_logtype_mempool >= 0)
 		rte_log_set_level(dpaa_logtype_mempool, RTE_LOG_NOTICE);
+
+	dpaa_logtype_pmd = rte_log_register("pmd.dpaa");
+	if (dpaa_logtype_pmd >= 0)
+		rte_log_set_level(dpaa_logtype_pmd, RTE_LOG_NOTICE);
 }
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index a2394b8..64a05a9 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -9,6 +9,7 @@ DPDK_17.11 {
 	bman_query_free_buffers;
 	bman_release;
 	dpaa_logtype_mempool;
+	dpaa_logtype_pmd;
 	dpaa_netcfg;
 	fman_ccsr_map_fd;
 	fman_dealloc_bufs_mask_hi;
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index 5335fd8..037c96b 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -37,6 +37,7 @@
 
 extern int dpaa_logtype_bus;
 extern int dpaa_logtype_mempool;
+extern int dpaa_logtype_pmd;
 
 #define DPAA_BUS_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -82,4 +83,25 @@ extern int dpaa_logtype_mempool;
 #define DPAA_MEMPOOL_WARN(fmt, args...) \
 	DPAA_MEMPOOL_LOG(WARNING, fmt, ## args)
 
+/* PMD related logs */
+
+#define DPAA_PMD_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_pmd, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() DPAA_PMD_LOG(DEBUG, " >>")
+
+#define DPAA_PMD_DEBUG(fmt, args...) \
+	DPAA_PMD_LOG(DEBUG, fmt, ## args)
+#define DPAA_PMD_ERR(fmt, args...) \
+	DPAA_PMD_LOG(ERR, fmt, ## args)
+#define DPAA_PMD_INFO(fmt, args...) \
+	DPAA_PMD_LOG(INFO, fmt, ## args)
+#define DPAA_PMD_WARN(fmt, args...) \
+	DPAA_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define DPAA_DP_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+
 #endif /* _DPAA_LOGS_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 22/40] net/dpaa: add NXP DPAA PMD driver skeleton
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (20 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 21/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 23/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
                           ` (18 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

A skeleton driver which is probed after the bus device scan. It does
not yet identify the device; that support is added incrementally in
subsequent patches.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                               |   1 +
 drivers/net/dpaa/Makefile                 |  57 +++++++
 drivers/net/dpaa/dpaa_ethdev.c            | 256 ++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h            | 127 +++++++++++++++
 drivers/net/dpaa/rte_pmd_dpaa_version.map |   4 +
 5 files changed, 445 insertions(+)
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 022715f..9eec984 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -413,6 +413,7 @@ M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
 F: drivers/mempool/dpaa/
+F: drivers/net/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
new file mode 100644
index 0000000..bb305ca
--- /dev/null
+++ b/drivers/net/dpaa/Makefile
@@ -0,0 +1,57 @@
+#   BSD LICENSE
+#
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_SDK_DPAA=$(RTE_SDK)/drivers/net/dpaa
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa.a
+
+CFLAGS := -I$(SRCDIR) $(CFLAGS)
+CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK_DPAA)/
+CFLAGS += -I$(RTE_SDK_DPAA)/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
+
+EXPORT_MAP := rte_pmd_dpaa_version.map
+
+LIBABIVER := 1
+
+# Interfaces with DPDK
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
new file mode 100644
index 0000000..4543dfc
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -0,0 +1,256 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <dpaa_ethdev.h>
+
+/* Keep track of whether QMAN and BMAN have been globally initialized */
+static int is_global_init;
+
+static int
+dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	/* Change tx callback to the real one */
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = NULL;
+}
+
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static struct eth_dev_ops dpaa_devops = {
+	.dev_configure		  = dpaa_eth_dev_configure,
+	.dev_start		  = dpaa_eth_dev_start,
+	.dev_stop		  = dpaa_eth_dev_stop,
+	.dev_close		  = dpaa_eth_dev_close,
+};
+
+/* Initialise a network interface */
+static int
+dpaa_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int dev_id;
+	struct rte_dpaa_device *dpaa_device;
+	struct dpaa_if *dpaa_intf;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
+	dev_id = dpaa_device->id.dev_id;
+	dpaa_intf = eth_dev->data->dev_private;
+
+	dpaa_intf->name = dpaa_device->name;
+
+	dpaa_intf->ifid = dev_id;
+
+	eth_dev->dev_ops = &dpaa_devops;
+
+	return 0;
+}
+
+static int
+dpaa_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (!dpaa_intf) {
+		DPAA_PMD_WARN("Already closed or not started");
+		return -1;
+	}
+
+	dpaa_eth_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static int
+rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv,
+	       struct rte_dpaa_device *dpaa_dev)
+{
+	int diag;
+	int ret;
+	struct rte_eth_dev *eth_dev;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* In case of secondary process, the device is already configured
+	 * and no further action is required, except portal initialization
+	 * and verifying secondary attachment to port name.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		eth_dev = rte_eth_dev_attach_secondary(dpaa_dev->name);
+		if (!eth_dev)
+			return -ENOMEM;
+		return 0;
+	}
+
+	if (!is_global_init) {
+		/* One time load of Qman/Bman drivers */
+		ret = qman_global_init();
+		if (ret) {
+			DPAA_PMD_ERR("QMAN initialization failed: %d",
+				     ret);
+			return ret;
+		}
+		ret = bman_global_init();
+		if (ret) {
+			DPAA_PMD_ERR("BMAN initialization failed: %d",
+				     ret);
+			return ret;
+		}
+
+		is_global_init = 1;
+	}
+
+	ret = rte_dpaa_portal_init((void *)1);
+	if (ret) {
+		DPAA_PMD_ERR("Unable to initialize portal");
+		return ret;
+	}
+
+	eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
+	if (eth_dev == NULL)
+		return -ENOMEM;
+
+	eth_dev->data->dev_private = rte_zmalloc(
+					"ethdev private structure",
+					sizeof(struct dpaa_if),
+					RTE_CACHE_LINE_SIZE);
+	if (!eth_dev->data->dev_private) {
+		DPAA_PMD_ERR("Cannot allocate memory for port data");
+		rte_eth_dev_release_port(eth_dev);
+		return -ENOMEM;
+	}
+
+	eth_dev->device = &dpaa_dev->device;
+	eth_dev->device->driver = &dpaa_drv->driver;
+	dpaa_dev->eth_dev = eth_dev;
+
+	/* Invoke PMD device initialization function */
+	diag = dpaa_dev_init(eth_dev);
+	if (diag == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+	return diag;
+}
+
+static int
+rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
+{
+	struct rte_eth_dev *eth_dev;
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev = dpaa_dev->eth_dev;
+	dpaa_dev_uninit(eth_dev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+
+	return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_pmd = {
+	.drv_type = FSL_DPAA_ETH,
+	.probe = rte_dpaa_probe,
+	.remove = rte_dpaa_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
new file mode 100644
index 0000000..2f25acb
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -0,0 +1,127 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2014-2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_ETHDEV_H__
+#define __DPAA_ETHDEV_H__
+
+/* System headers */
+#include <stdbool.h>
+#include <rte_ethdev.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
+#define DPAA_MBUF_HW_ANNOTATION		64
+#define DPAA_FD_PTA_SIZE		64
+
+#if (DPAA_MBUF_HW_ANNOTATION + DPAA_FD_PTA_SIZE) > RTE_PKTMBUF_HEADROOM
+#error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM"
+#endif
+
+/* we will re-use the HEADROOM for annotation in RX */
+#define DPAA_HW_BUF_RESERVE	0
+#define DPAA_PACKET_LAYOUT_ALIGN	64
+
+/* Alignment to use for cpu-local structs to avoid coherency problems. */
+#define MAX_CACHELINE			64
+
+#define DPAA_MIN_RX_BUF_SIZE 512
+#define DPAA_MAX_RX_PKT_LEN  10240
+
+/* RX queue tail drop threshold
+ * currently considering 32 KB packets.
+ */
+#define CONG_THRESHOLD_RX_Q  (32 * 1024)
+
+/* max mac filters for memac (8), including the primary mac addr */
+#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
+
+/* Maximum number of slots available in the Tx ring */
+#define MAX_TX_RING_SLOTS	8
+
+/* PCD frame queues */
+#define DPAA_PCD_FQID_START		0x400
+#define DPAA_PCD_FQID_MULTIPLIER	0x100
+#define DPAA_DEFAULT_NUM_PCD_QUEUES	1
+
+#define DPAA_IF_TX_PRIORITY		3
+#define DPAA_IF_RX_PRIORITY		4
+#define DPAA_IF_DEBUG_PRIORITY		7
+
+#define DPAA_IF_RX_ANNOTATION_STASH	1
+#define DPAA_IF_RX_DATA_STASH		1
+#define DPAA_IF_RX_CONTEXT_STASH		0
+
+/* Each "debug" FQ is represented by one of these */
+#define DPAA_DEBUG_FQ_RX_ERROR   0
+#define DPAA_DEBUG_FQ_TX_ERROR   1
+
+#define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
+		PKT_TX_IP_CKSUM |                \
+		PKT_TX_TCP_CKSUM |               \
+		PKT_TX_UDP_CKSUM)
+
+/* DPAA Frame descriptor macros */
+
+#define DPAA_FD_CMD_FCO			0x80000000
+/**< Frame queue Context Override */
+#define DPAA_FD_CMD_RPD			0x40000000
+/**< Read Prepended Data */
+#define DPAA_FD_CMD_UPD			0x20000000
+/**< Update Prepended Data */
+#define DPAA_FD_CMD_DTC			0x10000000
+/**< Do IP/TCP/UDP Checksum */
+#define DPAA_FD_CMD_DCL4C		0x10000000
+/**< Didn't calculate L4 Checksum */
+#define DPAA_FD_CMD_CFQ			0x00ffffff
+/**< Confirmation Frame Queue */
+
+/* Each network interface is represented by one of these */
+struct dpaa_if {
+	int valid;
+	char *name;
+	const struct fm_eth_port_cfg *cfg;
+	struct qman_fq *rx_queues;
+	struct qman_fq *tx_queues;
+	struct qman_fq debug_queues[2];
+	uint16_t nb_rx_queues;
+	uint16_t nb_tx_queues;
+	uint32_t ifid;
+	struct fman_if *fif;
+	struct dpaa_bp_info *bp_info;
+	struct rte_eth_fc_conf *fc_conf;
+};
+
+#endif
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
new file mode 100644
index 0000000..a70bd19
--- /dev/null
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -0,0 +1,4 @@
+DPDK_17.11 {
+
+	local: *;
+};
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 23/40] config: enable NXP DPAA PMD compilation
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (21 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 22/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 24/40] net/dpaa: support Tx and Rx queue setup Shreyansh Jain
                           ` (17 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

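With the bus and mempool drivers already enabled, this turns on the
PMD for the DPAA target. An illustrative build invocation using the
standard DPDK make flow of this release:

    make config T=arm64-dpaa-linuxapp-gcc
    make
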
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       | 1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 3 +++
 drivers/net/Makefile                     | 2 ++
 mk/rte.app.mk                            | 5 +++++
 4 files changed, 11 insertions(+)

diff --git a/config/common_base b/config/common_base
index fe287b0..ca47615 100644
--- a/config/common_base
+++ b/config/common_base
@@ -304,6 +304,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
+CONFIG_RTE_LIBRTE_DPAA_PMD=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 3e11718..f59834c 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -54,3 +54,6 @@ CONFIG_RTE_LIBRTE_DPAA_HWDEBUG=n
 # NXP DPAA Mempool
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
 CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
+
+# Compile software NXP DPAA PMD
+CONFIG_RTE_LIBRTE_DPAA_PMD=y
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index d33c959..2bd42f8 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -51,6 +51,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DEPDIRS-bonding = $(core-libs) librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
 DEPDIRS-cxgbe = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += e1000
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index c25fdd9..9c5a171 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -116,6 +116,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD)      += -lrte_pmd_cxgbe
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_pmd_dpaa
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_pmd_dpaa2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_E1000_PMD)      += -lrte_pmd_e1000
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ENA_PMD)        += -lrte_pmd_ena
@@ -182,6 +183,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_bus_fslmc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_mempool_dpaa2
 endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+endif
+
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
 
 _LDLIBS-y += --no-whole-archive
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 24/40] net/dpaa: support Tx and Rx queue setup
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (22 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 23/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 25/40] net/dpaa: support MTU update Shreyansh Jain
                           ` (16 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

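Rx frame queues are carved out of a fixed per-interface PCD range (see
dpaa_dev_init below): fqid = DPAA_PCD_FQID_START + ifid *
DPAA_PCD_FQID_MULTIPLIER + queue. As a worked example, queue 3 of
interface 2 maps to FQID 0x400 + 2 * 0x100 + 3 = 0x603. Tx frame
queues use dynamically allocated FQIDs, one queue per lcore.
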
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/Makefile      |   4 +
 drivers/net/dpaa/dpaa_ethdev.c | 296 ++++++++++++++++++++++++++++++++-
 drivers/net/dpaa/dpaa_rxtx.c   | 370 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h   |  61 +++++++
 mk/rte.app.mk                  |   1 +
 5 files changed, 729 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h

diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index bb305ca..c77384c 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -38,10 +38,12 @@ LIB = librte_pmd_dpaa.a
 
 CFLAGS := -I$(SRCDIR) $(CFLAGS)
 CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -Wno-pointer-arith
 CFLAGS += -I$(RTE_SDK_DPAA)/
 CFLAGS += -I$(RTE_SDK_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
 
@@ -51,7 +53,9 @@ LIBABIVER := 1
 
 # Interfaces with DPDK
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
 
 LDLIBS += -lrte_bus_dpaa
+LDLIBS += -lrte_mempool_dpaa
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4543dfc..2db7d99 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -62,8 +62,15 @@
 
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
+#include <dpaa_mempool.h>
 
 #include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <fsl_fman.h>
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -78,20 +85,104 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
 	PMD_INIT_FUNC_TRACE();
 
 	/* Change tx callback to the real one */
-	dev->tx_pkt_burst = NULL;
+	dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	fman_if_enable_rx(dpaa_intf->fif);
 
 	return 0;
 }
 
 static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = NULL;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_disable_rx(dpaa_intf->fif);
+	dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
 }
 
-static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+}
+
+static
+int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+			    unsigned int socket_id __rte_unused,
+			    const struct rte_eth_rxconf *rx_conf __rte_unused,
+			    struct rte_mempool *mp)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	DPAA_PMD_INFO("Rx queue setup for queue index: %d", queue_idx);
+
+	if (!dpaa_intf->bp_info || dpaa_intf->bp_info->mp != mp) {
+		struct fman_if_ic_params icp;
+		uint32_t fd_offset;
+		uint32_t bp_size;
+
+		if (!mp->pool_data) {
+			DPAA_PMD_ERR("Not an offloaded buffer pool!");
+			return -1;
+		}
+		dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+		memset(&icp, 0, sizeof(icp));
+		/* set the IC params (ICIOF, ICEOF, ICSZ) to their defaults */
+		icp.iciof = DEFAULT_ICIOF;
+		icp.iceof = DEFAULT_RX_ICEOF;
+		icp.icsz = DEFAULT_ICSZ;
+		fman_if_set_ic_params(dpaa_intf->fif, &icp);
+
+		fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
+		fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+
+		/* Buffer pool size should be equal to the dataroom size */
+		bp_size = rte_pktmbuf_data_room_size(mp);
+		fman_if_set_bp(dpaa_intf->fif, mp->size,
+			       dpaa_intf->bp_info->bpid, bp_size);
+		dpaa_intf->valid = 1;
+		DPAA_PMD_INFO("if:%s fd_offset = %d offset = %d",
+			      dpaa_intf->name, fd_offset,
+			      fman_if_get_fdoff(dpaa_intf->fif));
+	}
+	dev->data->rx_queues[queue_idx] = &dpaa_intf->rx_queues[queue_idx];
+
+	return 0;
+}
+
+static
+void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static
+int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+		unsigned int socket_id __rte_unused,
+		const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	DPAA_PMD_INFO("Tx queue setup for queue index: %d", queue_idx);
+	dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+	return 0;
+}
+
+static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 }
@@ -101,15 +192,102 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+
+	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
+	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
+	.rx_queue_release	  = dpaa_eth_rx_queue_release,
+	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 };
 
+/* Initialise an Rx FQ */
+static int dpaa_rx_queue_init(struct qman_fq *fq,
+			      uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		DPAA_PMD_ERR("reserve rx fqid %d failed with ret: %d",
+			     fqid, ret);
+		return -EINVAL;
+	}
+
+	DPAA_PMD_DEBUG("creating rx fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create rx fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTA;
+
+	opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
+			   QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_a.stashing.exclusive = 0;
+	opts.fqd.context_a.stashing.annotation_cl = DPAA_IF_RX_ANNOTATION_STASH;
+	opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+	opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+
+	/*Enable tail drop */
+	opts.we_mask = opts.we_mask | QM_INITFQ_WE_TDTHRESH;
+	opts.fqd.fq_ctrl = opts.fqd.fq_ctrl | QM_FQCTRL_TDE;
+	qm_fqd_taildrop_set(&opts.fqd.td, CONG_THRESHOLD_RX_Q, 1);
+
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init rx fqid %d failed with ret: %d", fqid, ret);
+	return ret;
+}
+
+/* Initialise a Tx FQ */
+static int dpaa_tx_queue_init(struct qman_fq *fq,
+			      struct fman_if *fman_intf)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
+			     QMAN_FQ_FLAG_TO_DCPORTAL, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create tx fq failed with ret: %d", ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTB | QM_INITFQ_WE_CONTEXTA;
+	opts.fqd.dest.channel = fman_intf->tx_channel_id;
+	opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_b = 0;
+	/* no tx-confirmation */
+	opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+	opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+	DPAA_PMD_DEBUG("init tx fq %p, fqid %d", fq, fq->fqid);
+	ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init tx fqid %d failed %d", fq->fqid, ret);
+	return ret;
+}
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
 {
+	int num_cores, num_rx_fqs, fqid;
+	int loop, ret = 0;
 	int dev_id;
 	struct rte_dpaa_device *dpaa_device;
 	struct dpaa_if *dpaa_intf;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+	struct fman_if_bpool *bp, *tmp_bp;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -120,12 +298,110 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
 	dev_id = dpaa_device->id.dev_id;
 	dpaa_intf = eth_dev->data->dev_private;
+	cfg = &dpaa_netcfg->port_cfg[dev_id];
+	fman_intf = cfg->fman_if;
 
 	dpaa_intf->name = dpaa_device->name;
 
+	/* save fman_if & cfg in the interface structure */
+	dpaa_intf->fif = fman_intf;
 	dpaa_intf->ifid = dev_id;
+	dpaa_intf->cfg = cfg;
+
+	/* Initialize Rx FQ's */
+	if (getenv("DPAA_NUM_RX_QUEUES"))
+		num_rx_fqs = atoi(getenv("DPAA_NUM_RX_QUEUES"));
+	else
+		num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
 
+	/* Each device cannot have more than DPAA_PCD_FQID_MULTIPLIER RX
+	 * queues.
+	 */
+	if (num_rx_fqs <= 0 || num_rx_fqs > DPAA_PCD_FQID_MULTIPLIER) {
+		DPAA_PMD_ERR("Invalid number of RX queues");
+		return -EINVAL;
+	}
+
+	dpaa_intf->rx_queues = rte_zmalloc(NULL,
+		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+	if (!dpaa_intf->rx_queues)
+		return -ENOMEM;
+	for (loop = 0; loop < num_rx_fqs; loop++) {
+		fqid = DPAA_PCD_FQID_START + dpaa_intf->ifid *
+			DPAA_PCD_FQID_MULTIPLIER + loop;
+		ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop], fqid);
+		if (ret)
+			return ret;
+		dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_rx_queues = num_rx_fqs;
+
+	/* Initialise Tx FQs. Have as many Tx FQs as there are cores */
+	num_cores = rte_lcore_count();
+	dpaa_intf->tx_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+		num_cores, MAX_CACHELINE);
+	if (!dpaa_intf->tx_queues)
+		return -ENOMEM;
+
+	for (loop = 0; loop < num_cores; loop++) {
+		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
+					 fman_intf);
+		if (ret)
+			return ret;
+		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_tx_queues = num_cores;
+
+	DPAA_PMD_DEBUG("All frame queues created");
+
+	/* reset bpool list, initialize bpool dynamically */
+	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+
+	/* Populate ethdev structure */
 	eth_dev->dev_ops = &dpaa_devops;
+	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+	eth_dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
+		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
+						"store MAC addresses",
+				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
+		rte_free(dpaa_intf->rx_queues);
+		rte_free(dpaa_intf->tx_queues);
+		dpaa_intf->rx_queues = NULL;
+		dpaa_intf->tx_queues = NULL;
+		dpaa_intf->nb_rx_queues = 0;
+		dpaa_intf->nb_tx_queues = 0;
+		return -ENOMEM;
+	}
+
+	/* copy the primary mac address */
+	memcpy(eth_dev->data->mac_addrs[0].addr_bytes,
+		fman_intf->mac_addr.addr_bytes,
+		ETHER_ADDR_LEN);
+
+	RTE_LOG(INFO, PMD, "net: dpaa: %s: %02x:%02x:%02x:%02x:%02x:%02x\n",
+		dpaa_device->name,
+		fman_intf->mac_addr.addr_bytes[0],
+		fman_intf->mac_addr.addr_bytes[1],
+		fman_intf->mac_addr.addr_bytes[2],
+		fman_intf->mac_addr.addr_bytes[3],
+		fman_intf->mac_addr.addr_bytes[4],
+		fman_intf->mac_addr.addr_bytes[5]);
+
+	/* Disable RX mode */
+	fman_if_discard_rx_errors(fman_intf);
+	fman_if_disable_rx(fman_intf);
+	/* Disable promiscuous mode */
+	fman_if_promiscuous_disable(fman_intf);
+	/* Disable multicast */
+	fman_if_reset_mcast_filter_table(fman_intf);
+	/* Reset interface statistics */
+	fman_if_stats_reset(fman_intf);
 
 	return 0;
 }
@@ -147,6 +423,20 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
 
 	dpaa_eth_dev_close(dev);
 
+	/* release configuration memory */
+	if (dpaa_intf->fc_conf)
+		rte_free(dpaa_intf->fc_conf);
+
+	rte_free(dpaa_intf->rx_queues);
+	dpaa_intf->rx_queues = NULL;
+
+	rte_free(dpaa_intf->tx_queues);
+	dpaa_intf->tx_queues = NULL;
+
+	/* free memory for storing MAC addresses */
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
 	dev->dev_ops = NULL;
 	dev->rx_pkt_burst = NULL;
 	dev->tx_pkt_burst = NULL;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
new file mode 100644
index 0000000..c4e67f5
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -0,0 +1,370 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <limits.h>
+#include <sched.h>
+#include <pthread.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+
+#include "dpaa_ethdev.h"
+#include "dpaa_rxtx.h"
+#include <rte_dpaa_bus.h>
+#include <dpaa_mempool.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
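+/* Build a contiguous-frame FD from an mbuf: format, data offset and
+ * length are packed into the 'opaque' word, 'addr' gets the buffer's
+ * physical address, and 'bpid' tells hardware which pool owns the
+ * buffer (callers pass 0xff when BMan must not release it).
+ */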
+#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
+	do { \
+		(_fd)->cmd = 0; \
+		(_fd)->opaque_addr = 0; \
+		(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
+		(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
+		(_fd)->opaque |= (_mbuf)->pkt_len; \
+		(_fd)->addr = (_mbuf)->buf_physaddr; \
+		(_fd)->bpid = _bpid; \
+	} while (0)
+
+static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
+							uint32_t ifid)
+{
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *mbuf;
+	void *ptr;
+	uint16_t offset =
+		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
+	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
+
+	DPAA_DP_LOG(DEBUG, " FD--->MBUF");
+
+	/* Ignoring case when format != qm_fd_contig */
+	ptr = rte_dpaa_mem_ptov(fd->addr);
+	/* Ignoring case when ptr would be NULL. That is only possible in case
+	 * of a corrupted packet.
+	 */
+
+	mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+	/* Prefetch the Parse results and packet data to L1 */
+	rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
+	rte_prefetch0((void *)((uint8_t *)ptr + offset));
+
+	mbuf->data_off = offset;
+	mbuf->data_len = length;
+	mbuf->pkt_len = length;
+
+	mbuf->port = ifid;
+	mbuf->nb_segs = 1;
+	mbuf->ol_flags = 0;
+	mbuf->next = NULL;
+	rte_mbuf_refcnt_set(mbuf, 1);
+
+	return mbuf;
+}
+
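+/* Rx burst: issue a volatile dequeue command sized to the request
+ * (capped at DPAA_MAX_DEQUEUE_NUM_FRAMES) and convert each DQRR entry
+ * into an mbuf until the VDQCR completes.
+ */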
+uint16_t dpaa_eth_queue_rx(void *q,
+			   struct rte_mbuf **bufs,
+			   uint16_t nb_bufs)
+{
+	struct qman_fq *fq = q;
+	struct qm_dqrr_entry *dq;
+	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_PMD_ERR("Failure in affining portal");
+		return 0;
+	}
+
+	ret = qman_set_vdq(fq, (nb_bufs > DPAA_MAX_DEQUEUE_NUM_FRAMES) ?
+				DPAA_MAX_DEQUEUE_NUM_FRAMES : nb_bufs);
+	if (ret)
+		return 0;
+
+	do {
+		dq = qman_dequeue(fq);
+		if (!dq)
+			continue;
+		bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
+		qman_dqrr_consume(fq, dq);
+	} while (fq->flags & QMAN_FQ_STATE_VDQCR);
+
+	return num_rx;
+}
+
+static void *dpaa_get_pktbuf(struct dpaa_bp_info *bp_info)
+{
+	int ret;
+	uint64_t buf = 0;
+	struct bm_buffer bufs;
+
+	ret = bman_acquire(bp_info->bp, &bufs, 1, 0);
+	if (ret <= 0) {
+		DPAA_PMD_WARN("Failed to allocate buffers %d", ret);
+		return (void *)buf;
+	}
+
+	DPAA_DP_LOG(DEBUG, "got buffer 0x%lx from pool %d",
+		    (uint64_t)bufs.addr, bufs.bpid);
+
+	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
+
+	return (void *)buf;
+}
+
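+/* Copy an mbuf from a non-BMan pool into a freshly acquired BMan buffer
+ * so that the frame can be handed to FMan and freed to the hardware pool
+ * after transmission.
+ */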
+static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
+					     struct dpaa_if *dpaa_intf)
+{
+	struct rte_mbuf *dpaa_mbuf;
+
+	/* allocate pktbuffer on bpid for dpaa port */
+	dpaa_mbuf = dpaa_get_pktbuf(dpaa_intf->bp_info);
+	if (!dpaa_mbuf)
+		return NULL;
+
+	memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + mbuf->data_off, (void *)
+		((uint8_t *)(mbuf->buf_addr) + mbuf->data_off), mbuf->pkt_len);
+
+	/* Copy only the required fields */
+	dpaa_mbuf->data_off = mbuf->data_off;
+	dpaa_mbuf->pkt_len = mbuf->pkt_len;
+	dpaa_mbuf->ol_flags = mbuf->ol_flags;
+	dpaa_mbuf->packet_type = mbuf->packet_type;
+	dpaa_mbuf->tx_offload = mbuf->tx_offload;
+	rte_pktmbuf_free(mbuf);
+	return dpaa_mbuf;
+}
+
+/* Handle mbufs which are not segmented (non SG) */
+static inline void
+tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
+			    struct dpaa_bp_info *bp_info,
+			    struct qm_fd *fd_arr)
+{
+	struct rte_mbuf *mi = NULL;
+
+	if (RTE_MBUF_DIRECT(mbuf)) {
+		if (rte_mbuf_refcnt_read(mbuf) > 1) {
+			/* In case of direct mbuf and mbuf being cloned,
+			 * BMAN should _not_ release buffer.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, 0xff);
+			/* Buffer should be released by EAL */
+			rte_mbuf_refcnt_update(mbuf, -1);
+		} else {
+			/* In case of direct mbuf and no cloning, mbuf can be
+			 * released by BMAN.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+		}
+	} else {
+		/* This is data-containing core mbuf: 'mi' */
+		mi = rte_mbuf_from_indirect(mbuf);
+		if (rte_mbuf_refcnt_read(mi) > 1) {
+			/* In case of indirect mbuf, and mbuf being cloned,
+			 * BMAN should _not_ release it and let EAL release
+			 * it through pktmbuf_free below.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, 0xff);
+		} else {
+			/* In case of indirect mbuf, and no cloning, core mbuf
+			 * should be released by BMAN.
+			 * Increase the refcnt of the core mbuf so that when
+			 * pktmbuf_free is called and mbuf is released, EAL
+			 * doesn't try to release core mbuf which would have
+			 * been released by BMAN.
+			 */
+			rte_mbuf_refcnt_update(mi, 1);
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+		}
+		rte_pktmbuf_free(mbuf);
+	}
+}
+
+/* Handle all mbufs on dpaa BMAN managed pool */
+static inline uint16_t
+tx_on_dpaa_pool(struct rte_mbuf *mbuf,
+		struct dpaa_bp_info *bp_info,
+		struct qm_fd *fd_arr)
+{
+	DPAA_DP_LOG(DEBUG, "BMAN offloaded buffer, mbuf: %p", mbuf);
+
+	if (mbuf->nb_segs == 1) {
+		/* Case for non-segmented buffers */
+		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr);
+	} else {
+		DPAA_PMD_DEBUG("Number of Segments not supported");
+		return 1;
+	}
+
+	return 0;
+}
+
+/* Handle all mbufs on an external pool (non-dpaa) */
+static inline uint16_t
+tx_on_external_pool(struct qman_fq *txq, struct rte_mbuf *mbuf,
+		    struct qm_fd *fd_arr)
+{
+	struct dpaa_if *dpaa_intf = txq->dpaa_intf;
+	struct rte_mbuf *dmable_mbuf;
+
+	DPAA_DP_LOG(DEBUG, "Non-BMAN offloaded buffer. "
+		    "Allocating an offloaded buffer");
+	dmable_mbuf = dpaa_get_dmable_mbuf(mbuf, dpaa_intf);
+	if (!dmable_mbuf) {
+		DPAA_DP_LOG(DEBUG, "no dpaa buffers.");
+		return 1;
+	}
+
+	/* Build the FD from the freshly copied buffer, not the freed mbuf */
+	DPAA_MBUF_TO_CONTIG_FD(dmable_mbuf, fd_arr, dpaa_intf->bp_info->bpid);
+
+	return 0;
+}
+
+uint16_t
+dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	struct rte_mbuf *mbuf, *mi = NULL;
+	struct rte_mempool *mp;
+	struct dpaa_bp_info *bp_info;
+	struct qm_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send, loop, i = 0;
+	uint16_t state;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_PMD_ERR("Failure in affining portal");
+		return 0;
+	}
+
+	DPAA_DP_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+
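+	/* Send in bursts of up to MAX_TX_RING_SLOTS frames; if converting a
+	 * buffer fails mid-burst, truncate the burst so that the frames
+	 * prepared so far are still enqueued.
+	 */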
+	while (nb_bufs) {
+		frames_to_send = (nb_bufs >> 3) ? MAX_TX_RING_SLOTS : nb_bufs;
+		for (loop = 0; loop < frames_to_send; loop++, i++) {
+			mbuf = bufs[i];
+			if (RTE_MBUF_DIRECT(mbuf)) {
+				mp = mbuf->pool;
+			} else {
+				mi = rte_mbuf_from_indirect(mbuf);
+				mp = mi->pool;
+			}
+
+			bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+			if (likely(mp->ops_index == bp_info->dpaa_ops_index)) {
+				state = tx_on_dpaa_pool(mbuf, bp_info,
+							&fd_arr[loop]);
+				if (unlikely(state)) {
+					/* Set frames_to_send & nb_bufs so
+					 * that packets are transmitted up to
+					 * the previous frame.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			} else {
+				state = tx_on_external_pool(q, mbuf,
+							    &fd_arr[loop]);
+				if (unlikely(state)) {
+					/* Set frames_to_send & nb_bufs so
+					 * that packets are transmitted up to
+					 * the previous frame.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			}
+		}
+
+send_pkts:
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qman_enqueue_multi(q, &fd_arr[loop],
+					frames_to_send - loop);
+		}
+		nb_bufs -= frames_to_send;
+	}
+
+	DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", i, q);
+
+	return i;
+}
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+		uint16_t nb_bufs __rte_unused)
+{
+	DPAA_DP_LOG(DEBUG, "Drop all packets");
+
+	/* Drop all incoming packets. No need to free packets here
+	 * because the rte_eth framework frees up the packets through the
+	 * tx_buffer callback when this function returns a count less than
+	 * nb_bufs.
+	 */
+	return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
new file mode 100644
index 0000000..45bfae8
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -0,0 +1,61 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPDK_RXTX_H__
+#define __DPDK_RXTX_H__
+
+/* internal offset from where IC is copied to the packet buffer */
+#define DEFAULT_ICIOF          32
+/* IC transfer size */
+#define DEFAULT_ICSZ	48
+
+/* IC offsets from buffer header address */
+#define DEFAULT_RX_ICEOF	16
+
+#define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
+	/**< Maximum number of frames to be dequeued in a single Rx call */
+/* FD structure masks and offset */
+#define DPAA_FD_FORMAT_MASK 0xE0000000
+#define DPAA_FD_OFFSET_MASK 0x1FF00000
+#define DPAA_FD_LENGTH_MASK 0xFFFFF
+#define DPAA_FD_FORMAT_SHIFT 29
+#define DPAA_FD_OFFSET_SHIFT 20
+
+uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+			      uint16_t nb_bufs __rte_unused);
+#endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9c5a171..7440848 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -185,6 +185,7 @@ endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_mempool_dpaa
 endif
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.9.3
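
A minimal standalone sketch (not part of the patch; the FD word value is
made up) showing how the DPAA_FD_* mask/shift constants added to
dpaa_rxtx.h above decode the opaque word of a frame descriptor:

#include <stdint.h>
#include <stdio.h>

#define DPAA_FD_FORMAT_MASK 0xE0000000
#define DPAA_FD_OFFSET_MASK 0x1FF00000
#define DPAA_FD_LENGTH_MASK 0xFFFFF
#define DPAA_FD_FORMAT_SHIFT 29
#define DPAA_FD_OFFSET_SHIFT 20

int main(void)
{
	uint32_t opaque = 0x04000040;	/* illustrative FD word */
	uint32_t format = (opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
	uint32_t offset = (opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
	uint32_t length = opaque & DPAA_FD_LENGTH_MASK;

	/* 0x04000040: format 0, data offset 64, length 64 */
	printf("format=%u offset=%u length=%u\n", format, offset, length);
	return 0;
}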

* [PATCH v5 25/40] net/dpaa: support MTU update
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (23 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 24/40] net/dpaa: support Tx and Rx queue setup Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 26/40] net/dpaa: support jumbo frames Shreyansh Jain
                           ` (15 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 9e8befc..59ef23d 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,5 +4,6 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 2db7d99..ad0a092 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -76,6 +76,26 @@
 static int is_global_init;
 
 static int
+dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (mtu < ETHER_MIN_MTU)
+		return -EINVAL;
+	if (mtu > ETHER_MAX_LEN)
+		return -1;
+
+	dev->data->dev_conf.rxmode.jumbo_frame = 0;
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
+
+	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
+
+	return 0;
+}
+
+static int
 dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
@@ -197,6 +217,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+	.mtu_set		  = dpaa_mtu_set,
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3
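
A minimal sketch (port_id and the MTU value are illustrative) of how an
application reaches dpaa_mtu_set() through the generic ethdev API:

#include <stdio.h>
#include <rte_ethdev.h>

static int configure_mtu(uint16_t port_id)
{
	/* Dispatches to dpaa_devops.mtu_set, i.e. dpaa_mtu_set() */
	int ret = rte_eth_dev_set_mtu(port_id, 1500);

	if (ret < 0)
		printf("MTU update on port %u failed: %d\n", port_id, ret);
	return ret;
}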

* [PATCH v5 26/40] net/dpaa: support jumbo frames
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (24 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 25/40] net/dpaa: support MTU update Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 27/40] net/dpaa: support link status update Shreyansh Jain
                           ` (14 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 13 +++++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 59ef23d..e62812c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index ad0a092..c013a84 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -85,9 +85,10 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	if (mtu < ETHER_MIN_MTU)
 		return -EINVAL;
 	if (mtu > ETHER_MAX_LEN)
-		return -1;
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
 
-	dev->data->dev_conf.rxmode.jumbo_frame = 0;
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
 
 	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
@@ -100,6 +101,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 
+	if (dev->data->dev_conf.rxmode.jumbo_frame == 1) {
+		if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
+		    DPAA_MAX_RX_PKT_LEN)
+			return dpaa_mtu_set(dev,
+				dev->data->dev_conf.rxmode.max_rx_pkt_len);
+		else
+			return -1;
+	}
 	return 0;
 }
 
-- 
2.9.3
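
A minimal sketch (the 9000-byte frame size is illustrative) of enabling
jumbo frames at configure time; dpaa_eth_dev_configure() then validates
max_rx_pkt_len against DPAA_MAX_RX_PKT_LEN and applies it via
dpaa_mtu_set():

#include <rte_ethdev.h>

static const struct rte_eth_conf jumbo_conf = {
	.rxmode = {
		.jumbo_frame = 1,	/* triggers the check above */
		.max_rx_pkt_len = 9000,
	},
};

/* later: rte_eth_dev_configure(port_id, 1, 1, &jumbo_conf); */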

* [PATCH v5 27/40] net/dpaa: support link status update
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (25 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 26/40] net/dpaa: support jumbo frames Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 28/40] net/dpaa: support device info and speed capability Shreyansh Jain
                           ` (13 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 42 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index e62812c..132f94b 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c013a84..804c89f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -142,6 +142,28 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int dpaa_eth_link_update(struct rte_eth_dev *dev,
+				int wait_to_complete __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_link *link = &dev->data->dev_link;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dpaa_intf->fif->mac_type == fman_mac_1g)
+		link->link_speed = 1000;
+	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
+		link->link_speed = 10000;
+	else
+		DPAA_PMD_ERR("invalid mac_type for %s: %d",
+			     dpaa_intf->name, dpaa_intf->fif->mac_type);
+
+	link->link_status = dpaa_intf->valid;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = ETH_LINK_AUTONEG;
+	return 0;
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -216,6 +238,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 	PMD_INIT_FUNC_TRACE();
 }
 
+static int dpaa_link_down(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+	return 0;
+}
+
+static int dpaa_link_up(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_start(dev);
+	return 0;
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -226,7 +264,11 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+
+	.link_update		  = dpaa_eth_link_update,
 	.mtu_set		  = dpaa_mtu_set,
+	.dev_set_link_down	  = dpaa_link_down,
+	.dev_set_link_up	  = dpaa_link_up,
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3
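
A minimal sketch (port_id illustrative) of querying the state filled in by
dpaa_eth_link_update():

#include <stdio.h>
#include <rte_ethdev.h>

static void show_link(uint16_t port_id)
{
	struct rte_eth_link link;

	/* wait_to_complete is unused by this PMD, so nowait suffices */
	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u: %s, %u Mbps, %s duplex\n", port_id,
	       link.link_status ? "up" : "down", link.link_speed,
	       link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full" : "half");
}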

* [PATCH v5 28/40] net/dpaa: support device info and speed capability
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (26 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 27/40] net/dpaa: support link status update Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 29/40] net/dpaa: support promiscuous toggle Shreyansh Jain
                           ` (12 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 132f94b..19beada 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 804c89f..384be8e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -142,6 +142,25 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
+			      struct rte_eth_dev_info *dev_info)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->max_rx_queues = dpaa_intf->nb_rx_queues;
+	dev_info->max_tx_queues = dpaa_intf->nb_tx_queues;
+	dev_info->min_rx_bufsize = DPAA_MIN_RX_BUF_SIZE;
+	dev_info->max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
+	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
+	dev_info->max_hash_mac_addrs = 0;
+	dev_info->max_vfs = 0;
+	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
+				ETH_LINK_SPEED_10G);
+}
+
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 				int wait_to_complete __rte_unused)
 {
@@ -259,6 +278,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+	.dev_infos_get		  = dpaa_eth_dev_info,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
-- 
2.9.3
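
A minimal sketch (port_id illustrative) of reading the limits reported by
dpaa_eth_dev_info():

#include <stdio.h>
#include <rte_ethdev.h>

static void show_dev_info(uint16_t port_id)
{
	struct rte_eth_dev_info info;

	rte_eth_dev_info_get(port_id, &info);
	printf("rxq=%u txq=%u max_pktlen=%u speed_capa=0x%x\n",
	       info.max_rx_queues, info.max_tx_queues,
	       info.max_rx_pktlen, info.speed_capa);
}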

* [PATCH v5 29/40] net/dpaa: support promiscuous toggle
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (27 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 28/40] net/dpaa: support device info and speed capability Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 30/40] net/dpaa: support multicast toggle Shreyansh Jain
                           ` (11 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 19beada..b2dfd81 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -8,5 +8,6 @@ Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
+Promiscuous mode     = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 384be8e..56a6039 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -183,6 +183,25 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+
+static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_enable(dpaa_intf->fif);
+}
+
+static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_disable(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -286,6 +305,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
+	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.9.3
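
A minimal sketch (port_id illustrative) of toggling promiscuous mode from
an application, which lands on the fman_if_promiscuous_* calls above:

#include <rte_ethdev.h>

static void toggle_promisc(uint16_t port_id, int on)
{
	if (on)
		rte_eth_promiscuous_enable(port_id);
	else
		rte_eth_promiscuous_disable(port_id);
}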

* [PATCH v5 30/40] net/dpaa: support multicast toggle
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (28 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 29/40] net/dpaa: support promiscuous toggle Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 31/40] net/dpaa: support MAC address update Shreyansh Jain
                           ` (10 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index b2dfd81..f21a85f 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -9,5 +9,6 @@ Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
+Allmulticast mode    = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 56a6039..f6a2e22 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -202,6 +202,24 @@ static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
 	fman_if_promiscuous_disable(dpaa_intf->fif);
 }
 
+static void dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_set_mcast_filter_table(dpaa_intf->fif);
+}
+
+static void dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_reset_mcast_filter_table(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -307,6 +325,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.link_update		  = dpaa_eth_link_update,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
+	.allmulticast_enable	  = dpaa_eth_multicast_enable,
+	.allmulticast_disable	  = dpaa_eth_multicast_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.9.3
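
A minimal sketch (port_id illustrative) of the corresponding application
calls, which set or reset the FMan multicast filter table as above:

#include <rte_ethdev.h>

static void toggle_allmulti(uint16_t port_id, int on)
{
	if (on)
		rte_eth_allmulticast_enable(port_id);
	else
		rte_eth_allmulticast_disable(port_id);
}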

* [PATCH v5 31/40] net/dpaa: support MAC address update
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (29 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 30/40] net/dpaa: support multicast toggle Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 32/40] net/dpaa: support basic stats Shreyansh Jain
                           ` (9 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 48 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index f21a85f..cdf5e46 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -10,5 +10,6 @@ Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
+Unicast MAC filter   = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f6a2e22..f1d0f75 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -310,6 +310,50 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *addr,
+			     uint32_t index,
+			     __rte_unused uint32_t pool)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, index);
+
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Adding the MAC ADDR failed:"
+			" err = %d", ret);
+	return ret;
+}
+
+static void
+dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
+			  uint32_t index)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_clear_mac_addr(dpaa_intf->fif, index);
+}
+
+static void
+dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
+		       struct ether_addr *addr)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, 0);
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Setting the MAC ADDR failed %d", ret);
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -330,6 +374,10 @@ static struct eth_dev_ops dpaa_devops = {
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
+	.mac_addr_add		  = dpaa_dev_add_mac_addr,
+	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
+	.mac_addr_set		  = dpaa_dev_set_mac_addr,
+
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3
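
A minimal sketch (the locally administered address is made up) of driving
the new MAC address ops from an application:

#include <rte_ethdev.h>
#include <rte_ether.h>

static int set_port_mac(uint16_t port_id)
{
	struct ether_addr addr = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
	};

	/* dpaa_dev_set_mac_addr() programs FMan MAC filter index 0 */
	return rte_eth_dev_default_mac_addr_set(port_id, &addr);
}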

* [PATCH v5 32/40] net/dpaa: support basic stats
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (30 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 31/40] net/dpaa: support MAC address update Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 33/40] net/dpaa: support flow control Shreyansh Jain
                           ` (8 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index cdf5e46..c09efd8 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,5 +11,6 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f1d0f75..23a9efd 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -183,6 +183,24 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void dpaa_eth_stats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_stats *stats)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_get(dpaa_intf->fif, stats);
+}
+
+static void dpaa_eth_stats_reset(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_reset(dpaa_intf->fif);
+}
 
 static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
@@ -367,6 +385,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.stats_get		  = dpaa_eth_stats_get,
+	.stats_reset		  = dpaa_eth_stats_reset,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.allmulticast_enable	  = dpaa_eth_multicast_enable,
-- 
2.9.3
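
A minimal sketch (port_id illustrative) of reading and clearing the FMan
counters behind dpaa_eth_stats_get() and dpaa_eth_stats_reset():

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void show_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("rx=%" PRIu64 " tx=%" PRIu64 " rx_err=%" PRIu64 "\n",
		       stats.ipackets, stats.opackets, stats.ierrors);
	rte_eth_stats_reset(port_id);
}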

* [PATCH v5 33/40] net/dpaa: support flow control
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (31 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 32/40] net/dpaa: support basic stats Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 34/40] net/dpaa: support hashed RSS Shreyansh Jain
                           ` (7 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 112 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index c09efd8..1ba6b11 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Flow control         = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 23a9efd..ebceb8d 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -329,6 +329,85 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 }
 
 static int
+dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			DPAA_PMD_ERR("unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	net_fc = dpaa_intf->fc_conf;
+
+	if (fc_conf->high_water < fc_conf->low_water) {
+		DPAA_PMD_ERR("Incorrect Flow Control Configuration");
+		return -EINVAL;
+	}
+
+	if (fc_conf->mode == RTE_FC_NONE) {
+		return 0;
+	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_FC_FULL) {
+		fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water,
+					 fc_conf->low_water,
+				dpaa_intf->bp_info->bpid);
+		if (fc_conf->pause_time)
+			fman_if_set_fc_quanta(dpaa_intf->fif,
+					      fc_conf->pause_time);
+	}
+
+	/* Save the information in dpaa device */
+	net_fc->pause_time = fc_conf->pause_time;
+	net_fc->high_water = fc_conf->high_water;
+	net_fc->low_water = fc_conf->low_water;
+	net_fc->send_xon = fc_conf->send_xon;
+	net_fc->mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
+	net_fc->mode = fc_conf->mode;
+	net_fc->autoneg = fc_conf->autoneg;
+
+	return 0;
+}
+
+static int
+dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc = dpaa_intf->fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (net_fc) {
+		fc_conf->pause_time = net_fc->pause_time;
+		fc_conf->high_water = net_fc->high_water;
+		fc_conf->low_water = net_fc->low_water;
+		fc_conf->send_xon = net_fc->send_xon;
+		fc_conf->mac_ctrl_frame_fwd = net_fc->mac_ctrl_frame_fwd;
+		fc_conf->mode = net_fc->mode;
+		fc_conf->autoneg = net_fc->autoneg;
+		return 0;
+	}
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
+static int
 dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *addr,
 			     uint32_t index,
@@ -384,6 +463,9 @@ static struct eth_dev_ops dpaa_devops = {
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
+	.flow_ctrl_get		  = dpaa_flow_ctrl_get,
+	.flow_ctrl_set		  = dpaa_flow_ctrl_set,
+
 	.link_update		  = dpaa_eth_link_update,
 	.stats_get		  = dpaa_eth_stats_get,
 	.stats_reset		  = dpaa_eth_stats_reset,
@@ -400,6 +482,33 @@ static struct eth_dev_ops dpaa_devops = {
 
 };
 
+static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
+{
+	struct rte_eth_fc_conf *fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			DPAA_PMD_ERR("unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	fc_conf = dpaa_intf->fc_conf;
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
 /* Initialise an Rx FQ */
 static int dpaa_rx_queue_init(struct qman_fq *fq,
 			      uint32_t fqid)
@@ -553,6 +662,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 
 	DPAA_PMD_DEBUG("All frame queues created");
 
+	/* Get the initial configuration for flow control */
+	dpaa_fc_set_default(dpaa_intf);
+
 	/* reset bpool list, initialize bpool dynamically */
 	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
 		list_del(&bp->node);
-- 
2.9.3
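
A minimal sketch (thresholds and pause time are illustrative) of requesting
Tx pause frames; dpaa_flow_ctrl_set() turns the high/low watermarks into
FMan threshold and pause-quanta settings:

#include <rte_ethdev.h>

static int enable_tx_pause(uint16_t port_id)
{
	struct rte_eth_fc_conf fc;

	if (rte_eth_dev_flow_ctrl_get(port_id, &fc) < 0)
		return -1;
	fc.mode = RTE_FC_TX_PAUSE;
	fc.high_water = 1024;	/* must be >= low_water, see the check above */
	fc.low_water = 512;
	fc.pause_time = 100;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc);
}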

* [PATCH v5 34/40] net/dpaa: support hashed RSS
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (32 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 33/40] net/dpaa: support flow control Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 35/40] net/dpaa: support packet type parsing Shreyansh Jain
                           ` (6 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c |  1 +
 drivers/net/dpaa/dpaa_ethdev.h | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index ebceb8d..5860dfa 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -157,6 +157,7 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
 				ETH_LINK_SPEED_10G);
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 2f25acb..e1e062e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -88,6 +88,16 @@
 #define DPAA_DEBUG_FQ_RX_ERROR   0
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
+#define DPAA_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP)
+
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
 		PKT_TX_TCP_CKSUM |               \
-- 
2.9.3
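
A minimal sketch (hash types illustrative, but kept within
DPAA_RSS_OFFLOAD_ALL as advertised via flow_type_rss_offloads) of asking
for RSS distribution at configure time:

#include <rte_ethdev.h>

static const struct rte_eth_conf rss_port_conf = {
	.rxmode = { .mq_mode = ETH_MQ_RX_RSS, },
	.rx_adv_conf = {
		.rss_conf = {
			.rss_hf = ETH_RSS_NONFRAG_IPV4_TCP |
				  ETH_RSS_NONFRAG_IPV4_UDP,
		},
	},
};

/* later: rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &rss_port_conf); */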

* [PATCH v5 35/40] net/dpaa: support packet type parsing
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (33 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 34/40] net/dpaa: support hashed RSS Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 36/40] net/dpaa: support checksum offload Shreyansh Jain
                           ` (5 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Add support for parsing the packet type and L2/L3 checksum offload
capability information.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  27 +++++
 drivers/net/dpaa/dpaa_rxtx.c      | 116 +++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 206 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 351 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 1ba6b11..2ef1b56 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,7 +11,9 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+RSS hash             = Y
 Flow control         = Y
+Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5860dfa..cf2c0c7 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -112,6 +112,28 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	return 0;
 }
 
+static const uint32_t *
+dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		/* todo: add more types */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_SCTP
+	};
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->rx_pkt_burst == dpaa_eth_queue_rx)
+		return ptypes;
+	return NULL;
+}
+
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -160,6 +182,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
 				ETH_LINK_SPEED_10G);
+	dev_info->rx_offload_capa =
+		(DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM   |
+		DEV_RX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
@@ -458,6 +484,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
 	.dev_infos_get		  = dpaa_eth_dev_info,
+	.dev_supported_ptypes_get = dpaa_supported_ptypes_get,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index c4e67f5..f8ac711 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,121 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
+				     uint64_t prs __rte_unused)
+{
+	DPAA_DP_LOG(DEBUG, "Slow parsing");
+	/*TBD:XXX: to be implemented*/
+}
+
+static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
+					uint64_t fd_virt_addr)
+{
+	struct annotations_t *annot = GET_ANNOTATIONS(fd_virt_addr);
+	uint64_t prs = *((uint64_t *)(&annot->parse)) & DPAA_PARSE_MASK;
+
+	DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
+
+	switch (prs) {
+	case DPAA_PKT_TYPE_NONE:
+		m->packet_type = 0;
+		break;
+	case DPAA_PKT_TYPE_ETHER:
+		m->packet_type = RTE_PTYPE_L2_ETHER;
+		break;
+	case DPAA_PKT_TYPE_IPV4:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4;
+		break;
+	case DPAA_PKT_TYPE_IPV6:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6;
+		break;
+	case DPAA_PKT_TYPE_IPV4_FRAG:
+	case DPAA_PKT_TYPE_IPV4_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV6_FRAG:
+	case DPAA_PKT_TYPE_IPV6_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV4_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP;
+		break;
+	/* More switch cases can be added */
+	default:
+		dpaa_slow_parsing(m, prs);
+	}
+
+	m->tx_offload = annot->parse.ip_off[0];
+	m->tx_offload |= (annot->parse.l4_off - annot->parse.ip_off[0])
+					<< DPAA_PKT_L3_LEN_SHIFT;
+
+	/* Set the hash values */
+	m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
+	m->ol_flags = PKT_RX_RSS_HASH;
+	/* All packets with Bad checksum are dropped by interface (and
+	 * corresponding notification issued to RX error queues).
+	 */
+	m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	/* Check if Vlan is present */
+	if (prs & DPAA_PARSE_VLAN_MASK)
+		m->ol_flags |= PKT_RX_VLAN_PKT;
+	/* Packet received without stripping the vlan */
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -117,6 +232,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 	mbuf->ol_flags = 0;
 	mbuf->next = NULL;
 	rte_mbuf_refcnt_set(mbuf, 1);
+	dpaa_eth_packet_info(mbuf, (uint64_t)mbuf->buf_addr);
 
 	return mbuf;
 }
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 45bfae8..68d2c41 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -44,6 +44,7 @@
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/**< Maximum number of frames to be dequeued in a single Rx call */
+
 /* FD structure masks and offset */
 #define DPAA_FD_FORMAT_MASK 0xE0000000
 #define DPAA_FD_OFFSET_MASK 0x1FF00000
@@ -51,6 +52,211 @@
 #define DPAA_FD_FORMAT_SHIFT 29
 #define DPAA_FD_OFFSET_SHIFT 20
 
+/* Parsing mask (Little Endian) - 0x00E044ED00800000
+ *	Classification Plan ID 0x00
+ *	L4R 0xE0 -
+ *		0x20 - TCP
+ *		0x40 - UDP
+ *		0x80 - SCTP
+ *	L3R 0xEDC4 (in Big Endian) -
+ *		0x8000 - IPv4
+ *		0x4000 - IPv6
+ *		0x8140 - IPv4 Ext + Frag
+ *		0x8040 - IPv4 Frag
+ *		0x8100 - IPv4 Ext
+ *		0x4140 - IPv6 Ext + Frag
+ *		0x4040 - IPv6 Frag
+ *		0x4100 - IPv6 Ext
+ *	L2R 0x8000 (in Big Endian) -
+ *		0x8000 - Ethernet type
+ *	ShimR & Logical Port ID 0x0000
+ */
+#define DPAA_PARSE_MASK			0x00E044ED00800000
+#define DPAA_PARSE_VLAN_MASK		0x0000000000700000
+
+/* Parsed values (Little Endian) */
+#define DPAA_PKT_TYPE_NONE		0x0000000000000000
+#define DPAA_PKT_TYPE_ETHER		0x0000000000800000
+#define DPAA_PKT_TYPE_IPV4 \
+			(0x0000008000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV6 \
+			(0x0000004000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_GRE \
+			(0x0000002000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV4_FRAG	\
+			(0x0000400000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_FRAG	\
+			(0x0000400000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_EXT \
+			(0x0000000100000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_EXT \
+			(0x0000000100000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_SCTP	\
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_SCTP	\
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_FRAG_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_SCTP \
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_SCTP \
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_EXT_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_IPV4_EXT_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_TUNNEL_4_4 \
+			(0x0000000800000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6 \
+			(0x0000000400000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6 \
+			(0x0000000400000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_4 \
+			(0x0000000800000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_L3_LEN_SHIFT	7
+
+/**
+ * FMan parse result array
+ */
+struct dpaa_eth_parse_results_t {
+	 uint8_t     lpid;		 /**< Logical port id */
+	 uint8_t     shimr;		 /**< Shim header result  */
+	 union {
+		uint16_t              l2r;	/**< Layer 2 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      ethernet:1;
+			uint16_t      vlan:1;
+			uint16_t      llc_snap:1;
+			uint16_t      mpls:1;
+			uint16_t      ppoe_ppp:1;
+			uint16_t      unused_1:3;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      eth_frame_type:2;
+			uint16_t      l2r_err:5;
+			/*00-unicast, 01-multicast, 11-broadcast*/
+#else
+			uint16_t      l2r_err:5;
+			uint16_t      eth_frame_type:2;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      unused_1:3;
+			uint16_t      ppoe_ppp:1;
+			uint16_t      mpls:1;
+			uint16_t      llc_snap:1;
+			uint16_t      vlan:1;
+			uint16_t      ethernet:1;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint16_t              l3r;	/**< Layer 3 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      first_ipv4:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      gre:1;
+			uint16_t      min_enc:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      last_ipv6:1;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      last_ip_err_code:3;
+#else
+			uint16_t      last_ip_err_code:3;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      last_ipv6:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      min_enc:1;
+			uint16_t      gre:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      first_ipv4:1;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint8_t               l4r;	/**< Layer 4 result */
+		struct{
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint8_t	       l4_type:3;
+			uint8_t	       l4_info_err:1;
+			uint8_t	       l4_result:4;
+					/* if type IPSec: 1 ESP, 2 AH */
+#else
+			uint8_t        l4_result:4;
+					/* if type IPSec: 1 ESP, 2 AH */
+			uint8_t        l4_info_err:1;
+			uint8_t        l4_type:3;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 uint8_t     cplan;		 /**< Classification plan id */
+	 uint16_t    nxthdr;		 /**< Next Header  */
+	 uint16_t    cksum;		 /**< Checksum */
+	 uint32_t    lcv;		 /**< LCV */
+	 uint8_t     shim_off[3];	 /**< Shim offset */
+	 uint8_t     eth_off;		 /**< ETH offset */
+	 uint8_t     llc_snap_off;	 /**< LLC_SNAP offset */
+	 uint8_t     vlan_off[2];	 /**< VLAN offset */
+	 uint8_t     etype_off;		 /**< ETYPE offset */
+	 uint8_t     pppoe_off;		 /**< PPP offset */
+	 uint8_t     mpls_off[2];	 /**< MPLS offset */
+	 uint8_t     ip_off[2];		 /**< IP offset */
+	 uint8_t     gre_off;		 /**< GRE offset */
+	 uint8_t     l4_off;		 /**< Layer 4 offset */
+	 uint8_t     nxthdr_off;	 /**< Parser end point */
+} __attribute__ ((__packed__));
+
+/* The structure is the Prepended Data to the Frame which is used by FMAN */
+struct annotations_t {
+	uint8_t reserved[DEFAULT_RX_ICEOF];
+	struct dpaa_eth_parse_results_t parse;	/**< Pointer to Parsed result*/
+	uint64_t reserved1;
+	uint64_t hash;			/**< Hash Result */
+};
+
+#define GET_ANNOTATIONS(_buf) \
+	(struct annotations_t *)(_buf)
+
+#define GET_RX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
+
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
-- 
2.9.3
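
A minimal sketch (assuming 'm' came from rte_eth_rx_burst() on a DPAA port)
of consuming the classification that dpaa_eth_packet_info() writes into the
mbuf:

#include <stdio.h>
#include <rte_mbuf.h>

static void inspect(struct rte_mbuf *m)
{
	if ((m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4 &&
	    (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP &&
	    (m->ol_flags & PKT_RX_RSS_HASH))
		/* hash.rss was taken from the FMan annotation area */
		printf("IPv4/UDP frame, rss hash 0x%x\n", m->hash.rss);
}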

* [PATCH v5 36/40] net/dpaa: support checksum offload
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (34 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 35/40] net/dpaa: support packet type parsing Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 37/40] net/dpaa: support Scattered Rx Shreyansh Jain
                           ` (4 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  4 ++
 drivers/net/dpaa/dpaa_rxtx.c      | 89 +++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 23 +++++++++-
 4 files changed, 117 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 2ef1b56..23626c0 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -13,6 +13,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 RSS hash             = Y
 Flow control         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index cf2c0c7..4eb9b62 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -186,6 +186,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 		(DEV_RX_OFFLOAD_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_UDP_CKSUM   |
 		DEV_RX_OFFLOAD_TCP_CKSUM);
+	dev_info->tx_offload_capa =
+		(DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index f8ac711..976268b 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -200,6 +200,82 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
 	/* Packet received without stripping the vlan */
 }
 
+static inline void dpaa_checksum(struct rte_mbuf *mbuf)
+{
+	struct ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf, struct ether_hdr *);
+	char *l3_hdr = (char *)eth_hdr + mbuf->l2_len;
+	struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+	struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	DPAA_DP_LOG(DEBUG, "Calculating checksum for mbuf: %p", mbuf);
+
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	    ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	    RTE_PTYPE_L3_IPV4_EXT)) {
+		ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+		ipv4_hdr->hdr_checksum = 0;
+		ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
+	} else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6_EXT))
+		ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP) {
+		struct tcp_hdr *tcp_hdr = (struct tcp_hdr *)(l3_hdr +
+					  mbuf->l3_len);
+		tcp_hdr->cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			tcp_hdr->cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+							       tcp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			tcp_hdr->cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+							       tcp_hdr);
+	} else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) ==
+		   RTE_PTYPE_L4_UDP) {
+		struct udp_hdr *udp_hdr = (struct udp_hdr *)(l3_hdr +
+							     mbuf->l3_len);
+		udp_hdr->dgram_cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			udp_hdr->dgram_cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+								     udp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			udp_hdr->dgram_cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+								     udp_hdr);
+	}
+}
+
+static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
+					 struct qm_fd *fd, char *prs_buf)
+{
+	struct dpaa_eth_parse_results_t *prs;
+
+	DPAA_DP_LOG(DEBUG, " Offloading checksum for mbuf: %p", mbuf);
+
+	prs = GET_TX_PRS(prs_buf);
+	prs->l3r = 0;
+	prs->l4r = 0;
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	   RTE_PTYPE_L3_IPV4_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV4;
+	else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		 ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		RTE_PTYPE_L3_IPV6_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV6;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_TCP;
+	else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_UDP;
+
+	prs->ip_off[0] = mbuf->l2_len;
+	prs->l4_off = mbuf->l3_len + mbuf->l2_len;
+	/* Enable L3 (and L4, if TCP or UDP) HW checksum*/
+	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -358,6 +434,19 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 		}
 		rte_pktmbuf_free(mbuf);
 	}
+
+	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+		if (mbuf->data_off < (DEFAULT_TX_ICEOF +
+		    sizeof(struct dpaa_eth_parse_results_t))) {
+			DPAA_DP_LOG(DEBUG, "Checksum offload error: "
+				"not enough headroom for HW checksum "
+				"offload; calculating the checksum "
+				"in software instead.");
+			dpaa_checksum(mbuf);
+		} else {
+			dpaa_checksum_offload(mbuf, fd_arr, mbuf->buf_addr);
+		}
+	}
 }
 
 /* Handle all mbufs on dpaa BMAN managed pool */
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 68d2c41..d10298e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -41,6 +41,22 @@
 
 /* IC offsets from buffer header address */
 #define DEFAULT_RX_ICEOF	16
+#define DEFAULT_TX_ICEOF	16
+
+/*
+ * Values for the L3R field of the FM Parse Results
+ */
+/* L3 Type field: First IP Present IPv4 */
+#define DPAA_L3_PARSE_RESULT_IPV4 0x80
+/* L3 Type field: First IP Present IPv6 */
+#define DPAA_L3_PARSE_RESULT_IPV6	0x40
+/* Values for the L4R field of the FM Parse Results
+ * See §8.8.4.7.20 - L4 HXS - L4 Results in the DPAA-Rev2 Reference Manual.
+ */
+/* L4 Type field: UDP */
+#define DPAA_L4_PARSE_RESULT_UDP	0x40
+/* L4 Type field: TCP */
+#define DPAA_L4_PARSE_RESULT_TCP	0x20
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/**< Maximum number of frames to be dequeued in a single Rx call */
@@ -255,7 +271,12 @@ struct annotations_t {
 	(struct annotations_t *)(_buf)
 
 #define GET_RX_PRS(_buf) \
-	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)(_buf) + \
+	DEFAULT_RX_ICEOF)
+
+#define GET_TX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)(_buf) + \
+	DEFAULT_TX_ICEOF)
 
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
-- 
2.9.3
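
A minimal sketch (header sizes assume an untagged IPv4/TCP frame;
illustrative) of requesting the offload from an application: the flags are
drawn from DPAA_TX_CKSUM_OFFLOAD_MASK, and l2_len/l3_len feed
prs->ip_off[0] and prs->l4_off in dpaa_checksum_offload():

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

static void request_tx_cksum(struct rte_mbuf *m)
{
	m->l2_len = sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv4_hdr);
	m->ol_flags |= PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
	/* then hand 'm' to rte_eth_tx_burst() as usual */
}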

* [PATCH v5 37/40] net/dpaa: support Scattered Rx
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (35 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 36/40] net/dpaa: support checksum offload Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 38/40] net/dpaa: add packet dump for debugging Shreyansh Jain
                           ` (3 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_rxtx.c      | 159 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      |   9 +++
 3 files changed, 169 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 23626c0..0e7956c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -8,6 +8,7 @@ Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
+Scattered Rx         = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 976268b..9c25d8c 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -276,18 +276,82 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
 	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
 }
 
+struct rte_mbuf *
+dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
+{
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+	struct qm_sg_entry *sgt, *sg_temp;
+	void *vaddr, *sg_vaddr;
+	int i = 0;
+	uint8_t fd_offset = fd->offset;
+
+	DPAA_DP_LOG(DEBUG, "Received an SG frame");
+
+	vaddr = rte_dpaa_mem_ptov(qm_fd_addr(fd));
+	if (!vaddr) {
+		DPAA_PMD_ERR("unable to convert physical address");
+		return NULL;
+	}
+	sgt = vaddr + fd_offset;
+	sg_temp = &sgt[i++];
+	hw_sg_to_cpu(sg_temp);
+	temp = (struct rte_mbuf *)((char *)vaddr - bp_info->meta_data_size);
+	sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+
+	first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						bp_info->meta_data_size);
+	first_seg->data_off = sg_temp->offset;
+	first_seg->data_len = sg_temp->length;
+	first_seg->pkt_len = sg_temp->length;
+	rte_mbuf_refcnt_set(first_seg, 1);
+
+	first_seg->port = ifid;
+	first_seg->nb_segs = 1;
+	first_seg->ol_flags = 0;
+	prev_seg = first_seg;
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		hw_sg_to_cpu(sg_temp);
+		sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+		cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						      bp_info->meta_data_size);
+		cur_seg->data_off = sg_temp->offset;
+		cur_seg->data_len = sg_temp->length;
+		first_seg->pkt_len += sg_temp->length;
+		first_seg->nb_segs += 1;
+		rte_mbuf_refcnt_set(cur_seg, 1);
+		prev_seg->next = cur_seg;
+		if (sg_temp->final) {
+			cur_seg->next = NULL;
+			break;
+		}
+		prev_seg = cur_seg;
+	}
+
+	dpaa_eth_packet_info(first_seg, (uint64_t)vaddr);
+	rte_pktmbuf_free_seg(temp);
+
+	return first_seg;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
 	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
 	struct rte_mbuf *mbuf;
 	void *ptr;
+	uint8_t format =
+		(fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
 	uint16_t offset =
 		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
 	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
 
 	DPAA_DP_LOG(DEBUG, " FD--->MBUF");
 
+	if (unlikely(format == qm_fd_sg))
+		return dpaa_eth_sg_to_mbuf(fd, ifid);
+
 	/* Ignoring case when format != qm_fd_contig */
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible incase
@@ -390,6 +454,95 @@ static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
 	return dpaa_mbuf;
 }
 
+int
+dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+		struct qm_fd *fd,
+		uint32_t bpid)
+{
+	struct rte_mbuf *cur_seg = mbuf, *prev_seg = NULL;
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(bpid);
+	struct rte_mbuf *temp, *mi;
+	struct qm_sg_entry *sg_temp, *sgt;
+	int i = 0;
+
+	DPAA_DP_LOG(DEBUG, "Creating SG FD to transmit");
+
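+	/* allocate a buffer from the pool to hold the SG table itself */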
+	temp = rte_pktmbuf_alloc(bp_info->mp);
+	if (!temp) {
+		DPAA_PMD_ERR("Failure in allocation of mbuf");
+		return -1;
+	}
+	if (temp->buf_len < ((mbuf->nb_segs * sizeof(struct qm_sg_entry))
+				+ temp->data_off)) {
+		DPAA_PMD_ERR("Insufficient space in mbuf for SG entries");
+		rte_pktmbuf_free(temp);
+		return -1;
+	}
+
+	fd->cmd = 0;
+	fd->opaque_addr = 0;
+
+	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+		if (temp->data_off < DEFAULT_TX_ICEOF
+			+ sizeof(struct dpaa_eth_parse_results_t))
+			temp->data_off = DEFAULT_TX_ICEOF
+				+ sizeof(struct dpaa_eth_parse_results_t);
+		dcbz_64(temp->buf_addr);
+		dpaa_checksum_offload(mbuf, fd, temp->buf_addr);
+	}
+
+	sgt = temp->buf_addr + temp->data_off;
+	fd->format = QM_FD_SG;
+	fd->addr = temp->buf_physaddr;
+	fd->offset = temp->data_off;
+	fd->bpid = bpid;
+	fd->length20 = mbuf->pkt_len;
+
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		sg_temp->opaque = 0;
+		sg_temp->val = 0;
+		sg_temp->addr = cur_seg->buf_physaddr;
+		sg_temp->offset = cur_seg->data_off;
+		sg_temp->length = cur_seg->data_len;
+		if (RTE_MBUF_DIRECT(cur_seg)) {
+			if (rte_mbuf_refcnt_read(cur_seg) > 1) {
+				/*If refcnt > 1, invalid bpid is set to ensure
+				 * buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+				rte_mbuf_refcnt_update(cur_seg, -1);
+			} else {
+				sg_temp->bpid =
+					DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
+			}
+			cur_seg = cur_seg->next;
+		} else {
+			/* Get owner MBUF from indirect buffer */
+			mi = rte_mbuf_from_indirect(cur_seg);
+			if (rte_mbuf_refcnt_read(mi) > 1) {
+				/*If refcnt > 1, invalid bpid is set to ensure
+				 * owner buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+			} else {
+				sg_temp->bpid = DPAA_MEMPOOL_TO_BPID(mi->pool);
+				rte_mbuf_refcnt_update(mi, 1);
+			}
+			prev_seg = cur_seg;
+			cur_seg = cur_seg->next;
+			prev_seg->next = NULL;
+			rte_pktmbuf_free(prev_seg);
+		}
+		if (cur_seg == NULL) {
+			sg_temp->final = 1;
+			cpu_to_hw_sg(sg_temp);
+			break;
+		}
+		cpu_to_hw_sg(sg_temp);
+	}
+	return 0;
+}
+
 /* Handle mbufs which are not segmented (non SG) */
 static inline void
 tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
@@ -460,6 +613,12 @@ tx_on_dpaa_pool(struct rte_mbuf *mbuf,
 	if (mbuf->nb_segs == 1) {
 		/* Case for non-segmented buffers */
 		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr);
+	} else if (mbuf->nb_segs > 1 &&
+		   mbuf->nb_segs <= DPAA_SGT_MAX_ENTRIES) {
+		if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr, bp_info->bpid)) {
+			DPAA_PMD_DEBUG("Unable to create Scatter Gather FD");
+			return 1;
+		}
 	} else {
 		DPAA_PMD_DEBUG("Number of Segments not supported");
 		return 1;
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index d10298e..2ffc4ff 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -58,6 +58,8 @@
 /* L4 Type field: TCP */
 #define DPAA_L4_PARSE_RESULT_TCP	0x20
 
+#define DPAA_SGT_MAX_ENTRIES 16 /* maximum number of entries in SG Table */
+
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/** <Maximum number of frames to be dequeued in a single rx call*/
 
@@ -285,4 +287,11 @@ uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
 			      struct rte_mbuf **bufs __rte_unused,
 			      uint16_t nb_bufs __rte_unused);
+
+struct rte_mbuf *dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid);
+
+int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+			   struct qm_fd *fd,
+			   uint32_t bpid);
+
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 38/40] net/dpaa: add packet dump for debugging
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (36 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 37/40] net/dpaa: support Scattered Rx Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 39/40] net/dpaa: support firmware version get API Shreyansh Jain
                           ` (2 subsequent siblings)
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.c   | 26 ++++++++++++++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4eb9b62..ad9fea8 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -618,6 +618,39 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
 	return ret;
 }
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
+static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		DPAA_PMD_ERR("Reserve debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return -EINVAL;
+	}
+	/* "map" this Rx FQ to one of the interfaces Tx FQID */
+	DPAA_PMD_DEBUG("Creating debug fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+	opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init debug fqid %d failed with ret: %d",
+			    fqid, ret);
+	return ret;
+}
+#endif
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -692,6 +725,15 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	}
 	dpaa_intf->nb_tx_queues = num_cores;
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+#endif
+
 	DPAA_PMD_DEBUG("All frame queues created");
 
 	/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 9c25d8c..d73f9cb 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,31 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+#if (defined RTE_LIBRTE_DPAA_DEBUG_DRIVER)
+void dpaa_display_frame(const struct qm_fd *fd)
+{
+	int ii;
+	char *ptr;
+
+	printf("%s::bpid %x addr %08x%08x, format %d off %d, len %d stat %x\n",
+	       __func__, fd->bpid, fd->addr_hi, fd->addr_lo, fd->format,
+		fd->offset, fd->length20, fd->status);
+
+	ptr = (char *)rte_dpaa_mem_ptov(fd->addr);
+	ptr += fd->offset;
+	printf("%02x ", *ptr);
+	for (ii = 1; ii < fd->length20; ii++) {
+		ptr++;
+		printf("%02x ", *ptr);
+		if (((ii + 1) % 16) == 0)
+			printf("\n");
+	}
+	printf("\n");
+}
+#else
+#define dpaa_display_frame(a)
+#endif
+
 static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
 				     uint64_t prs __rte_unused)
 {
@@ -353,6 +378,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 		return dpaa_eth_sg_to_mbuf(fd, ifid);
 
 	/* Ignoring case when format != qm_fd_contig */
+	dpaa_display_frame(fd);
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible incase
 	 * of a corrupted packet
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 39/40] net/dpaa: support firmware version get API
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (37 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 38/40] net/dpaa: add packet dump for debugging Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 11:33         ` [PATCH v5 40/40] net/dpaa: support extended statistics Shreyansh Jain
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 36 ++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h    |  5 +++++
 3 files changed, 42 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 0e7956c..09b9bd9 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -18,5 +18,6 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+FW version           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index ad9fea8..0002324 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -164,6 +164,41 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int
+dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
+		     char *fw_version,
+		     size_t fw_size)
+{
+	int ret;
+	FILE *svr_file = NULL;
+	unsigned int svr_ver = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	svr_file = fopen(DPAA_SOC_ID_FILE, "r");
+	if (!svr_file) {
+		DPAA_PMD_ERR("Unable to open SoC device");
+		return -ENOTSUP; /* Not supported on this infra */
+	}
+
+	ret = fscanf(svr_file, "svr:%x", &svr_ver);
+	fclose(svr_file);
+	if (ret <= 0) {
+		DPAA_PMD_ERR("Unable to read SoC device");
+		return -ENOTSUP; /* Not supported on this infra */
+	}
+
+	ret = snprintf(fw_version, fw_size,
+		       "svr:%x-fman-v%x",
+		       svr_ver,
+		       fman_ip_rev);
+
+	ret += 1; /* add the size of '\0' */
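+	/* ethdev contract: return the bytes needed if fw_size is too small */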
+	if (fw_size < (uint32_t)ret)
+		return ret;
+	else
+		return 0;
+}
+
 static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 			      struct rte_eth_dev_info *dev_info)
 {
@@ -512,6 +547,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
 	.mac_addr_set		  = dpaa_dev_set_mac_addr,
 
+	.fw_version_get		  = dpaa_fw_version_get,
 };
 
 static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index e1e062e..a980262 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -43,6 +43,11 @@
 #include <of.h>
 #include <netcfg.h>
 
+/* DPAA SoC identifier; if this is not available, it can be concluded
+ * that the board is non-DPAA. A single slot is currently supported.
+ */
+#define DPAA_SOC_ID_FILE		"/sys/devices/soc0/soc_id"
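+/* Contents are parsed as "svr:%x"; the SVR value itself is SoC-specific */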
+
 #define DPAA_MBUF_HW_ANNOTATION		64
 #define DPAA_FD_PTA_SIZE		64
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v5 40/40] net/dpaa: support extended statistics
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (38 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 39/40] net/dpaa: support firmware version get API Shreyansh Jain
@ 2017-09-28 11:33         ` Shreyansh Jain
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  40 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:33 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 143 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h    |  40 +++++++++++
 3 files changed, 184 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 09b9bd9..24cfd85 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -18,6 +18,7 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+Extended stats       = Y
 FW version           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 0002324..db4921f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -75,6 +75,40 @@
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
 
+struct rte_dpaa_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	uint32_t offset;
+};
+
+static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
+	{"rx_align_err",
+		offsetof(struct dpaa_if_stats, raln)},
+	{"rx_valid_pause",
+		offsetof(struct dpaa_if_stats, rxpf)},
+	{"rx_fcs_err",
+		offsetof(struct dpaa_if_stats, rfcs)},
+	{"rx_vlan_frame",
+		offsetof(struct dpaa_if_stats, rvlan)},
+	{"rx_frame_err",
+		offsetof(struct dpaa_if_stats, rerr)},
+	{"rx_drop_err",
+		offsetof(struct dpaa_if_stats, rdrp)},
+	{"rx_undersized",
+		offsetof(struct dpaa_if_stats, rund)},
+	{"rx_oversize_err",
+		offsetof(struct dpaa_if_stats, rovr)},
+	{"rx_fragment_pkt",
+		offsetof(struct dpaa_if_stats, rfrg)},
+	{"tx_valid_pause",
+		offsetof(struct dpaa_if_stats, txpf)},
+	{"tx_frame_err",
+		offsetof(struct dpaa_if_stats, terr)},
+	{"tx_vlan_frame",
+		offsetof(struct dpaa_if_stats, tvlan)},
+	{"tx_undersized",
+		offsetof(struct dpaa_if_stats, tund)},
+};
+
 static int
 dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
@@ -268,6 +302,110 @@ static void dpaa_eth_stats_reset(struct rte_eth_dev *dev)
 	fman_if_stats_reset(dpaa_intf->fif);
 }
 
+static int
+dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+		    unsigned int n)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
+	uint64_t values[sizeof(struct dpaa_if_stats) / 8];
+
+	if (n < num)
+		return num;
+
+	if (xstats == NULL)
+		return 0;
+
+	fman_if_stats_get_all(dpaa_intf->fif, values,
+			      sizeof(struct dpaa_if_stats) / 8);
+
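+	/* each counter is a 64-bit word; offset/8 maps a field to its index */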
+	for (i = 0; i < num; i++) {
+		xstats[i].id = i;
+		xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
+	}
+	return i;
+}
+
+static int
+dpaa_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+		      struct rte_eth_xstat_name *xstats_names,
+		      __rte_unused unsigned int limit)
+{
+	unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+
+	if (xstats_names != NULL)
+		for (i = 0; i < stat_cnt; i++)
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s",
+				 dpaa_xstats_strings[i].name);
+
+	return stat_cnt;
+}
+
+static int
+dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
+		      uint64_t *values, unsigned int n)
+{
+	unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+	uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
+
+	if (!ids) {
+		struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+		if (n < stat_cnt)
+			return stat_cnt;
+
+		if (!values)
+			return 0;
+
+		fman_if_stats_get_all(dpaa_intf->fif, values_copy,
+				      sizeof(struct dpaa_if_stats) / 8);
+
+		for (i = 0; i < stat_cnt; i++)
+			values[i] =
+				values_copy[dpaa_xstats_strings[i].offset / 8];
+
+		return stat_cnt;
+	}
+
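+	/* ids given: fetch the full counter set once, then pick from it */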
+	dpaa_xstats_get_by_id(dev, NULL, values_copy, stat_cnt);
+
+	for (i = 0; i < n; i++) {
+		if (ids[i] >= stat_cnt) {
+			DPAA_PMD_ERR("id value isn't valid");
+			return -1;
+		}
+		values[i] = values_copy[ids[i]];
+	}
+	return n;
+}
+
+static int
+dpaa_xstats_get_names_by_id(
+	struct rte_eth_dev *dev,
+	struct rte_eth_xstat_name *xstats_names,
+	const uint64_t *ids,
+	unsigned int limit)
+{
+	unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+	struct rte_eth_xstat_name xstats_names_copy[stat_cnt];
+
+	if (!ids)
+		return dpaa_xstats_get_names(dev, xstats_names, limit);
+
+	dpaa_xstats_get_names(dev, xstats_names_copy, limit);
+
+	for (i = 0; i < limit; i++) {
+		if (ids[i] >= stat_cnt) {
+			DPAA_PMD_ERR("id value isn't valid");
+			return -1;
+		}
+		strcpy(xstats_names[i].name, xstats_names_copy[ids[i]].name);
+	}
+	return limit;
+}
+
 static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -535,6 +673,11 @@ static struct eth_dev_ops dpaa_devops = {
 
 	.link_update		  = dpaa_eth_link_update,
 	.stats_get		  = dpaa_eth_stats_get,
+	.xstats_get		  = dpaa_dev_xstats_get,
+	.xstats_get_by_id	  = dpaa_xstats_get_by_id,
+	.xstats_get_names_by_id	  = dpaa_xstats_get_names_by_id,
+	.xstats_get_names	  = dpaa_xstats_get_names,
+	.xstats_reset		  = dpaa_eth_stats_reset,
 	.stats_reset		  = dpaa_eth_stats_reset,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index a980262..5457d61 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -139,4 +139,44 @@ struct dpaa_if {
 	struct rte_eth_fc_conf *fc_conf;
 };
 
+struct dpaa_if_stats {
+	/* Rx Statistics Counter */
+	uint64_t reoct;		/**<Rx Eth Octets Counter */
+	uint64_t roct;		/**<Rx Octet Counters */
+	uint64_t raln;		/**<Rx Alignment Error Counter */
+	uint64_t rxpf;		/**<Rx valid Pause Frame */
+	uint64_t rfrm;		/**<Rx Frame counter */
+	uint64_t rfcs;		/**<Rx frame check seq error */
+	uint64_t rvlan;		/**<Rx Vlan Frame Counter */
+	uint64_t rerr;		/**<Rx Frame error */
+	uint64_t ruca;		/**<Rx Unicast */
+	uint64_t rmca;		/**<Rx Multicast */
+	uint64_t rbca;		/**<Rx Broadcast */
+	uint64_t rdrp;		/**<Rx Dropped Packet */
+	uint64_t rpkt;		/**<Rx packet */
+	uint64_t rund;		/**<Rx undersized packets */
+	uint32_t res_x[14];
+	uint64_t rovr;		/**<Rx oversized but good */
+	uint64_t rjbr;		/**<Rx oversized with bad csum */
+	uint64_t rfrg;		/**<Rx fragment Packet */
+	uint64_t rcnp;		/**<Rx control packets (0x8808) */
+	uint64_t rdrntp;	/**<Rx dropped due to FIFO overflow */
+	uint32_t res01d0[12];
+	/* Tx Statistics Counter */
+	uint64_t teoct;		/**<Tx eth octets */
+	uint64_t toct;		/**<Tx Octets */
+	uint32_t res0210[2];
+	uint64_t txpf;		/**<Tx valid pause frame */
+	uint64_t tfrm;		/**<Tx frame counter */
+	uint64_t tfcs;		/**<Tx FCS error */
+	uint64_t tvlan;		/**<Tx Vlan Frame */
+	uint64_t terr;		/**<Tx frame error */
+	uint64_t tuca;		/**<Tx Unicast */
+	uint64_t tmca;		/**<Tx Multicast */
+	uint64_t tbca;		/**<Tx Broadcast */
+	uint32_t res0258[2];
+	uint64_t tpkt;		/**<Tx Packet */
+	uint64_t tund;		/**<Tx Undersized */
+};
+
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 11/41] bus/dpaa: add QMan driver core routines
  2017-09-19 14:18           ` Shreyansh Jain
@ 2017-09-28 11:45             ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:45 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Tuesday 19 September 2017 07:48 PM, Shreyansh Jain wrote:
> On Monday 18 September 2017 08:23 PM, Ferruh Yigit wrote:
>> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>>> Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>
>> <...>
>>> +#ifdef RTE_LIBRTE_DPAA_CHECKING
>>
>> This is not defined anywhere, it looks this will come from config file
>> in further patches, config file update can be moved to this patch.
> 
> It's more of a debugging macro and it was introduced in later patches.
> Not that I see any reason why it can't be introduced here. I will fix this.
> 
>>
>>> +    eqcr->busy = 0;
>>> +    eqcr->pmode = pmode;
>>> +#endif
>>

In v5, taking a cue from your and Thomas' comments, I have removed a
few debugging macros and combined a couple of others. I have also
changed this to "HWDEBUG" and documented it. This macro is necessary
for enabling some internal debugging configurations (like error queues).
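
For reference, after the rename the guard looks roughly like this (a
sketch; the option name follows the "HWDEBUG" naming above and defaults
to 'n' in the DPAA defconfig):

	/* consistency-checking state, compiled in only when
	 * CONFIG_RTE_LIBRTE_DPAA_HWDEBUG=y is set in the config
	 */
	#ifdef RTE_LIBRTE_DPAA_HWDEBUG
		eqcr->busy = 0;
		eqcr->pmode = pmode;
	#endif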

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 13/41] bus/dpaa: add support for FMAN frame queue lookup
  2017-09-18 14:51         ` Ferruh Yigit
@ 2017-09-28 11:47           ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:47 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Monday 18 September 2017 08:21 PM, Ferruh Yigit wrote:
> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>> Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
>> Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> 
> <...>
> 
>>   
>> +#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
>> +#error "_ARM64 requires _FSL_QMAN_FQ_LOOKUP"
>> +#endif
> 
> This PMD is enabled with the newly added config
> "defconfig_arm64-armv8a-linuxapp-gcc", which is 64-bit. So this means
> CONFIG_FSL_QMAN_FQ_LOOKUP is always defined for the bus.
> 
> Does it make sense to keep the above check, but for the rest of the code
> assume CONFIG_FSL_QMAN_FQ_LOOKUP is always set and remove the #ifdefs,
> to simplify the code?
> 
> <...>
> 

I have removed these lines in v5. They were indeed unnecessary.
Thanks for highlighting this.

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 17/41] doc: add NXP DPAA PMD documentation
  2017-09-19 14:25           ` Shreyansh Jain
@ 2017-09-28 11:49             ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:49 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Tuesday 19 September 2017 07:55 PM, Shreyansh Jain wrote:
> On Monday 18 September 2017 08:23 PM, Ferruh Yigit wrote:
>> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>

[...]

>>> +  This is not a DPAA specific configuration - it is a generic RTE config.
>>> +  For optimal performance and hardware utilization, it is expected that
>>> +  the DPAA Mempool driver is used for mempools. For that, this
>>> +  configuration needs to be enabled.
>>> +
>>> +Environment Variables
>>> +~~~~~~~~~~~~~~~~~~~~~
>>> +
>>> +DPAA drivers use the following environment variables to configure their
>>> +state during application initialization:
>>> +
>>> +- ``DPAA_NUM_RX_QUEUES`` (default 1)
>>
>> Why not get this value as a device arg?
> 
> We had this discussion during DPAA2 as well. This time, I was not sure
> how the device argument patches would turn out after the re-shuffle
> being done by Gaetan. So, I kept this as it is.

In v5, I have continued with the old way.
I am still reviewing the devargs patches from Gaetan - once that is
done, probably after 17.11, I will push another patch to remove this
environment variable.

In fact, in an internal code change, we have removed the need to rely
on this, but that is slightly ahead in the future.
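
For completeness, usage is just an environment variable exported before
launching the application (illustrative values; the maximum queue count
is platform-specific):

	$ export DPAA_NUM_RX_QUEUES=4
	$ ./testpmd -c 0x3 -n 1 -- -i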

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup
  2017-09-21 12:59           ` Shreyansh Jain
@ 2017-09-28 11:51             ` Shreyansh Jain
  0 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 11:51 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, hemant.agrawal

On Thursday 21 September 2017 06:29 PM, Shreyansh Jain wrote:
> Hello Ferruh,
> 
> Apologies for the delay in responding to these; I am already working on
> the next version based on your comments. Meanwhile, some comments inline...
> 
> On Monday 18 September 2017 08:25 PM, Ferruh Yigit wrote:
>> On 9/9/2017 12:21 PM, Shreyansh Jain wrote:
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>
>> <...>

[...]

>>
>>> +    }
>>> +
>>> +    /* Populate ethdev structure */
>>>       eth_dev->dev_ops = &dpaa_devops;
>>> +    eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
>>> +    eth_dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
>>> +
>>> +    /* Allocate memory for storing MAC addresses */
>>> +    eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
>>> +        ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
>>> +    if (eth_dev->data->mac_addrs == NULL) {
>>> +        DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
>>> +                        "store MAC addresses",
>>> +                ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
>>
>> free dpaa_intf->rx_queues, tx_queues ?
> 
> yes, certainly an issue. I will fix it.

I have fixed this in v5.

> 
>>
>>> +        return -ENOMEM;
>>> +    }
>>> +
>>> +    /* copy the primary mac address */
>>> +    memcpy(eth_dev->data->mac_addrs[0].addr_bytes,
>>> +        fman_intf->mac_addr.addr_bytes,
>>> +        ETHER_ADDR_LEN);
>>
>> Instead can use ether_addr_copy() instead.
> 
> :) Yes, I can.

Unfortunately, I forgot to fix this in v5.
If you want, I can send a small patch against this. Sending a v6 because
of this alone would be overkill. But this is definitely a valid comment.
Sorry.
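
For the record, the intended one-line replacement is the helper from
rte_ether.h, with the variables from the quoted code:

	/* copy the primary mac address */
	ether_addr_copy(&fman_intf->mac_addr, &eth_dev->data->mac_addrs[0]);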

> 
>>
>> <...>
>>
> 
> 

^ permalink raw reply	[flat|nested] 367+ messages in thread

* [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
                           ` (39 preceding siblings ...)
  2017-09-28 11:33         ` [PATCH v5 40/40] net/dpaa: support extended statistics Shreyansh Jain
@ 2017-09-28 12:29         ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
                             ` (41 more replies)
  40 siblings, 42 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Change Log:
===========

v6:
 - rebased over net-next/master (9d660ac) 
 - fixed mk/rte.app.mk (Thomas's comment). It had incorrect
   style of adding library linking
 - changed from manual memcpy of etheraddr to ether_addr_copy
   as suggested by Ferruh
 (these were minor changes missed in v5)

v5:
 - rebased over net-next/master (9d660ac)	
 - restructured debugging macros. Removed a few and combined
   others. DPAA now reflects the dynamic logging with segregated
   DP logging
 - updated documentation for missing configuration option
 - fixed map file; shared build was broken earlier
 - other minor fixes from review comments

v4:
 - Some checkpatch fixes which were reported by checkpatch@dpdk
 - adding extra stats feature patch (patch 41)

v3:
 - Rebasing over 17.11-rc0 (85238f50)
 - Checkpatch fixes
   (There are still 2 errors which I think are false positives)
 - Implement rte_bus.find_device() interface
 - Various other minor updates/cleanups

v2:
 - Fixing various comments from Ferruh, but broadly:
  -) Logging is been changed to reflect rte_log_register
  -) Logs across Bus, Mempool and PMD updated
  -) fixed incorrect feature claimed in dpaa.ini
 - Removed 24/40/48 bit swapping macro from EAL.
   These are defined in dpaa/bus now (compat.h)
 - Added missing memory cleanup operation
 - Updated documentation with some missing information

Introduction
============

RFC was posted here -> [R3]
V5 was posted here  -> [R8]

This patch series adds NXP's QorIQ-Layerscape DPAA Architecture based
bus driver, mempool driver and PMD. This version of driver supports NXP
LS1043A/LS1023A, LS1046A/LS1026A family of network SoCs. [R1]

DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
components designed for high-speed network packet processing. This
architecture provides the infrastructure to support simplified sharing of
networking interfaces and accelerators by multiple CPU cores, and the
accelerators themselves.

This patchset introduces the following:
1. DPAA Bus (drivers/bus/dpaa)
 The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
 or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
 The patches introduce necessary layers to expose the DPAA hardware
 blocks for interfacing with RTE framework.

2. DPAA Mempool (drivers/mempool/dpaa)
 BMan, or Buffer Manager, block of DPAA features a hardware offloaded
 mempool. These patches add support for a driver to manage the BMan
 block. This driver allows for mempool creation, deletion, buffer
 acquire and release, as per the RTE APIs.

3. DPAA PMD (drivers/net/dpaa)
 The Poll Mode Driver for DPAA NIC Interfaces.
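
To make (2) concrete: an application does not drive BMan directly; it
creates pools through the standard mempool API and the DPAA handler is
used underneath (a minimal sketch with illustrative sizes, using
standard DPDK calls):

	#include <rte_mbuf.h>
	#include <rte_lcore.h>

	/* backed by a BMan buffer pool when the DPAA mempool driver is used */
	struct rte_mempool *mp = rte_pktmbuf_pool_create("pktmbuf_pool",
			2048, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
			rte_socket_id());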

Patch Layout
============

01: Add DPAA SoC build configuration
02~16: Add DPAA Bus support and features, incrementally
17: Add Documentation
18~21: Add DPAA Mempool support
22~40: Add PMD and its various features, incrementally

References
==========

[R1] http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-layerscape-arm-processors:QORIQ-ARM
[R2] http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf
[R3] RFC: http://dpdk.org/ml/archives/dev/2017-May/066675.html
[R4] v1: http://dpdk.org/ml/archives/dev/2017-June/068020.html
[R5] v2: http://dpdk.org/ml/archives/dev/2017-July/070113.html
[R6] v3: http://dpdk.org/ml/archives/dev/2017-August/073269.html
[R7] v4: http://dpdk.org/ml/archives/dev/2017-September/074936.html
[R8] v5: http://dpdk.org/dev/patchwork/patch/29245/

Hemant Agrawal (3):
  bus/dpaa: add compatibility and helper macros
  net/dpaa: support firmware version get API
  net/dpaa: support extended statistics

Shreyansh Jain (37):
  config: add NXP DPAA SoC build configuration
  bus/dpaa: introduce NXP DPAA Bus driver skeleton
  bus/dpaa: add OF parser for device scanning
  bus/dpaa: introducing FMan configurations
  bus/dpaa: add FMan hardware operations
  bus/dpaa: enable DPAA IOCTL portal driver
  bus/dpaa: add layer for interrupt emulation using pthread
  bus/dpaa: add routines for managing a RB tree
  bus/dpaa: add QMAN interface driver
  bus/dpaa: add QMan driver core routines
  bus/dpaa: add BMAN driver core
  bus/dpaa: support FMAN frame queue lookup
  bus/dpaa: add BMan hardware interfaces
  bus/dpaa: add fman flow control threshold setting
  bus/dpaa: integrate DPAA Bus with hardware blocks
  doc: add NXP DPAA PMD documentation
  bus/dpaa: add DPAA mempool logging macros
  mempool/dpaa: support NXP DPAA Mempool
  config: enable compilation of DPAA Mempool driver
  bus/dpaa: add DPAA PMD logging macros
  net/dpaa: add NXP DPAA PMD driver skeleton
  config: enable NXP DPAA PMD compilation
  net/dpaa: support Tx and Rx queue setup
  net/dpaa: support MTU update
  net/dpaa: support jumbo frames
  net/dpaa: support link status update
  net/dpaa: support device info and speed capability
  net/dpaa: support promiscuous toggle
  net/dpaa: support multicast toggle
  net/dpaa: support MAC address update
  net/dpaa: support basic stats
  net/dpaa: support flow control
  net/dpaa: support hashed RSS
  net/dpaa: support packet type parsing
  net/dpaa: support checksum offload
  net/dpaa: support Scattered Rx
  net/dpaa: add packet dump for debugging

 MAINTAINERS                                       |    9 +
 config/common_base                                |    5 +
 config/defconfig_arm64-dpaa-linuxapp-gcc          |   59 +
 doc/guides/nics/dpaa.rst                          |  377 ++++
 doc/guides/nics/features/dpaa.ini                 |   24 +
 doc/guides/nics/index.rst                         |    1 +
 drivers/bus/Makefile                              |    3 +
 drivers/bus/dpaa/Makefile                         |   76 +
 drivers/bus/dpaa/base/fman/fman.c                 |  611 +++++
 drivers/bus/dpaa/base/fman/fman_hw.c              |  590 +++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c         |  214 ++
 drivers/bus/dpaa/base/fman/of.c                   |  576 +++++
 drivers/bus/dpaa/base/qbman/bman.c                |  394 ++++
 drivers/bus/dpaa/base/qbman/bman.h                |  550 +++++
 drivers/bus/dpaa/base/qbman/bman_driver.c         |  323 +++
 drivers/bus/dpaa/base/qbman/bman_priv.h           |  125 ++
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c          |  104 +
 drivers/bus/dpaa/base/qbman/dpaa_sys.c            |  136 ++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h            |   61 +
 drivers/bus/dpaa/base/qbman/process.c             |  331 +++
 drivers/bus/dpaa/base/qbman/qman.c                | 2497 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h                |  888 ++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c         |  288 +++
 drivers/bus/dpaa/base/qbman/qman_priv.h           |  310 +++
 drivers/bus/dpaa/dpaa_bus.c                       |  465 ++++
 drivers/bus/dpaa/include/compat.h                 |  385 ++++
 drivers/bus/dpaa/include/dpaa_bits.h              |   65 +
 drivers/bus/dpaa/include/dpaa_list.h              |  101 +
 drivers/bus/dpaa/include/dpaa_rbtree.h            |  143 ++
 drivers/bus/dpaa/include/fman.h                   |  458 ++++
 drivers/bus/dpaa/include/fsl_bman.h               |  375 ++++
 drivers/bus/dpaa/include/fsl_fman.h               |  181 ++
 drivers/bus/dpaa/include/fsl_fman_crc64.h         |  263 +++
 drivers/bus/dpaa/include/fsl_qman.h               | 2021 +++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h                |  107 +
 drivers/bus/dpaa/include/netcfg.h                 |   96 +
 drivers/bus/dpaa/include/of.h                     |  190 ++
 drivers/bus/dpaa/include/process.h                |  107 +
 drivers/bus/dpaa/rte_bus_dpaa_version.map         |   57 +
 drivers/bus/dpaa/rte_dpaa_bus.h                   |  173 ++
 drivers/bus/dpaa/rte_dpaa_logs.h                  |  107 +
 drivers/mempool/Makefile                          |    2 +
 drivers/mempool/dpaa/Makefile                     |   58 +
 drivers/mempool/dpaa/dpaa_mempool.c               |  286 +++
 drivers/mempool/dpaa/dpaa_mempool.h               |   77 +
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |    8 +
 drivers/net/Makefile                              |    2 +
 drivers/net/dpaa/Makefile                         |   61 +
 drivers/net/dpaa/dpaa_ethdev.c                    | 1110 +++++++++
 drivers/net/dpaa/dpaa_ethdev.h                    |  182 ++
 drivers/net/dpaa/dpaa_rxtx.c                      |  760 +++++++
 drivers/net/dpaa/dpaa_rxtx.h                      |  297 +++
 drivers/net/dpaa/rte_pmd_dpaa_version.map         |    4 +
 mk/machine/dpaa/rte.vars.mk                       |   61 +
 mk/rte.app.mk                                     |    4 +
 55 files changed, 16758 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h
 create mode 100644 drivers/bus/dpaa/include/of.h
 create mode 100644 drivers/bus/dpaa/include/process.h
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map
 create mode 100644 mk/machine/dpaa/rte.vars.mk

-- 
2.9.3

^ permalink raw reply	[flat|nested] 367+ messages in thread

* [PATCH v6 01/40] config: add NXP DPAA SoC build configuration
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
                             ` (40 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch adds skeleton build configuration for DPAA platform.

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc | 47 ++++++++++++++++++++++++
 mk/machine/dpaa/rte.vars.mk              | 61 ++++++++++++++++++++++++++++++++
 2 files changed, 108 insertions(+)
 create mode 100644 config/defconfig_arm64-dpaa-linuxapp-gcc
 create mode 100644 mk/machine/dpaa/rte.vars.mk

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
new file mode 100644
index 0000000..5bca012
--- /dev/null
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -0,0 +1,47 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 Freescale Semiconductor, Inc.
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+#include "defconfig_arm64-armv8a-linuxapp-gcc"
+
+# NXP (Freescale) - SoC Architecture with FMAN, QMAN & BMAN support
+CONFIG_RTE_MACHINE="dpaa"
+CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
+CONFIG_RTE_LIBRTE_VHOST_NUMA=n
+CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
+
+#
+# Compile Environment Abstraction Layer
+#
+CONFIG_RTE_MAX_LCORE=4
+CONFIG_RTE_MAX_NUMA_NODES=1
+CONFIG_RTE_CACHE_LINE_SIZE=64
+CONFIG_RTE_PKTMBUF_HEADROOM=128
diff --git a/mk/machine/dpaa/rte.vars.mk b/mk/machine/dpaa/rte.vars.mk
new file mode 100644
index 0000000..356a6af
--- /dev/null
+++ b/mk/machine/dpaa/rte.vars.mk
@@ -0,0 +1,61 @@
+#   BSD LICENSE
+#
+#   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+#   - can define ARCH variable (overridden by cmdline value)
+#   - can define CROSS variable (overridden by cmdline value)
+#   - define MACHINE_CFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+#   - can define CPU_CFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+MACHINE_CFLAGS += -march=armv8-a+crc
+
+ifdef CONFIG_RTE_ARCH_ARM_TUNE
+MACHINE_CFLAGS += -mtune=$(CONFIG_RTE_ARCH_ARM_TUNE:"%"=%)
+endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
                             ` (39 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 MAINTAINERS                               |   5 +
 config/common_base                        |   3 +
 config/defconfig_arm64-dpaa-linuxapp-gcc  |   4 +
 drivers/bus/Makefile                      |   3 +
 drivers/bus/dpaa/Makefile                 |  58 +++++++++
 drivers/bus/dpaa/dpaa_bus.c               | 207 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |   8 ++
 drivers/bus/dpaa/rte_dpaa_bus.h           | 148 +++++++++++++++++++++
 drivers/bus/dpaa/rte_dpaa_logs.h          |  65 ++++++++++
 9 files changed, 501 insertions(+)
 create mode 100644 drivers/bus/dpaa/Makefile
 create mode 100644 drivers/bus/dpaa/dpaa_bus.c
 create mode 100644 drivers/bus/dpaa/rte_bus_dpaa_version.map
 create mode 100644 drivers/bus/dpaa/rte_dpaa_bus.h
 create mode 100644 drivers/bus/dpaa/rte_dpaa_logs.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 8df2a7f..c566962 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -408,6 +408,11 @@ F: drivers/net/nfp/
 F: doc/guides/nics/nfp.rst
 F: doc/guides/nics/features/nfp.ini
 
+NXP dpaa
+M: Hemant Agrawal <hemant.agrawal@nxp.com>
+M: Shreyansh Jain <shreyansh.jain@nxp.com>
+F: drivers/bus/dpaa/
+
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/config/common_base b/config/common_base
index 439f3cc..fc1cdca 100644
--- a/config/common_base
+++ b/config/common_base
@@ -301,6 +301,9 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_MBOX=n
 CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=n
+
 #
 # Compile NXP DPAA2 FSL-MC Bus
 #
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 5bca012..8316fc9 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -45,3 +45,7 @@ CONFIG_RTE_MAX_LCORE=4
 CONFIG_RTE_MAX_NUMA_NODES=1
 CONFIG_RTE_CACHE_LINE_SIZE=64
 CONFIG_RTE_PKTMBUF_HEADROOM=128
+
+# NXP DPAA Bus
+CONFIG_RTE_LIBRTE_DPAA_BUS=y
+CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 0224214..6cb6466 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -32,6 +32,9 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mbuf librte_mempool librte_ring librte_ether
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += dpaa
+DEPDIRS-dpaa = $(core-libs)
+
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 DEPDIRS-fslmc = $(core-libs)
 
diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
new file mode 100644
index 0000000..28694c0
--- /dev/null
+++ b/drivers/bus/dpaa/Makefile
@@ -0,0 +1,58 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_BUS_DPAA=$(RTE_SDK)/drivers/bus/dpaa
+
+#
+# library name
+#
+LIB = librte_bus_dpaa.a
+
+CFLAGS := -I$(SRCDIR) $(CFLAGS)
+CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+
+# versioning export map
+EXPORT_MAP := rte_bus_dpaa_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	dpaa_bus.c
+
+# Link Pthread
+LDLIBS += -lpthread
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
new file mode 100644
index 0000000..cc343b3
--- /dev/null
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -0,0 +1,207 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_bus.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+int dpaa_logtype_bus;
+
+struct rte_dpaa_bus rte_dpaa_bus;
+
+static inline void
+dpaa_add_to_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static inline void
+dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
+{
+	TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+}
+
+static int
+rte_dpaa_bus_scan(void)
+{
+	BUS_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+/* register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
+{
+	RTE_VERIFY(driver);
+
+	BUS_INIT_FUNC_TRACE();
+
+	TAILQ_INSERT_TAIL(&rte_dpaa_bus.driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = &rte_dpaa_bus;
+}
+
+/* un-register a dpaa bus based dpaa driver */
+void
+rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
+{
+	struct rte_dpaa_bus *dpaa_bus;
+
+	BUS_INIT_FUNC_TRACE();
+
+	dpaa_bus = driver->dpaa_bus;
+
+	TAILQ_REMOVE(&dpaa_bus->driver_list, driver, next);
+	/* Update Bus references */
+	driver->dpaa_bus = NULL;
+}
+
+static int
+rte_dpaa_device_match(struct rte_dpaa_driver *drv,
+		      struct rte_dpaa_device *dev)
+{
+	int ret = -1;
+
+	BUS_INIT_FUNC_TRACE();
+
+	if (!drv || !dev) {
+		DPAA_BUS_DEBUG("Invalid drv or dev received.");
+		return ret;
+	}
+
+	if (drv->drv_type == dev->device_type) {
+		DPAA_BUS_INFO("Device: %s matches for driver: %s",
+			      dev->name, drv->driver.name);
+		ret = 0; /* Found a match */
+	}
+
+	return ret;
+}
+
+static int
+rte_dpaa_bus_probe(void)
+{
+	int ret = -1;
+	struct rte_dpaa_device *dev;
+	struct rte_dpaa_driver *drv;
+
+	BUS_INIT_FUNC_TRACE();
+
+	/* For each registered driver, and device, call the driver->probe */
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		TAILQ_FOREACH(drv, &rte_dpaa_bus.driver_list, next) {
+			ret = rte_dpaa_device_match(drv, dev);
+			if (ret)
+				continue;
+
+			if (!drv->probe)
+				continue;
+
+			ret = drv->probe(drv, dev);
+			if (ret)
+				DPAA_BUS_ERR("Unable to probe.\n");
+			break;
+		}
+	}
+	return 0;
+}
+
+static struct rte_device *
+rte_dpaa_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
+		     const void *data)
+{
+	struct rte_dpaa_device *dev;
+
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		if (start && &dev->device == start) {
+			start = NULL;  /* starting point found */
+			continue;
+		}
+
+		if (cmp(&dev->device, data) == 0)
+			return &dev->device;
+	}
+
+	return NULL;
+}
+
+struct rte_dpaa_bus rte_dpaa_bus = {
+	.bus = {
+		.scan = rte_dpaa_bus_scan,
+		.probe = rte_dpaa_bus_probe,
+		.find_device = rte_dpaa_find_device,
+	},
+	.device_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.device_list),
+	.driver_list = TAILQ_HEAD_INITIALIZER(rte_dpaa_bus.driver_list),
+	.device_count = 0,
+};
+
+RTE_REGISTER_BUS(FSL_DPAA_BUS_NAME, rte_dpaa_bus.bus);
+
+RTE_INIT(dpaa_init_log);
+static void
+dpaa_init_log(void)
+{
+	dpaa_logtype_bus = rte_log_register("bus.dpaa");
+	if (dpaa_logtype_bus >= 0)
+		rte_log_set_level(dpaa_logtype_bus, RTE_LOG_NOTICE);
+}
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
new file mode 100644
index 0000000..9f41c77
--- /dev/null
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -0,0 +1,8 @@
+DPDK_17.11 {
+	global:
+
+	rte_dpaa_driver_register;
+	rte_dpaa_driver_unregister;
+
+	local: *;
+};
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
new file mode 100644
index 0000000..789882e
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -0,0 +1,148 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __RTE_DPAA_BUS_H__
+#define __RTE_DPAA_BUS_H__
+
+#include <rte_bus.h>
+#include <rte_mempool.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
+
+#define DEV_TO_DPAA_DEVICE(ptr)	\
+		container_of(ptr, struct rte_dpaa_device, device)
+
+struct rte_dpaa_device;
+struct rte_dpaa_driver;
+
+/* DPAA Device and Driver lists for DPAA bus */
+TAILQ_HEAD(rte_dpaa_device_list, rte_dpaa_device);
+TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
+
+enum rte_dpaa_type {
+	FSL_DPAA_ETH = 1,
+	FSL_DPAA_CRYPTO,
+};
+
+struct rte_dpaa_bus {
+	struct rte_bus bus;
+	struct rte_dpaa_device_list device_list;
+	struct rte_dpaa_driver_list driver_list;
+	int device_count;
+};
+
+struct dpaa_device_id {
+	uint8_t fman_id; /**< Fman interface ID, for ETH type device */
+	uint8_t mac_id; /**< Fman MAC interface ID, for ETH type device */
+	uint16_t dev_id; /**< Device Identifier from DPDK */
+};
+
+struct rte_dpaa_device {
+	TAILQ_ENTRY(rte_dpaa_device) next;
+	struct rte_device device;
+	union {
+		struct rte_eth_dev *eth_dev;
+		struct rte_cryptodev *crypto_dev;
+	};
+	struct rte_dpaa_driver *driver;
+	struct dpaa_device_id id;
+	enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
+	char name[RTE_ETH_NAME_MAX_LEN];
+};
+
+typedef int (*rte_dpaa_probe_t)(struct rte_dpaa_driver *dpaa_drv,
+				struct rte_dpaa_device *dpaa_dev);
+typedef int (*rte_dpaa_remove_t)(struct rte_dpaa_device *dpaa_dev);
+
+struct rte_dpaa_driver {
+	TAILQ_ENTRY(rte_dpaa_driver) next;
+	struct rte_driver driver;
+	struct rte_dpaa_bus *dpaa_bus;
+	enum rte_dpaa_type drv_type;
+	rte_dpaa_probe_t probe;
+	rte_dpaa_remove_t remove;
+};
+
+struct dpaa_portal {
+	uint32_t bman_idx; /**< BMAN Portal ID*/
+	uint32_t qman_idx; /**< QMAN Portal ID*/
+	uint64_t tid;/**< Parent Thread id for this portal */
+};
+
+/* TODO - this is costly; need to write a fast conversion routine */
+static inline void *rte_dpaa_mem_ptov(phys_addr_t paddr)
+{
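+	/* Linear scan of the EAL memseg table - O(n) per lookup, hence the
+	 * TODO above.
+	 */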
+	const struct rte_memseg *memseg = rte_eal_get_physmem_layout();
+	int i;
+
+	for (i = 0; i < RTE_MAX_MEMSEG && memseg[i].addr != NULL; i++) {
+		if (paddr >= memseg[i].phys_addr && paddr <
+			memseg[i].phys_addr + memseg[i].len)
+			return (uint8_t *)(memseg[i].addr) +
+			       (paddr - memseg[i].phys_addr);
+	}
+
+	return NULL;
+}
+
+/**
+ * Register a DPAA driver.
+ *
+ * @param driver
+ *   A pointer to a rte_dpaa_driver structure describing the driver
+ *   to be registered.
+ */
+void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
+
+/**
+ * Unregister a DPAA driver.
+ *
+ * @param driver
+ *	A pointer to a rte_dpaa_driver structure describing the driver
+ *	to be unregistered.
+ */
+void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
+
+/** Helper for DPAA device registration from driver (eth, crypto) instance */
+#define RTE_PMD_REGISTER_DPAA(nm, dpaa_drv) \
+RTE_INIT(dpaainitfn_ ##nm); \
+static void dpaainitfn_ ##nm(void) \
+{\
+	(dpaa_drv).driver.name = RTE_STR(nm);\
+	rte_dpaa_driver_register(&dpaa_drv); \
+} \
+RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
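+
+/*
+ * Usage sketch (the driver name and callbacks here are hypothetical):
+ *
+ *   static struct rte_dpaa_driver rte_dpaa_pmd = {
+ *           .drv_type = FSL_DPAA_ETH,
+ *           .probe    = dpaa_dev_probe,
+ *           .remove   = dpaa_dev_remove,
+ *   };
+ *   RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
+ */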
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_DPAA_BUS_H__ */
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
new file mode 100644
index 0000000..cc10937
--- /dev/null
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -0,0 +1,65 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA_LOGS_H_
+#define _DPAA_LOGS_H_
+
+#include <rte_log.h>
+
+extern int dpaa_logtype_bus;
+
+#define DPAA_BUS_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define BUS_INIT_FUNC_TRACE() DPAA_BUS_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_BUS
+#define DPAA_BUS_HWWARN(cond, fmt, args...) \
+	do {\
+		if (cond) \
+			DPAA_BUS_LOG(DEBUG, "WARN: " fmt, ##args); \
+	} while (0)
+#else
+#define DPAA_BUS_HWWARN(cond, fmt, args...) do { } while (0)
+#endif
+
+#define DPAA_BUS_DEBUG(fmt, args...) \
+	DPAA_BUS_LOG(DEBUG, fmt, ## args)
+#define DPAA_BUS_INFO(fmt, args...) \
+	DPAA_BUS_LOG(INFO, fmt, ## args)
+#define DPAA_BUS_ERR(fmt, args...) \
+	DPAA_BUS_LOG(ERR, fmt, ## args)
+#define DPAA_BUS_WARN(fmt, args...) \
+	DPAA_BUS_LOG(WARNING, fmt, ## args)
+
+#endif /* _DPAA_LOGS_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 03/40] bus/dpaa: add compatibility and helper macros
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
                             ` (38 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Linked list, bit operations and compatibility macros.
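
For illustration, a minimal sketch of the odd-width byte-swap helpers added
here (little-endian host assumed; values are arbitrary):

	uint64_t be48 = cpu_to_be48(0x112233445566ULL); /* 0x665544332211 */
	uint32_t be24 = cpu_to_be24(0x123456);          /* 0x563412 */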

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/dpaa/include/compat.h    | 385 +++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/dpaa_bits.h |  65 ++++++
 drivers/bus/dpaa/include/dpaa_list.h | 101 +++++++++
 3 files changed, 551 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/compat.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_bits.h
 create mode 100644 drivers/bus/dpaa/include/dpaa_list.h

diff --git a/drivers/bus/dpaa/include/compat.h b/drivers/bus/dpaa/include/compat.h
new file mode 100644
index 0000000..42733ae
--- /dev/null
+++ b/drivers/bus/dpaa/include/compat.h
@@ -0,0 +1,385 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __COMPAT_H
+#define __COMPAT_H
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+#include <sched.h>
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <pthread.h>
+#include <linux/types.h>
+#include <stdbool.h>
+#include <ctype.h>
+#include <malloc.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <limits.h>
+#include <assert.h>
+#include <dirent.h>
+#include <inttypes.h>
+#include <error.h>
+#include <rte_byteorder.h>
+#include <rte_atomic.h>
+#include <rte_spinlock.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+
+/* The following definitions are primarily to allow the single-source driver
+ * interfaces to be included by arbitrary program code. Ie. for interfaces that
+ * are also available in kernel-space, these definitions provide compatibility
+ * with certain attributes and types used in those interfaces.
+ */
+
+/* Required compiler attributes */
+#define __maybe_unused	__rte_unused
+#define __always_unused	__rte_unused
+#define __packed	__rte_packed
+#define noinline	__attribute__((noinline))
+
+#define L1_CACHE_BYTES 64
+#define ____cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
+#define __stringify_1(x) #x
+#define __stringify(x)	__stringify_1(x)
+
+#ifdef ARRAY_SIZE
+#undef ARRAY_SIZE
+#endif
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
+
+/* Debugging */
+#define prflush(fmt, args...) \
+	do { \
+		printf(fmt, ##args); \
+		fflush(stdout); \
+	} while (0)
+
+#define pr_crit(fmt, args...)	 prflush("CRIT:" fmt, ##args)
+#define pr_err(fmt, args...)	 prflush("ERR:" fmt, ##args)
+#define pr_warn(fmt, args...)	 prflush("WARN:" fmt, ##args)
+#define pr_info(fmt, args...)	 prflush(fmt, ##args)
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_BUS
+#ifdef pr_debug
+#undef pr_debug
+#endif
+#define pr_debug(fmt, args...)	printf(fmt, ##args)
+#else
+#define pr_debug(fmt, args...) do { } while (0)
+#endif
+
+#define DPAA_BUG_ON(x) RTE_ASSERT(x)
+
+/* Required types */
+typedef uint8_t		u8;
+typedef uint16_t	u16;
+typedef uint32_t	u32;
+typedef uint64_t	u64;
+typedef uint64_t	dma_addr_t;
+typedef cpu_set_t	cpumask_t;
+typedef uint32_t	phandle;
+typedef uint32_t	gfp_t;
+typedef uint32_t	irqreturn_t;
+
+#define IRQ_HANDLED	0
+#define request_irq	qbman_request_irq
+#define free_irq	qbman_free_irq
+
+#define __iomem
+#define GFP_KERNEL	0
+#define __raw_readb(p)	(*(const volatile unsigned char *)(p))
+#define __raw_readl(p)	(*(const volatile unsigned int *)(p))
+#define __raw_writel(v, p) {*(volatile unsigned int *)(p) = (v); }
+
+/* to be used as an upper-limit only */
+#define NR_CPUS			64
+
+/* Waitqueue stuff */
+typedef struct { }		wait_queue_head_t;
+#define DECLARE_WAIT_QUEUE_HEAD(x) int dummy_##x __always_unused
+#define wake_up(x)		do { } while (0)
+
+/* I/O operations */
+static inline u32 in_be32(volatile void *__p)
+{
+	volatile u32 *p = __p;
+	return rte_be_to_cpu_32(*p);
+}
+
+static inline void out_be32(volatile void *__p, u32 val)
+{
+	volatile u32 *p = __p;
+	*p = rte_cpu_to_be_32(val);
+}
+
+#define dcbt_ro(p) __builtin_prefetch(p, 0)
+#define dcbt_rw(p) __builtin_prefetch(p, 1)
+
+#define dcbz(p) { asm volatile("dc zva, %0" : : "r" (p) : "memory"); }
+#define dcbz_64(p) dcbz(p)
+#define hwsync() rte_rmb()
+#define lwsync() rte_wmb()
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+#define dccivac(p) { asm volatile("dc civac, %0" : : "r"(p) : "memory"); }
+
+#define dcbit_ro(p) \
+	do { \
+		dccivac(p);						\
+		asm volatile("prfm pldl1keep, [%0, #64]" : : "r" (p));	\
+	} while (0)
+
+#define barrier() { asm volatile ("" : : : "memory"); }
+#define cpu_relax barrier
+
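+/* Read the ARMv8 generic timer counter (cntvct_el0), re-reading until two
+ * consecutive reads agree.
+ */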
+static inline uint64_t mfatb(void)
+{
+	uint64_t ret, ret_new, timeout = 200;
+
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret));
+	asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	while (ret != ret_new && timeout--) {
+		ret = ret_new;
+		asm volatile ("mrs %0, cntvct_el0" : "=r" (ret_new));
+	}
+	DPAA_BUG_ON(!timeout && (ret != ret_new));
+	return ret * 64;
+}
+
+/* Spin for a few cycles without bothering the bus */
+static inline void cpu_spin(int cycles)
+{
+	uint64_t now = mfatb();
+
+	while (mfatb() < (now + cycles))
+		;
+}
+
+/* Qman/Bman API inlines and macros; */
+#ifdef lower_32_bits
+#undef lower_32_bits
+#endif
+#define lower_32_bits(x) ((u32)(x))
+
+#ifdef upper_32_bits
+#undef upper_32_bits
+#endif
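+/* The double 16-bit shift keeps the expression defined even for 32-bit x */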
+#define upper_32_bits(x) ((u32)(((x) >> 16) >> 16))
+
+/*
+ * Swap bytes of a 48-bit value.
+ */
+static inline uint64_t
+__bswap_48(uint64_t x)
+{
+	return  ((x & 0x0000000000ffULL) << 40) |
+		((x & 0x00000000ff00ULL) << 24) |
+		((x & 0x000000ff0000ULL) <<  8) |
+		((x & 0x0000ff000000ULL) >>  8) |
+		((x & 0x00ff00000000ULL) >> 24) |
+		((x & 0xff0000000000ULL) >> 40);
+}
+
+/*
+ * Swap bytes of a 40-bit value.
+ */
+static inline uint64_t
+__bswap_40(uint64_t x)
+{
+	return  ((x & 0x00000000ffULL) << 32) |
+		((x & 0x000000ff00ULL) << 16) |
+		((x & 0x0000ff0000ULL)) |
+		((x & 0x00ff000000ULL) >> 16) |
+		((x & 0xff00000000ULL) >> 32);
+}
+
+/*
+ * Swap bytes of a 24-bit value.
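+ * e.g. __bswap_24(0x123456) == 0x563412.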
+ */
+static inline uint32_t
+__bswap_24(uint32_t x)
+{
+	return  ((x & 0x0000ffULL) << 16) |
+		((x & 0x00ff00ULL)) |
+		((x & 0xff0000ULL) >> 16);
+}
+
+#define be64_to_cpu(x) rte_be_to_cpu_64(x)
+#define be32_to_cpu(x) rte_be_to_cpu_32(x)
+#define be16_to_cpu(x) rte_be_to_cpu_16(x)
+
+#define cpu_to_be64(x) rte_cpu_to_be_64(x)
+#define cpu_to_be32(x) rte_cpu_to_be_32(x)
+#define cpu_to_be16(x) rte_cpu_to_be_16(x)
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+
+#define cpu_to_be48(x) __bswap_48(x)
+#define be48_to_cpu(x) __bswap_48(x)
+
+#define cpu_to_be40(x) __bswap_40(x)
+#define be40_to_cpu(x) __bswap_40(x)
+
+#define cpu_to_be24(x) __bswap_24(x)
+#define be24_to_cpu(x) __bswap_24(x)
+
+#else /* RTE_BIG_ENDIAN */
+
+#define cpu_to_be48(x) (x)
+#define be48_to_cpu(x) (x)
+
+#define cpu_to_be40(x) (x)
+#define be40_to_cpu(x) (x)
+
+#define cpu_to_be24(x) (x)
+#define be24_to_cpu(x) (x)
+
+#endif /* RTE_BIG_ENDIAN */
+
+/* When copying aligned words or shorts, try to avoid memcpy() */
+/* memcpy() stuff - when you know alignments in advance */
+#define CONFIG_TRY_BETTER_MEMCPY
+
+#ifdef CONFIG_TRY_BETTER_MEMCPY
+static inline void copy_words(void *dest, const void *src, size_t sz)
+{
+	u32 *__dest = dest;
+	const u32 *__src = src;
+	size_t __sz = sz >> 2;
+
+	DPAA_BUG_ON((unsigned long)dest & 0x3);
+	DPAA_BUG_ON((unsigned long)src & 0x3);
+	DPAA_BUG_ON(sz & 0x3);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_shorts(void *dest, const void *src, size_t sz)
+{
+	u16 *__dest = dest;
+	const u16 *__src = src;
+	size_t __sz = sz >> 1;
+
+	DPAA_BUG_ON((unsigned long)dest & 0x1);
+	DPAA_BUG_ON((unsigned long)src & 0x1);
+	DPAA_BUG_ON(sz & 0x1);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_bytes(void *dest, const void *src, size_t sz)
+{
+	u8 *__dest = dest;
+	const u8 *__src = src;
+
+	while (sz--)
+		*(__dest++) = *(__src++);
+}
+#else
+#define copy_words memcpy
+#define copy_shorts memcpy
+#define copy_bytes memcpy
+#endif
+
+/* Allocator stuff */
+#define kmalloc(sz, t)	malloc(sz)
+#define vmalloc(sz)	malloc(sz)
+#define kfree(p)	{ if (p) free(p); }
+static inline void *kzalloc(size_t sz, gfp_t __foo __rte_unused)
+{
+	void *ptr = malloc(sz);
+
+	if (ptr)
+		memset(ptr, 0, sz);
+	return ptr;
+}
+
+static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
+{
+	void *p;
+
+	if (posix_memalign(&p, 4096, 4096))
+		return 0;
+	memset(p, 0, 4096);
+	return (unsigned long)p;
+}
+
+/* Spinlock stuff */
+#define spinlock_t		rte_spinlock_t
+#define __SPIN_LOCK_UNLOCKED(x)	RTE_SPINLOCK_INITIALIZER
+#define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+#define spin_lock_init(x)	rte_spinlock_init(x)
+#define spin_lock_destroy(x)
+#define spin_lock(x)		rte_spinlock_lock(x)
+#define spin_unlock(x)		rte_spinlock_unlock(x)
+#define spin_lock_irq(x)	spin_lock(x)
+#define spin_unlock_irq(x)	spin_unlock(x)
+#define spin_lock_irqsave(x, f) spin_lock_irq(x)
+#define spin_unlock_irqrestore(x, f) spin_unlock_irq(x)
+
+#define atomic_t                rte_atomic32_t
+#define atomic_read(v)          rte_atomic32_read(v)
+#define atomic_set(v, i)        rte_atomic32_set(v, i)
+
+#define atomic_inc(v)           rte_atomic32_add(v, 1)
+#define atomic_dec(v)           rte_atomic32_sub(v, 1)
+
+#define atomic_inc_and_test(v)  rte_atomic32_inc_and_test(v)
+#define atomic_dec_and_test(v)  rte_atomic32_dec_and_test(v)
+
+#define atomic_inc_return(v)    rte_atomic32_add_return(v, 1)
+#define atomic_dec_return(v)    rte_atomic32_sub_return(v, 1)
+#define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
+
+#include <dpaa_list.h>
+#include <dpaa_bits.h>
+
+#endif /* __COMPAT_H */
diff --git a/drivers/bus/dpaa/include/dpaa_bits.h b/drivers/bus/dpaa/include/dpaa_bits.h
new file mode 100644
index 0000000..71f2d80
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_bits.h
@@ -0,0 +1,65 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_BITS_H
+#define __DPAA_BITS_H
+
+/* Bitfield stuff. */
+#define BITS_PER_ULONG	(sizeof(unsigned long) << 3)
+#define SHIFT_PER_ULONG	(((1 << 5) == BITS_PER_ULONG) ? 5 : 6)
+#define BITS_MASK(idx)	(1UL << ((idx) & (BITS_PER_ULONG - 1)))
+#define BITS_IDX(idx)	((idx) >> SHIFT_PER_ULONG)
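+/* e.g. with 64-bit longs, dpaa_set_bit(65, bits) sets bit 1 of bits[1] */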
+
+static inline void dpaa_set_bits(unsigned long mask,
+				 volatile unsigned long *p)
+{
+	*p |= mask;
+}
+
+static inline void dpaa_set_bit(int idx, volatile unsigned long *bits)
+{
+	dpaa_set_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+static inline void dpaa_clear_bits(unsigned long mask,
+				   volatile unsigned long *p)
+{
+	*p &= ~mask;
+}
+
+static inline void dpaa_clear_bit(int idx,
+				  volatile unsigned long *bits)
+{
+	dpaa_clear_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+#endif /* __DPAA_BITS_H */
diff --git a/drivers/bus/dpaa/include/dpaa_list.h b/drivers/bus/dpaa/include/dpaa_list.h
new file mode 100644
index 0000000..871e612
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_list.h
@@ -0,0 +1,101 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_LIST_H
+#define __DPAA_LIST_H
+
+/****************/
+/* Linked-lists */
+/****************/
+
+struct list_head {
+	struct list_head *prev;
+	struct list_head *next;
+};
+
+#define COMPAT_LIST_HEAD(n) \
+struct list_head n = { \
+	.prev = &n, \
+	.next = &n \
+}
+
+#define INIT_LIST_HEAD(p) \
+do { \
+	struct list_head *__p298 = (p); \
+	__p298->next = __p298; \
+	__p298->prev = __p298->next; \
+} while (0)
+#define list_entry(node, type, member) \
+	(type *)((void *)node - offsetof(type, member))
+#define list_empty(p) \
+({ \
+	const struct list_head *__p298 = (p); \
+	((__p298->next == __p298) && (__p298->prev == __p298)); \
+})
+#define list_add(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->next = __l298->next; \
+	__p298->prev = __l298; \
+	__l298->next->prev = __p298; \
+	__l298->next = __p298; \
+} while (0)
+#define list_add_tail(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->prev = __l298->prev; \
+	__p298->next = __l298; \
+	__l298->prev->next = __p298; \
+	__l298->prev = __p298; \
+} while (0)
+#define list_for_each(i, l)				\
+	for (i = (l)->next; i != (l); i = i->next)
+#define list_for_each_safe(i, j, l)			\
+	for (i = (l)->next, j = i->next; i != (l);	\
+	     i = j, j = i->next)
+#define list_for_each_entry(i, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name); &i->name != (l); \
+		i = list_entry(i->name.next, typeof(*i), name))
+#define list_for_each_entry_safe(i, j, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name), \
+		j = list_entry(i->name.next, typeof(*j), name); \
+		&i->name != (l); \
+		i = j, j = list_entry(j->name.next, typeof(*j), name))
+#define list_del(i) \
+do { \
+	(i)->next->prev = (i)->prev; \
+	(i)->prev->next = (i)->next; \
+} while (0)
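+
+/*
+ * Usage sketch (the element type here is hypothetical):
+ *
+ *   struct foo { int x; struct list_head node; };
+ *   COMPAT_LIST_HEAD(foo_list);
+ *   struct foo *f, *tmp;
+ *
+ *   list_for_each_entry_safe(f, tmp, &foo_list, node) {
+ *           list_del(&f->node);
+ *           free(f);
+ *   }
+ */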
+
+#endif /* __DPAA_LIST_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 04/40] bus/dpaa: add OF parser for device scanning
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (2 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
                             ` (37 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This layer is used by the bus driver's scan function. Devices are parsed
using the OF parser and added to the DPAA device list.
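
A minimal sketch of how a scan routine can drive this parser (assuming the
default /proc/device-tree mount; "fsl,dpa-ethernet" is given as an example
compatible string):

	const struct device_node *dpa_node;

	if (of_init())
		return -ENODEV;
	for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet") {
		if (!of_device_is_available(dpa_node))
			continue;
		/* allocate an rte_dpaa_device and add it to the bus list */
	}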

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile       |   7 +
 drivers/bus/dpaa/base/fman/of.c | 576 ++++++++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/of.h   | 190 +++++++++++++
 3 files changed, 773 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/of.c
 create mode 100644 drivers/bus/dpaa/include/of.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 28694c0..30a3a5d 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -38,7 +38,11 @@ LIB = librte_bus_dpaa.a
 
 CFLAGS := -I$(SRCDIR) $(CFLAGS)
 CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -Wno-pointer-arith
+CFLAGS += -Wno-cast-qual
+CFLAGS += -D _GNU_SOURCE
 CFLAGS += -I$(RTE_BUS_DPAA)/
+CFLAGS += -I$(RTE_BUS_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -52,6 +56,9 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/of.c \
+
 # Link Pthread
 LDLIBS += -lpthread
 
diff --git a/drivers/bus/dpaa/base/fman/of.c b/drivers/bus/dpaa/base/fman/of.c
new file mode 100644
index 0000000..b2d7c02
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/of.c
@@ -0,0 +1,576 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <of.h>
+#include <rte_dpaa_logs.h>
+
+static int alive;
+static struct dt_dir root_dir;
+static const char *base_dir;
+static COMPAT_LIST_HEAD(linear);
+
+static int
+of_open_dir(const char *relative_path, struct dirent ***d)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = scandir(full_path, d, 0, versionsort);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open directory %s",
+			     full_path);
+	return ret;
+}
+
+static void
+of_close_dir(struct dirent **d, int num)
+{
+	while (num--)
+		free(d[num]);
+	free(d);
+}
+
+static int
+of_open_file(const char *relative_path)
+{
+	int ret;
+	char full_path[PATH_MAX];
+
+	snprintf(full_path, PATH_MAX, "%s/%s", base_dir, relative_path);
+	ret = open(full_path, O_RDONLY);
+	if (ret < 0)
+		DPAA_BUS_LOG(ERR, "Failed to open directory %s",
+			     full_path);
+	return ret;
+}
+
+static void
+process_file(struct dirent *dent, struct dt_dir *parent)
+{
+	int fd;
+	struct dt_file *f = malloc(sizeof(*f));
+
+	if (!f) {
+		DPAA_BUS_LOG(DEBUG, "Unable to allocate memory for file node");
+		return;
+	}
+	f->node.is_file = 1;
+	snprintf(f->node.node.name, NAME_MAX, "%s", dent->d_name);
+	snprintf(f->node.node.full_name, PATH_MAX, "%s/%s",
+		 parent->node.node.full_name, dent->d_name);
+	f->parent = parent;
+	fd = of_open_file(f->node.node.full_name);
+	if (fd < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to open file node");
+		free(f);
+		return;
+	}
+	f->len = read(fd, f->buf, OF_FILE_BUF_MAX);
+	close(fd);
+	if (f->len < 0) {
+		DPAA_BUS_LOG(DEBUG, "Unable to read file node");
+		free(f);
+		return;
+	}
+	list_add_tail(&f->node.list, &parent->files);
+}
+
+static const struct dt_dir *
+node2dir(const struct device_node *n)
+{
+	struct dt_node *dn = container_of((struct device_node *)n,
+					  struct dt_node, node);
+	const struct dt_dir *d = container_of(dn, struct dt_dir, node);
+
+	assert(!dn->is_file);
+	return d;
+}
+
+/* process_dir() calls iterate_dir(), but the latter will also call the former
+ * when recursing into sub-directories, so a predeclaration is needed.
+ */
+static int process_dir(const char *relative_path, struct dt_dir *dt);
+
+static int
+iterate_dir(struct dirent **d, int num, struct dt_dir *dt)
+{
+	int loop;
+	/* Iterate the directory contents */
+	for (loop = 0; loop < num; loop++) {
+		struct dt_dir *subdir;
+		int ret;
+		/* Ignore dot files of all types (especially "..") */
+		if (d[loop]->d_name[0] == '.')
+			continue;
+		switch (d[loop]->d_type) {
+		case DT_REG:
+			process_file(d[loop], dt);
+			break;
+		case DT_DIR:
+			subdir = malloc(sizeof(*subdir));
+			if (!subdir) {
+				perror("malloc");
+				return -ENOMEM;
+			}
+			snprintf(subdir->node.node.name, NAME_MAX, "%s",
+				 d[loop]->d_name);
+			snprintf(subdir->node.node.full_name, PATH_MAX,
+				 "%s/%s", dt->node.node.full_name,
+				 d[loop]->d_name);
+			subdir->parent = dt;
+			ret = process_dir(subdir->node.node.full_name, subdir);
+			if (ret)
+				return ret;
+			list_add_tail(&subdir->node.list, &dt->subdirs);
+			break;
+		default:
+			DPAA_BUS_LOG(DEBUG, "Ignoring invalid dt entry %s/%s",
+				     dt->node.node.full_name, d[loop]->d_name);
+		}
+	}
+	return 0;
+}
+
+static int
+process_dir(const char *relative_path, struct dt_dir *dt)
+{
+	struct dirent **d;
+	int ret, num;
+
+	dt->node.is_file = 0;
+	INIT_LIST_HEAD(&dt->subdirs);
+	INIT_LIST_HEAD(&dt->files);
+	ret = of_open_dir(relative_path, &d);
+	if (ret < 0)
+		return ret;
+	num = ret;
+	ret = iterate_dir(d, num, dt);
+	of_close_dir(d, num);
+	return (ret < 0) ? ret : 0;
+}
+
+static void
+linear_dir(struct dt_dir *d)
+{
+	struct dt_file *f;
+	struct dt_dir *dd;
+
+	d->compatible = NULL;
+	d->status = NULL;
+	d->lphandle = NULL;
+	d->a_cells = NULL;
+	d->s_cells = NULL;
+	d->reg = NULL;
+	list_for_each_entry(f, &d->files, node.list) {
+		if (!strcmp(f->node.node.name, "compatible")) {
+			if (d->compatible)
+				DPAA_BUS_LOG(DEBUG, "Duplicate compatible in"
+					     " %s", d->node.node.full_name);
+			d->compatible = f;
+		} else if (!strcmp(f->node.node.name, "status")) {
+			if (d->status)
+				DPAA_BUS_LOG(DEBUG, "Duplicate status in %s",
+					     d->node.node.full_name);
+			d->status = f;
+		} else if (!strcmp(f->node.node.name, "linux,phandle")) {
+			if (d->lphandle)
+				DPAA_BUS_LOG(DEBUG, "Duplicate lphandle in %s",
+					     d->node.node.full_name);
+			d->lphandle = f;
+		} else if (!strcmp(f->node.node.name, "#address-cells")) {
+			if (d->a_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate a_cells in %s",
+					     d->node.node.full_name);
+			d->a_cells = f;
+		} else if (!strcmp(f->node.node.name, "#size-cells")) {
+			if (d->s_cells)
+				DPAA_BUS_LOG(DEBUG, "Duplicate s_cells in %s",
+					     d->node.node.full_name);
+			d->s_cells = f;
+		} else if (!strcmp(f->node.node.name, "reg")) {
+			if (d->reg)
+				DPAA_BUS_LOG(DEBUG, "Duplicate reg in %s",
+					     d->node.node.full_name);
+			d->reg = f;
+		}
+	}
+
+	list_for_each_entry(dd, &d->subdirs, node.list) {
+		list_add_tail(&dd->linear, &linear);
+		linear_dir(dd);
+	}
+}
+
+int
+of_init_path(const char *dt_path)
+{
+	int ret;
+
+	base_dir = dt_path;
+
+	/* This needs to be singleton initialization */
+	DPAA_BUS_HWWARN(alive, "Double-init of device-tree driver!");
+
+	/* Prepare root node (the remaining fields are set in process_dir()) */
+	root_dir.node.node.name[0] = '\0';
+	root_dir.node.node.full_name[0] = '\0';
+	INIT_LIST_HEAD(&root_dir.node.list);
+	root_dir.parent = NULL;
+
+	/* Kick things off... */
+	ret = process_dir("", &root_dir);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to parse device tree");
+		return ret;
+	}
+
+	/* Now make a flat, linear list of directories */
+	linear_dir(&root_dir);
+	alive = 1;
+	return 0;
+}
+
+static void
+destroy_dir(struct dt_dir *d)
+{
+	struct dt_file *f, *tmpf;
+	struct dt_dir *dd, *tmpd;
+
+	list_for_each_entry_safe(f, tmpf, &d->files, node.list) {
+		list_del(&f->node.list);
+		free(f);
+	}
+	list_for_each_entry_safe(dd, tmpd, &d->subdirs, node.list) {
+		destroy_dir(dd);
+		list_del(&dd->node.list);
+		free(dd);
+	}
+}
+
+void
+of_finish(void)
+{
+	DPAA_BUS_HWWARN(!alive, "Double-finish of device-tree driver!");
+
+	destroy_dir(&root_dir);
+	INIT_LIST_HEAD(&linear);
+	alive = 0;
+}
+
+static const struct dt_dir *
+next_linear(const struct dt_dir *f)
+{
+	if (f->linear.next == &linear)
+		return NULL;
+	return list_entry(f->linear.next, struct dt_dir, linear);
+}
+
+static int
+check_compatible(const struct dt_file *f, const char *compatible)
+{
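+	/* a "compatible" property holds a sequence of NUL-terminated strings;
+	 * match against each in turn
+	 */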
+	const char *c = (char *)f->buf;
+	unsigned int len, remains = f->len;
+
+	while (remains) {
+		len = strlen(c);
+		if (!strcmp(c, compatible))
+			return 1;
+
+		if (remains < len + 1)
+			break;
+
+		c += (len + 1);
+		remains -= (len + 1);
+	}
+	return 0;
+}
+
+const struct device_node *
+of_find_compatible_node(const struct device_node *from,
+			const char *type __always_unused,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (list_empty(&linear))
+		return NULL;
+	if (!from)
+		d = list_entry(linear.next, struct dt_dir, linear);
+	else
+		d = node2dir(from);
+	for (d = next_linear(d); d && (!d->compatible ||
+				       !check_compatible(d->compatible,
+				       compatible));
+			d = next_linear(d))
+		;
+	if (d)
+		return &d->node.node;
+	return NULL;
+}
+
+const void *
+of_get_property(const struct device_node *from, const char *name,
+		size_t *lenp)
+{
+	const struct dt_dir *d;
+	const struct dt_file *f;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	d = node2dir(from);
+	list_for_each_entry(f, &d->files, node.list)
+		if (!strcmp(f->node.node.name, name)) {
+			if (lenp)
+				*lenp = f->len;
+			return f->buf;
+		}
+	return NULL;
+}
+
+bool
+of_device_is_available(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	d = node2dir(dev_node);
+	if (!d->status)
+		return true;
+	if (!strcmp((char *)d->status->buf, "okay"))
+		return true;
+	if (!strcmp((char *)d->status->buf, "ok"))
+		return true;
+	return false;
+}
+
+const struct device_node *
+of_find_node_by_phandle(phandle ph)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	list_for_each_entry(d, &linear, linear)
+		if (d->lphandle && (d->lphandle->len == 4) &&
+		    !memcmp(d->lphandle->buf, &ph, 4))
+			return &d->node.node;
+	return NULL;
+}
+
+const struct device_node *
+of_get_parent(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	d = node2dir(dev_node);
+	if (!d->parent)
+		return NULL;
+	return &d->parent->node.node;
+}
+
+const struct device_node *
+of_get_next_child(const struct device_node *dev_node,
+		  const struct device_node *prev)
+{
+	const struct dt_dir *p, *c;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+
+	if (!dev_node)
+		return NULL;
+	p = node2dir(dev_node);
+	if (prev) {
+		c = node2dir(prev);
+		DPAA_BUS_HWWARN((c->parent != p), "Parent/child mismatch");
+		if (c->parent != p)
+			return NULL;
+		if (c->node.list.next == &p->subdirs)
+			/* prev was the last child */
+			return NULL;
+		c = list_entry(c->node.list.next, struct dt_dir, node.list);
+		return &c->node.node;
+	}
+	/* Return first child */
+	if (list_empty(&p->subdirs))
+		return NULL;
+	c = list_entry(p->subdirs.next, struct dt_dir, node.list);
+	return &c->node.node;
+}
+
+uint32_t
+of_n_addr_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised");
+	if (!dev_node)
+		return OF_DEFAULT_NA;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->a_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->a_cells->buf[0];
+			assert(d->a_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NA;
+}
+
+uint32_t
+of_n_size_cells(const struct device_node *dev_node)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		return OF_DEFAULT_NS;
+	d = node2dir(dev_node);
+	while ((d = d->parent))
+		if (d->s_cells) {
+			unsigned char *buf =
+				(unsigned char *)&d->s_cells->buf[0];
+			assert(d->s_cells->len == 4);
+			return ((uint32_t)buf[0] << 24) |
+				((uint32_t)buf[1] << 16) |
+				((uint32_t)buf[2] << 8) |
+				(uint32_t)buf[3];
+		}
+	return OF_DEFAULT_NS;
+}
+
+const uint32_t *
+of_get_address(const struct device_node *dev_node, size_t idx,
+	       uint64_t *size, uint32_t *flags __rte_unused)
+{
+	const struct dt_dir *d;
+	const unsigned char *buf;
+	uint32_t na = of_n_addr_cells(dev_node);
+	uint32_t ns = of_n_size_cells(dev_node);
+
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (!d->reg)
+		return NULL;
+	assert(d->reg->len % ((na + ns) * 4) == 0);
+	assert(d->reg->len / ((na + ns) * 4) > (unsigned int) idx);
+	buf = (const unsigned char *)&d->reg->buf[0];
+	buf += (na + ns) * idx * 4;
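+	/* the big-endian size cells follow the address cells; decode them */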
+	if (size)
+		for (*size = 0; ns > 0; ns--, na++)
+			*size = (*size << 32) +
+				(((uint32_t)buf[4 * na] << 24) |
+				((uint32_t)buf[4 * na + 1] << 16) |
+				((uint32_t)buf[4 * na + 2] << 8) |
+				(uint32_t)buf[4 * na + 3]);
+	return (const uint32_t *)buf;
+}
+
+uint64_t
+of_translate_address(const struct device_node *dev_node,
+		     const uint32_t *addr)
+{
+	uint64_t phys_addr, tmp_addr;
+	const struct device_node *parent;
+	const uint32_t *ranges;
+	size_t rlen;
+	uint32_t na, pna;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	assert(dev_node != NULL);
+
+	na = of_n_addr_cells(dev_node);
+	phys_addr = of_read_number(addr, na);
+
+	dev_node = of_get_parent(dev_node);
+	if (!dev_node)
+		return 0;
+	else if (node2dir(dev_node) == &root_dir)
+		return phys_addr;
+
+	do {
+		pna = of_n_addr_cells(dev_node);
+		parent = of_get_parent(dev_node);
+		if (!parent)
+			return 0;
+
+		ranges = of_get_property(dev_node, "ranges", &rlen);
+		/* "ranges" property is missing. Translation breaks */
+		if (!ranges)
+			return 0;
+		/* "ranges" property is empty. Do 1:1 translation */
+		else if (rlen == 0)
+			continue;
+		else
+			tmp_addr = of_read_number(ranges + na, pna);
+
+		na = pna;
+		dev_node = parent;
+		phys_addr += tmp_addr;
+	} while (node2dir(parent) != &root_dir);
+
+	return phys_addr;
+}
+
+bool
+of_device_is_compatible(const struct device_node *dev_node,
+			const char *compatible)
+{
+	const struct dt_dir *d;
+
+	DPAA_BUS_HWWARN(!alive, "Device-tree driver not initialised!");
+	if (!dev_node)
+		d = &root_dir;
+	else
+		d = node2dir(dev_node);
+	if (d->compatible && check_compatible(d->compatible, compatible))
+		return true;
+	return false;
+}
diff --git a/drivers/bus/dpaa/include/of.h b/drivers/bus/dpaa/include/of.h
new file mode 100644
index 0000000..2984b1e
--- /dev/null
+++ b/drivers/bus/dpaa/include/of.h
@@ -0,0 +1,190 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor, Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __OF_H
+#define	__OF_H
+
+#include <compat.h>
+
+#ifndef OF_INIT_DEFAULT_PATH
+#define OF_INIT_DEFAULT_PATH "/proc/device-tree"
+#endif
+
+#define OF_DEFAULT_NA 1
+#define OF_DEFAULT_NS 1
+
+#define OF_FILE_BUF_MAX 256
+
+/**
+ * Layout of Device Tree:
+ * dt_dir
+ *  |- dt_dir
+ *  |   |- dt_dir
+ *  |   |  |- dt_dir
+ *  |   |  |  |- dt_file
+ *  |   |  |  ``- dt_file
+ *  |   |  ``- dt_file
+ *  |   ``- dt_file
+ *  ``- dt_file
+ *
+ *  +------------------+
+ *  |dt_dir            |
+ *  |+----------------+|
+ *  ||dt_node         ||
+ *  ||+--------------+||
+ *  |||device_node   |||
+ *  ||+--------------+||
+ *  || list_dt_nodes  ||
+ *  |+----------------+|
+ *  | list of subdir   |
+ *  | list of files    |
+ *  +------------------+
+ */
+
+/**
+ * Description of a device node in the device tree.
+ */
+struct device_node {
+	char name[NAME_MAX];
+	char full_name[PATH_MAX];
+};
+
+/**
+ * A node (file or directory) within the device-tree layout
+ */
+struct dt_node {
+	struct device_node node; /**< Property of node */
+	int is_file; /**< FALSE==dir, TRUE==file */
+	struct list_head list; /**< Nodes within a parent subdir */
+};
+
+/**
+ * Types we use to represent directories and files
+ */
+struct dt_file;
+struct dt_dir {
+	struct dt_node node;
+	struct list_head subdirs;
+	struct list_head files;
+	struct list_head linear;
+	struct dt_dir *parent;
+	struct dt_file *compatible;
+	struct dt_file *status;
+	struct dt_file *lphandle;
+	struct dt_file *a_cells;
+	struct dt_file *s_cells;
+	struct dt_file *reg;
+};
+
+struct dt_file {
+	struct dt_node node;
+	struct dt_dir *parent;
+	ssize_t len;
+	uint64_t buf[OF_FILE_BUF_MAX >> 3];
+};
+
+const struct device_node *of_find_compatible_node(
+					const struct device_node *from,
+					const char *type __always_unused,
+					const char *compatible)
+	__attribute__((nonnull(3)));
+
+#define for_each_compatible_node(dev_node, type, compatible) \
+	for (dev_node = of_find_compatible_node(NULL, type, compatible); \
+		dev_node != NULL; \
+		dev_node = of_find_compatible_node(dev_node, type, compatible))
+
+const void *of_get_property(const struct device_node *from, const char *name,
+			    size_t *lenp) __attribute__((nonnull(2)));
+bool of_device_is_available(const struct device_node *dev_node);
+
+const struct device_node *of_find_node_by_phandle(phandle ph);
+
+const struct device_node *of_get_parent(const struct device_node *dev_node);
+
+const struct device_node *of_get_next_child(const struct device_node *dev_node,
+					    const struct device_node *prev);
+
+#define for_each_child_node(parent, child) \
+	for (child = of_get_next_child(parent, NULL); child != NULL; \
+			child = of_get_next_child(parent, child))
+
+uint32_t of_n_addr_cells(const struct device_node *dev_node);
+uint32_t of_n_size_cells(const struct device_node *dev_node);
+
+const uint32_t *of_get_address(const struct device_node *dev_node, size_t idx,
+			       uint64_t *size, uint32_t *flags);
+
+uint64_t of_translate_address(const struct device_node *dev_node,
+			      const u32 *addr) __attribute__((nonnull));
+
+bool of_device_is_compatible(const struct device_node *dev_node,
+			     const char *compatible);
+
+/* of_init() must be called prior to initialisation or use of any driver
+ * subsystem that is device-tree-dependent. Eg. Qman/Bman, config layers, etc.
+ * The path should usually be "/proc/device-tree".
+ */
+int of_init_path(const char *dt_path);
+
+/* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
+ * full reload is desired without a process exit.
+ */
+void of_finish(void);
+
+/* Use of this wrapper is recommended. */
+static inline int of_init(void)
+{
+	return of_init_path(OF_INIT_DEFAULT_PATH);
+}
+
+/* Read a numeric property according to its size and return it as a 64-bit
+ * value.
+ */
+static inline uint64_t of_read_number(const __be32 *cell, int size)
+{
+	uint64_t r = 0;
+
+	while (size--)
+		r = (r << 32) | be32toh(*(cell++));
+	return r;
+}
+
+#endif	/*  __OF_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 05/40] bus/dpaa: introducing FMan configurations
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (3 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
                             ` (36 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

FMan, or Frame Manager, inspects traffic and splits it into queues on
ingress. It is also responsible for directing traffic to queues on egress.

This patch introduces the FMan configuration interfaces. This layer is
used by the bus driver for configuring the hardware block.
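
Once this layer is initialised, callers can walk the discovered interfaces
through the read-only fman_if_list (a sketch; it assumes struct fman_if in
fman.h exposes the fman_idx/mac_idx fields and the node list member):

	struct fman_if *fif;

	list_for_each_entry(fif, fman_if_list, node)
		printf("FMan %d MAC %d\n", fif->fman_idx, fif->mac_idx);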

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   2 +
 drivers/bus/dpaa/base/fman/fman.c         | 611 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/fman/netcfg_layer.c | 214 +++++++++++
 drivers/bus/dpaa/include/fman.h           | 458 ++++++++++++++++++++++
 drivers/bus/dpaa/include/netcfg.h         |  96 +++++
 5 files changed, 1381 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman.c
 create mode 100644 drivers/bus/dpaa/base/fman/netcfg_layer.c
 create mode 100644 drivers/bus/dpaa/include/fman.h
 create mode 100644 drivers/bus/dpaa/include/netcfg.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 30a3a5d..f6e504d 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -57,7 +57,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	dpaa_bus.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
+	base/fman/fman.c \
 	base/fman/of.c \
+	base/fman/netcfg_layer.c
 
 # Link Pthread
 LDLIBS += -lpthread
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
new file mode 100644
index 0000000..2c6029e
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -0,0 +1,611 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+
+#include <rte_malloc.h>
+
+/* This header declares the driver interface we implement */
+#include <fman.h>
+#include <of.h>
+#include <rte_dpaa_logs.h>
+
+#define QMI_PORT_REGS_OFFSET		0x400
+
+/* CCSR map address to access ccsr based register */
+void *fman_ccsr_map;
+/* fman version info */
+u16 fman_ip_rev;
+static int get_once;
+u32 fman_dealloc_bufs_mask_hi;
+u32 fman_dealloc_bufs_mask_lo;
+
+int fman_ccsr_map_fd = -1;
+static COMPAT_LIST_HEAD(__ifs);
+
+/* This is the (const) global variable that callers have read-only access to.
+ * Internally, we have read-write access directly to __ifs.
+ */
+const struct list_head *fman_if_list = &__ifs;
+
+static void
+if_destructor(struct __fman_if *__if)
+{
+	struct fman_if_bpool *bp, *tmpbp;
+
+	if (__if->__if.mac_type == fman_offline)
+		goto cleanup;
+
+	list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+cleanup:
+	rte_free(__if);
+}
+
+static int
+fman_get_ip_rev(const struct device_node *fman_node)
+{
+	const uint32_t *fman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	uint32_t ip_rev_1;
+	int _errno;
+
+	fman_addr = of_get_address(fman_node, 0, &regs_size, NULL);
+	if (!fman_addr) {
+		pr_err("of_get_address cannot return fman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(fman_node, fman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+	fman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, fman_ccsr_map_fd, phys_addr);
+	if (fman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map FMan ccsr base");
+		return -EINVAL;
+	}
+
+	ip_rev_1 = in_be32(fman_ccsr_map + FMAN_IP_REV_1);
+	fman_ip_rev = (ip_rev_1 & FMAN_IP_REV_1_MAJOR_MASK) >>
+			FMAN_IP_REV_1_MAJOR_SHIFT;
+
+	_errno = munmap(fman_ccsr_map, regs_size);
+	if (_errno)
+		pr_err("munmap() of FMan ccsr failed");
+
+	return 0;
+}
+
+static int
+fman_get_mac_index(uint64_t regs_addr_host, uint8_t *mac_idx)
+{
+	int ret = 0;
+
+	/*
+	 * MAC1 : E_0000h
+	 * MAC2 : E_2000h
+	 * MAC3 : E_4000h
+	 * MAC4 : E_6000h
+	 * MAC5 : E_8000h
+	 * MAC6 : E_A000h
+	 * MAC7 : E_C000h
+	 * MAC8 : E_E000h
+	 * MAC9 : F_0000h
+	 * MAC10: F_2000h
+	 */
+	switch (regs_addr_host) {
+	case 0xE0000:
+		*mac_idx = 1;
+		break;
+	case 0xE2000:
+		*mac_idx = 2;
+		break;
+	case 0xE4000:
+		*mac_idx = 3;
+		break;
+	case 0xE6000:
+		*mac_idx = 4;
+		break;
+	case 0xE8000:
+		*mac_idx = 5;
+		break;
+	case 0xEA000:
+		*mac_idx = 6;
+		break;
+	case 0xEC000:
+		*mac_idx = 7;
+		break;
+	case 0xEE000:
+		*mac_idx = 8;
+		break;
+	case 0xF0000:
+		*mac_idx = 9;
+		break;
+	case 0xF2000:
+		*mac_idx = 10;
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+fman_if_init(const struct device_node *dpa_node)
+{
+	const char *rprop, *mprop;
+	uint64_t phys_addr;
+	struct __fman_if *__if;
+	struct fman_if_bpool *bpool;
+
+	const phandle *mac_phandle, *ports_phandle, *pools_phandle;
+	const phandle *tx_channel_id = NULL, *mac_addr, *cell_idx;
+	const phandle *rx_phandle, *tx_phandle;
+	uint64_t tx_phandle_host[4] = {0};
+	uint64_t rx_phandle_host[4] = {0};
+	uint64_t regs_addr_host = 0;
+	uint64_t cell_idx_host = 0;
+
+	const struct device_node *mac_node = NULL, *tx_node;
+	const struct device_node *pool_node, *fman_node, *rx_node;
+	const uint32_t *regs_addr = NULL;
+	const char *mname, *fname;
+	const char *dname = dpa_node->full_name;
+	size_t lenp;
+	int _errno;
+	const char *char_prop;
+	uint32_t na;
+
+	if (of_device_is_available(dpa_node) == false)
+		return 0;
+
+	rprop = "fsl,qman-frame-queues-rx";
+	mprop = "fsl,fman-mac";
+
+	/* Allocate an object for this network interface */
+	__if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE);
+	if (!__if) {
+		FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*__if));
+		goto err;
+	}
+	memset(__if, 0, sizeof(*__if));
+	INIT_LIST_HEAD(&__if->__if.bpool_list);
+	strncpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
+	__if->node_path[PATH_MAX - 1] = '\0';
+
+	/* Obtain the MAC node used by this interface except macless */
+	mac_phandle = of_get_property(dpa_node, mprop, &lenp);
+	if (!mac_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no %s\n", dname, mprop);
+		goto err;
+	}
+	assert(lenp == sizeof(phandle));
+	mac_node = of_find_node_by_phandle(*mac_phandle);
+	if (!mac_node) {
+		FMAN_ERR(-ENXIO, "%s: bad 'fsl,fman-mac\n", dname);
+		goto err;
+	}
+	mname = mac_node->full_name;
+
+	/* Map the CCSR regs for the MAC node */
+	regs_addr = of_get_address(mac_node, 0, &__if->regs_size, NULL);
+	if (!regs_addr) {
+		FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+		goto err;
+	}
+	phys_addr = of_translate_address(mac_node, regs_addr);
+	if (!phys_addr) {
+		FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+			 mname, regs_addr);
+		goto err;
+	}
+	/* mmap() the CCSR regs only once address translation has succeeded */
+	__if->ccsr_map = mmap(NULL, __if->regs_size,
+			      PROT_READ | PROT_WRITE, MAP_SHARED,
+			      fman_ccsr_map_fd, phys_addr);
+	if (__if->ccsr_map == MAP_FAILED) {
+		FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+		goto err;
+	}
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	regs_addr_host = of_read_number(regs_addr, na);
+
+	/* Get the index of the Fman this i/f belongs to */
+	fman_node = of_get_parent(mac_node);
+	if (!fman_node) {
+		FMAN_ERR(-ENXIO, "of_get_parent(%s)\n", mname);
+		goto err;
+	}
+	fname = fman_node->full_name;
+	cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+	if (!cell_idx) {
+		FMAN_ERR(-ENXIO, "%s: no cell-index)\n", fname);
+		goto err;
+	}
+	assert(lenp == sizeof(*cell_idx));
+	cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+	__if->__if.fman_idx = cell_idx_host;
+	if (!get_once) {
+		_errno = fman_get_ip_rev(fman_node);
+		if (_errno) {
+			FMAN_ERR(-ENXIO, "%s: ip_rev is not available\n",
+				 fname);
+			goto err;
+		}
+	}
+
+	if (fman_ip_rev >= FMAN_V3) {
+		/*
+		 * Set A2V, OVOM, EBD bits in contextA to allow external
+		 * buffer deallocation by fman.
+		 */
+		fman_dealloc_bufs_mask_hi = FMAN_V3_CONTEXTA_EN_A2V |
+						FMAN_V3_CONTEXTA_EN_OVOM;
+		fman_dealloc_bufs_mask_lo = FMAN_V3_CONTEXTA_EN_EBD;
+	} else {
+		fman_dealloc_bufs_mask_hi = 0;
+		fman_dealloc_bufs_mask_lo = 0;
+	}
+	/* Is the MAC node 1G, 10G? */
+	__if->__if.is_memac = 0;
+
+	if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
+		__if->__if.mac_type = fman_mac_1g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
+		__if->__if.mac_type = fman_mac_10g;
+	else if (of_device_is_compatible(mac_node, "fsl,fman-memac")) {
+		__if->__if.is_memac = 1;
+		char_prop = of_get_property(mac_node, "phy-connection-type",
+					    NULL);
+		if (!char_prop) {
+			printf("memac: unknown MII type assuming 1G\n");
+			/* Right now forcing memac to 1g in case of error*/
+			__if->__if.mac_type = fman_mac_1g;
+		} else {
+			if (strstr(char_prop, "sgmii"))
+				__if->__if.mac_type = fman_mac_1g;
+			else if (strstr(char_prop, "rgmii")) {
+				__if->__if.mac_type = fman_mac_1g;
+				__if->__if.is_rgmii = 1;
+			} else if (strstr(char_prop, "xgmii"))
+				__if->__if.mac_type = fman_mac_10g;
+		}
+	} else {
+		FMAN_ERR(-EINVAL, "%s: unknown MAC type\n", mname);
+		goto err;
+	}
+
+	/*
+	 * For MAC ports, we cannot rely on cell-index. On T2080, for
+	 * example, two of the 10G ports on a single FMAN carry the same
+	 * cell-index values as the other two 10G ports on that FMAN.
+	 * Hence, we instead deduce the index from the port register
+	 * addresses in the device tree.
+	 */
+
+	_errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
+	if (_errno) {
+		FMAN_ERR(-EINVAL, "Invalid register address: %lu",
+			 regs_addr_host);
+		goto err;
+	}
+
+	/* Extract the MAC address for private and shared interfaces */
+	mac_addr = of_get_property(mac_node, "local-mac-address",
+				   &lenp);
+	if (!mac_addr) {
+		FMAN_ERR(-EINVAL, "%s: no local-mac-address\n",
+			 mname);
+		goto err;
+	}
+	memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
+
+	/* Extract the Tx port (it's the second of the two port handles)
+	 * and get its channel ID
+	 */
+	ports_phandle = of_get_property(mac_node, "fsl,port-handles",
+					&lenp);
+	if (!ports_phandle)
+		ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+						&lenp);
+	if (!ports_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,port-handles\n",
+			 mname);
+		goto err;
+	}
+	assert(lenp == (2 * sizeof(phandle)));
+	tx_node = of_find_node_by_phandle(ports_phandle[1]);
+	if (!tx_node) {
+		FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]\n", mname);
+		goto err;
+	}
+	/* Extract the channel ID (from tx-port-handle) */
+	tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
+					&lenp);
+	if (!tx_channel_id) {
+		FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id\n",
+			 tx_node->full_name);
+		goto err;
+	}
+
+	rx_node = of_find_node_by_phandle(ports_phandle[0]);
+	if (!rx_node) {
+		FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]\n", mname);
+		goto err;
+	}
+	regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+	if (!regs_addr) {
+		FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+		goto err;
+	}
+	phys_addr = of_translate_address(rx_node, regs_addr);
+	if (!phys_addr) {
+		FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+			 mname, regs_addr);
+		goto err;
+	}
+	__if->bmi_map = mmap(NULL, __if->regs_size,
+				 PROT_READ | PROT_WRITE, MAP_SHARED,
+				 fman_ccsr_map_fd, phys_addr);
+	if (__if->bmi_map == MAP_FAILED) {
+		FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+		goto err;
+	}
+
+	/* No channel ID for MAC-less */
+	assert(lenp == sizeof(*tx_channel_id));
+	na = of_n_addr_cells(mac_node);
+	__if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+	/* Extract the Rx FQIDs. (Note, the device representation is silly,
+	 * there are "counts" that must always be 1.)
+	 */
+	rx_phandle = of_get_property(dpa_node, rprop, &lenp);
+	if (!rx_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-rx\n", dname);
+		goto err;
+	}
+
+	assert(lenp == (4 * sizeof(phandle)));
+
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
+	rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
+	rx_phandle_host[2] = of_read_number(&rx_phandle[2], na);
+	rx_phandle_host[3] = of_read_number(&rx_phandle[3], na);
+
+	assert((rx_phandle_host[1] == 1) && (rx_phandle_host[3] == 1));
+	__if->__if.fqid_rx_err = rx_phandle_host[0];
+	__if->__if.fqid_rx_def = rx_phandle_host[2];
+
+	/* Extract the Tx FQIDs */
+	tx_phandle = of_get_property(dpa_node,
+				     "fsl,qman-frame-queues-tx", &lenp);
+	if (!tx_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-tx\n", dname);
+		goto err;
+	}
+
+	assert(lenp == (4 * sizeof(phandle)));
+	/* TODO: Fix for other cases also */
+	na = of_n_addr_cells(mac_node);
+	/* Get rid of endianness (issues). Convert to host byte order */
+	tx_phandle_host[0] = of_read_number(&tx_phandle[0], na);
+	tx_phandle_host[1] = of_read_number(&tx_phandle[1], na);
+	tx_phandle_host[2] = of_read_number(&tx_phandle[2], na);
+	tx_phandle_host[3] = of_read_number(&tx_phandle[3], na);
+	assert((tx_phandle_host[1] == 1) && (tx_phandle_host[3] == 1));
+	__if->__if.fqid_tx_err = tx_phandle_host[0];
+	__if->__if.fqid_tx_confirm = tx_phandle_host[2];
+
+	/* Obtain the buffer pool nodes used by this interface */
+	pools_phandle = of_get_property(dpa_node, "fsl,bman-buffer-pools",
+					&lenp);
+	if (!pools_phandle) {
+		FMAN_ERR(-EINVAL, "%s: no fsl,bman-buffer-pools\n", dname);
+		goto err;
+	}
+	/* For each pool, parse the corresponding node and add a pool object
+	 * to the interface's "bpool_list"
+	 */
+	assert(lenp && !(lenp % sizeof(phandle)));
+	while (lenp) {
+		size_t proplen;
+		const phandle *prop;
+		uint64_t bpid_host = 0;
+		uint64_t bpool_host[6] = {0};
+		const char *pname;
+		/* Allocate an object for the pool */
+		bpool = rte_malloc(NULL, sizeof(*bpool), RTE_CACHE_LINE_SIZE);
+		if (!bpool) {
+			FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*bpool));
+			goto err;
+		}
+		/* Find the pool node */
+		pool_node = of_find_node_by_phandle(*pools_phandle);
+		if (!pool_node) {
+			FMAN_ERR(-ENXIO, "%s: bad fsl,bman-buffer-pools\n",
+				 dname);
+			goto err;
+		}
+		pname = pool_node->full_name;
+		/* Extract the BPID property */
+		prop = of_get_property(pool_node, "fsl,bpid", &proplen);
+		if (!prop) {
+			FMAN_ERR(-EINVAL, "%s: no fsl,bpid\n", pname);
+			goto err;
+		}
+		assert(proplen == sizeof(*prop));
+		na = of_n_addr_cells(mac_node);
+		/* Get rid of endianness (issues).
+		 * Convert to host byte-order
+		 */
+		bpid_host = of_read_number(prop, na);
+		bpool->bpid = bpid_host;
+		/* Extract the cfg property (count/size/addr). "fsl,bpool-cfg"
+		 * indicates that the Bman driver should seed the pool.
+		 * "fsl,bpool-ethernet-cfg" is used by the network driver. The
+		 * two are mutually exclusive, so check for either of them.
+		 */
+		prop = of_get_property(pool_node, "fsl,bpool-cfg",
+				       &proplen);
+		if (!prop)
+			prop = of_get_property(pool_node,
+					       "fsl,bpool-ethernet-cfg",
+					       &proplen);
+		if (!prop) {
+			/* It's OK for there to be no bpool-cfg */
+			bpool->count = bpool->size = bpool->addr = 0;
+		} else {
+			assert(proplen == (6 * sizeof(*prop)));
+			na = of_n_addr_cells(mac_node);
+			/* Get rid of endianness (issues).
+			 * Convert to host byte order
+			 */
+			bpool_host[0] = of_read_number(&prop[0], na);
+			bpool_host[1] = of_read_number(&prop[1], na);
+			bpool_host[2] = of_read_number(&prop[2], na);
+			bpool_host[3] = of_read_number(&prop[3], na);
+			bpool_host[4] = of_read_number(&prop[4], na);
+			bpool_host[5] = of_read_number(&prop[5], na);
+
+			bpool->count = ((uint64_t)bpool_host[0] << 32) |
+					bpool_host[1];
+			bpool->size = ((uint64_t)bpool_host[2] << 32) |
+					bpool_host[3];
+			bpool->addr = ((uint64_t)bpool_host[4] << 32) |
+					bpool_host[5];
+		}
+		/* Parsing of the pool is complete, add it to the interface
+		 * list.
+		 */
+		list_add_tail(&bpool->node, &__if->__if.bpool_list);
+		lenp -= sizeof(phandle);
+		pools_phandle++;
+	}
+
+	/* Parsing of the network interface is complete, add it to the list */
+	DPAA_BUS_LOG(DEBUG, "Found %s, Tx Channel = %x, FMAN = %x,"
+		    "Port ID = %x\n",
+		    dname, __if->__if.tx_channel_id, __if->__if.fman_idx,
+		    __if->__if.mac_idx);
+
+	list_add_tail(&__if->__if.node, &__ifs);
+	return 0;
+err:
+	if_destructor(__if);
+	return _errno;
+}
+
+int
+fman_init(void)
+{
+	const struct device_node *dpa_node;
+	int _errno;
+
+	/* If multiple dependencies try to initialise the Fman driver, don't
+	 * panic.
+	 */
+	if (fman_ccsr_map_fd != -1)
+		return 0;
+
+	fman_ccsr_map_fd = open(FMAN_DEVICE_PATH, O_RDWR);
+	if (unlikely(fman_ccsr_map_fd < 0)) {
+		DPAA_BUS_LOG(ERR, "Unable to open (/dev/mem)");
+		return fman_ccsr_map_fd;
+	}
+
+	for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-init") {
+		_errno = fman_if_init(dpa_node);
+		if (_errno) {
+			FMAN_ERR(_errno, "if_init(%s)\n", dpa_node->full_name);
+			goto err;
+		}
+	}
+
+	return 0;
+err:
+	fman_finish();
+	return _errno;
+}
+
+void
+fman_finish(void)
+{
+	struct __fman_if *__if, *tmpif;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	list_for_each_entry_safe(__if, tmpif, &__ifs, __if.node) {
+		int _errno;
+
+		/* disable Rx and Tx */
+		if ((__if->__if.mac_type == fman_mac_1g) &&
+		    (!__if->__if.is_memac))
+			out_be32(__if->ccsr_map + 0x100,
+				 in_be32(__if->ccsr_map + 0x100) & ~(u32)0x5);
+		else
+			out_be32(__if->ccsr_map + 8,
+				 in_be32(__if->ccsr_map + 8) & ~(u32)3);
+		/* release the mapping */
+		_errno = munmap(__if->ccsr_map, __if->regs_size);
+		if (unlikely(_errno < 0))
+			fprintf(stderr, "%s:%hu:%s(): munmap() = %d (%s)\n",
+				__FILE__, __LINE__, __func__,
+				-errno, strerror(errno));
+		printf("Tearing down %s\n", __if->node_path);
+		list_del(&__if->__if.node);
+		rte_free(__if);
+	}
+
+	close(fman_ccsr_map_fd);
+	fman_ccsr_map_fd = -1;
+}
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
new file mode 100644
index 0000000..26cff84
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -0,0 +1,214 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <inttypes.h>
+#include <of.h>
+#include <net/if.h>
+#include <sys/ioctl.h>
+#include <error.h>
+#include <net/if_arp.h>
+#include <assert.h>
+#include <unistd.h>
+
+#include <rte_malloc.h>
+
+#include <rte_dpaa_logs.h>
+#include <netcfg.h>
+
+/* Structure containing information about all the interfaces given by the
+ * user on the command line.
+ */
+struct netcfg_interface *netcfg_interface;
+
+/* This data structure contains all the configuration information
+ * related to the usage of DPAA devices.
+ */
+struct netcfg_info *netcfg;
+/* fd of a socket used for making ioctl requests to disable/enable shared
+ * interfaces.
+ */
+static int skfd = -1;
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dump_netcfg(struct netcfg_info *cfg_ptr)
+{
+	int i;
+
+	printf("..........  DPAA Configuration  ..........\n\n");
+
+	/* Network interfaces */
+	printf("Network interfaces: %d\n", cfg_ptr->num_ethports);
+	for (i = 0; i < cfg_ptr->num_ethports; i++) {
+		struct fman_if_bpool *bpool;
+		struct fm_eth_port_cfg *p_cfg = &cfg_ptr->port_cfg[i];
+		struct fman_if *__if = p_cfg->fman_if;
+
+		printf("\n+ Fman %d, MAC %d (%s);\n",
+		       __if->fman_idx, __if->mac_idx,
+		       (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+
+		printf("\tmac_addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
+		       (&__if->mac_addr)->addr_bytes[0],
+		       (&__if->mac_addr)->addr_bytes[1],
+		       (&__if->mac_addr)->addr_bytes[2],
+		       (&__if->mac_addr)->addr_bytes[3],
+		       (&__if->mac_addr)->addr_bytes[4],
+		       (&__if->mac_addr)->addr_bytes[5]);
+
+		printf("\ttx_channel_id: 0x%02x\n",
+		       __if->tx_channel_id);
+
+		printf("\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
+		printf("\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
+
+		printf("\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
+		printf("\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
+		fman_if_for_each_bpool(bpool, __if)
+			printf("\tbuffer pool: (bpid=%d, count=%"PRId64
+			       " size=%"PRId64", addr=0x%"PRIx64")\n",
+			       bpool->bpid, bpool->count, bpool->size,
+			       bpool->addr);
+	}
+}
+#endif /* RTE_LIBRTE_DPAA_DEBUG_DRIVER */
+
+static inline int
+get_num_netcfg_interfaces(char *str)
+{
+	char *pch;
+	uint8_t count = 0;
+
+	if (str == NULL)
+		return -EINVAL;
+	pch = strtok(str, ",");
+	while (pch != NULL) {
+		count++;
+		pch = strtok(NULL, ",");
+	}
+	return count;
+}
+
+struct netcfg_info *
+netcfg_acquire(void)
+{
+	struct fman_if *__if;
+	int _errno, idx = 0;
+	uint8_t num_ports = 0;
+	uint8_t num_cfg_ports = 0;
+	size_t size;
+
+	/* Extract dpa configuration from fman driver and FMC configuration
+	 * for command-line interfaces.
+	 */
+
+	/* Open a basic socket to enable/disable shared
+	 * interfaces.
+	 */
+	skfd = socket(AF_PACKET, SOCK_RAW, 0);
+	if (unlikely(skfd < 0)) {
+		error(0, errno, "%s(): open(SOCK_RAW)", __func__);
+		return NULL;
+	}
+
+	/* Initialise the Fman driver */
+	_errno = fman_init();
+	if (_errno) {
+		DPAA_BUS_LOG(ERR, "FMAN driver init failed (%d)", errno);
+		close(skfd);
+		skfd = -1;
+		return NULL;
+	}
+
+	/* Number of MAC ports */
+	list_for_each_entry(__if, fman_if_list, node)
+		num_ports++;
+
+	if (!num_ports) {
+		DPAA_BUS_LOG(ERR, "FMAN ports not available");
+		close(skfd);
+		skfd = -1;
+		return NULL;
+	}
+	/* Allocate space for all enabled mac ports */
+	size = sizeof(*netcfg) +
+		(num_ports * sizeof(struct fm_eth_port_cfg));
+
+	netcfg = calloc(1, size);
+	if (unlikely(netcfg == NULL)) {
+		DPAA_BUS_LOG(ERR, "Unable to allocat mem for netcfg");
+		goto error;
+	}
+
+	netcfg->num_ethports = num_ports;
+
+	list_for_each_entry(__if, fman_if_list, node) {
+		struct fm_eth_port_cfg *cfg = &netcfg->port_cfg[idx];
+		/* Hook in the fman driver interface */
+		cfg->fman_if = __if;
+		cfg->rx_def = __if->fqid_rx_def;
+		num_cfg_ports++;
+		idx++;
+	}
+
+	if (!num_cfg_ports) {
+		DPAA_BUS_LOG(ERR, "No FMAN ports found");
+		goto error;
+	} else if (num_ports != num_cfg_ports)
+		netcfg->num_ethports = num_cfg_ports;
+
+	return netcfg;
+
+error:
+	if (netcfg) {
+		free(netcfg);
+		netcfg = NULL;
+	}
+	close(skfd);
+	skfd = -1;
+
+	return NULL;
+}
+
+void
+netcfg_release(struct netcfg_info *cfg_ptr)
+{
+	free(cfg_ptr);
+	/* Close socket for shared interfaces */
+	if (skfd >= 0) {
+		close(skfd);
+		skfd = -1;
+	}
+}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
new file mode 100644
index 0000000..9890e09
--- /dev/null
+++ b/drivers/bus/dpaa/include/fman.h
@@ -0,0 +1,458 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FMAN_H
+#define __FMAN_H
+
+#include <stdbool.h>
+#include <net/if.h>
+
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+
+#include <compat.h>
+
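+/* Device that is mmap()ed to access the FMan CCSR register space */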
+#ifndef FMAN_DEVICE_PATH
+#define FMAN_DEVICE_PATH "/dev/mem"
+#endif
+
+#define MEMAC_NUM_OF_PADDRS 7 /* Num of additional exact match MAC addr regs */
+
+/* Control and Configuration Register (COMMAND_CONFIG) for MEMAC */
+#define CMD_CFG_LOOPBACK_EN	0x00000400
+/**< 21 XGMII/GMII loopback enable */
+#define CMD_CFG_PROMIS_EN	0x00000010
+/**< 27 Promiscuous operation enable */
+#define CMD_CFG_PAUSE_IGNORE	0x00000100
+/**< 23 Ignore Pause frame quanta */
+
+/* Statistics Configuration Register (STATN_CONFIG) */
+#define STATS_CFG_CLR           0x00000004
+/**< 29 Reset all counters */
+#define STATS_CFG_CLR_ON_RD     0x00000002
+/**< 30 Clear on read */
+#define STATS_CFG_SATURATE      0x00000001
+/**< 31 Saturate at the maximum val */
+
+/**< Max receive frame length mask */
+#define MAXFRM_SIZE_MEMAC	0x00007fe0
+#define MAXFRM_RX_MASK		0x0000ffff
+
+/**< Interface Mode Register for MEMAC */
+#define IF_MODE_RLP 0x00000820
+
+/**< Pool Limits */
+#define FMAN_PORT_MAX_EXT_POOLS_NUM	8
+#define FMAN_PORT_OBS_EXT_POOLS_NUM	2
+
+#define FMAN_PORT_CG_MAP_NUM		8
+#define FMAN_PORT_PRS_RESULT_WORDS_NUM	8
+#define FMAN_PORT_BMI_FIFO_UNITS	0x100
+#define FMAN_PORT_IC_OFFSET_UNITS	0x10
+
+#define FMAN_ENABLE_BPOOL_DEPLETION	0xF00000F0
+
+#define HASH_CTRL_MCAST_EN	0x00000100
+#define GROUP_ADDRESS		0x0000010000000000LL
+#define HASH_CTRL_ADDR_MASK	0x0000003F
+
+/* Forward declarations of the FMAN interface and Bpool structures */
+struct __fman_if;
+struct fman_if_bpool;
+/* Lists of fman interfaces and bpools */
+TAILQ_HEAD(rte_fman_if_list, __fman_if);
+
+/* Represents the different flavour of network interface */
+enum fman_mac_type {
+	fman_offline = 0,
+	fman_mac_1g,
+	fman_mac_10g,
+};
+
+struct mac_addr {
+	uint32_t   mac_addr_l;	/**< Lower 32 bits of 48-bit MAC address */
+	uint32_t   mac_addr_u;	/**< Upper 16 bits of 48-bit MAC address */
+};
+
+struct memac_regs {
+	/* General Control and Status */
+	uint32_t res0000[2];
+	uint32_t command_config;	/**< 0x008 Ctrl and cfg */
+	struct mac_addr mac_addr0;	/**< 0x00C-0x010 MAC_ADDR_0...1 */
+	uint32_t maxfrm;		/**< 0x014 Max frame length */
+	uint32_t res0018[5];
+	uint32_t hashtable_ctrl;	/**< 0x02C Hash table control */
+	uint32_t res0030[4];
+	uint32_t ievent;		/**< 0x040 Interrupt event */
+	uint32_t tx_ipg_length;
+	/**< 0x044 Transmitter inter-packet-gap */
+	uint32_t res0048;
+	uint32_t imask;			/**< 0x04C Interrupt mask */
+	uint32_t res0050;
+	uint32_t pause_quanta[4];	/**< 0x054 Pause quanta */
+	uint32_t pause_thresh[4];	/**< 0x064 Pause quanta threshold */
+	uint32_t rx_pause_status;	/**< 0x074 Receive pause status */
+	uint32_t res0078[2];
+	struct mac_addr mac_addr[MEMAC_NUM_OF_PADDRS];
+	/**< 0x80-0x0B4 mac padr */
+	uint32_t lpwake_timer;
+	/**< 0x0B8 Low Power Wakeup Timer */
+	uint32_t sleep_timer;
+	/**< 0x0BC Transmit EEE Low Power Timer */
+	uint32_t res00c0[8];
+	uint32_t statn_config;
+	/**< 0x0E0 Statistics configuration */
+	uint32_t res00e4[7];
+	/* Rx Statistics Counter */
+	uint32_t reoct_l;		/**<Rx Eth Octets Counter */
+	uint32_t reoct_u;
+	uint32_t roct_l;		/**<Rx Octet Counters */
+	uint32_t roct_u;
+	uint32_t raln_l;		/**<Rx Alignment Error Counter */
+	uint32_t raln_u;
+	uint32_t rxpf_l;		/**<Rx valid Pause Frame */
+	uint32_t rxpf_u;
+	uint32_t rfrm_l;		/**<Rx Frame counter */
+	uint32_t rfrm_u;
+	uint32_t rfcs_l;		/**<Rx frame check seq error */
+	uint32_t rfcs_u;
+	uint32_t rvlan_l;		/**<Rx Vlan Frame Counter */
+	uint32_t rvlan_u;
+	uint32_t rerr_l;		/**<Rx Frame error */
+	uint32_t rerr_u;
+	uint32_t ruca_l;		/**<Rx Unicast */
+	uint32_t ruca_u;
+	uint32_t rmca_l;		/**<Rx Multicast */
+	uint32_t rmca_u;
+	uint32_t rbca_l;		/**<Rx Broadcast */
+	uint32_t rbca_u;
+	uint32_t rdrp_l;		/**<Rx Dropper Packet */
+	uint32_t rdrp_u;
+	uint32_t rpkt_l;		/**<Rx packet */
+	uint32_t rpkt_u;
+	uint32_t rund_l;		/**<Rx undersized packets */
+	uint32_t rund_u;
+	uint32_t r64_l;			/**<Rx 64 byte */
+	uint32_t r64_u;
+	uint32_t r127_l;
+	uint32_t r127_u;
+	uint32_t r255_l;
+	uint32_t r255_u;
+	uint32_t r511_l;
+	uint32_t r511_u;
+	uint32_t r1023_l;
+	uint32_t r1023_u;
+	uint32_t r1518_l;
+	uint32_t r1518_u;
+	uint32_t r1519x_l;
+	uint32_t r1519x_u;
+	uint32_t rovr_l;		/**<Rx oversized but good */
+	uint32_t rovr_u;
+	uint32_t rjbr_l;		/**<Rx oversized with bad csum */
+	uint32_t rjbr_u;
+	uint32_t rfrg_l;		/**<Rx fragment Packet */
+	uint32_t rfrg_u;
+	uint32_t rcnp_l;		/**<Rx control packets (0x8808) */
+	uint32_t rcnp_u;
+	uint32_t rdrntp_l;		/**<Rx dropped due to FIFO overflow */
+	uint32_t rdrntp_u;
+	uint32_t res01d0[12];
+	/* Tx Statistics Counter */
+	uint32_t teoct_l;		/**<Tx eth octets */
+	uint32_t teoct_u;
+	uint32_t toct_l;		/**<Tx Octets */
+	uint32_t toct_u;
+	uint32_t res0210[2];
+	uint32_t txpf_l;		/**<Tx valid pause frame */
+	uint32_t txpf_u;
+	uint32_t tfrm_l;		/**<Tx frame counter */
+	uint32_t tfrm_u;
+	uint32_t tfcs_l;		/**<Tx FCS error */
+	uint32_t tfcs_u;
+	uint32_t tvlan_l;		/**<Tx Vlan Frame */
+	uint32_t tvlan_u;
+	uint32_t terr_l;		/**<Tx frame error */
+	uint32_t terr_u;
+	uint32_t tuca_l;		/**<Tx Unicast */
+	uint32_t tuca_u;
+	uint32_t tmca_l;		/**<Tx Multicast */
+	uint32_t tmca_u;
+	uint32_t tbca_l;		/**<Tx Broadcast */
+	uint32_t tbca_u;
+	uint32_t res0258[2];
+	uint32_t tpkt_l;		/**<Tx Packet */
+	uint32_t tpkt_u;
+	uint32_t tund_l;		/**<Tx Undersized */
+	uint32_t tund_u;
+	uint32_t t64_l;
+	uint32_t t64_u;
+	uint32_t t127_l;
+	uint32_t t127_u;
+	uint32_t t255_l;
+	uint32_t t255_u;
+	uint32_t t511_l;
+	uint32_t t511_u;
+	uint32_t t1023_l;
+	uint32_t t1023_u;
+	uint32_t t1518_l;
+	uint32_t t1518_u;
+	uint32_t t1519x_l;
+	uint32_t t1519x_u;
+	uint32_t res02a8[6];
+	uint32_t tcnp_l;		/**<Tx Control Packet type - 0x8808 */
+	uint32_t tcnp_u;
+	uint32_t res02c8[14];
+	/* Line Interface Control */
+	uint32_t if_mode;		/**< 0x300 Interface Mode Control */
+	uint32_t if_status;		/**< 0x304 Interface Status */
+	uint32_t res0308[14];
+	/* HiGig/2 */
+	uint32_t hg_config;		/**< 0x340 Control and cfg */
+	uint32_t res0344[3];
+	uint32_t hg_pause_quanta;	/**< 0x350 Pause quanta */
+	uint32_t res0354[3];
+	uint32_t hg_pause_thresh;	/**< 0x360 Pause quanta threshold */
+	uint32_t res0364[3];
+	uint32_t hgrx_pause_status;	/**< 0x370 Receive pause status */
+	uint32_t hg_fifos_status;	/**< 0x374 fifos status */
+	uint32_t rhm;			/**< 0x378 rx messages counter */
+	uint32_t thm;			/**< 0x37C tx messages counter */
+};
+
+struct rx_bmi_regs {
+	uint32_t fmbm_rcfg;		/**< Rx Configuration */
+	uint32_t fmbm_rst;		/**< Rx Status */
+	uint32_t fmbm_rda;		/**< Rx DMA attributes*/
+	uint32_t fmbm_rfp;		/**< Rx FIFO Parameters*/
+	uint32_t fmbm_rfed;		/**< Rx Frame End Data*/
+	uint32_t fmbm_ricp;		/**< Rx Internal Context Parameters*/
+	uint32_t fmbm_rim;		/**< Rx Internal Buffer Margins*/
+	uint32_t fmbm_rebm;		/**< Rx External Buffer Margins*/
+	uint32_t fmbm_rfne;		/**< Rx Frame Next Engine*/
+	uint32_t fmbm_rfca;		/**< Rx Frame Command Attributes.*/
+	uint32_t fmbm_rfpne;		/**< Rx Frame Parser Next Engine*/
+	uint32_t fmbm_rpso;		/**< Rx Parse Start Offset*/
+	uint32_t fmbm_rpp;		/**< Rx Policer Profile  */
+	uint32_t fmbm_rccb;		/**< Rx Coarse Classification Base */
+	uint32_t fmbm_reth;		/**< Rx Excessive Threshold */
+	uint32_t reserved003c[1];	/**< (0x03C-0x03F) */
+	uint32_t fmbm_rprai[FMAN_PORT_PRS_RESULT_WORDS_NUM];
+					/**< Rx Parse Results Array Init*/
+	uint32_t fmbm_rfqid;		/**< Rx Frame Queue ID*/
+	uint32_t fmbm_refqid;		/**< Rx Error Frame Queue ID*/
+	uint32_t fmbm_rfsdm;		/**< Rx Frame Status Discard Mask*/
+	uint32_t fmbm_rfsem;		/**< Rx Frame Status Error Mask*/
+	uint32_t fmbm_rfene;		/**< Rx Frame Enqueue Next Engine */
+	uint32_t reserved0074[0x2];	/**< (0x074-0x07C)  */
+	uint32_t fmbm_rcmne;
+	/**< Rx Frame Continuous Mode Next Engine */
+	uint32_t reserved0080[0x20];/**< (0x080-0x0FF)  */
+	uint32_t fmbm_ebmpi[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Buffer Manager pool Information */
+	uint32_t fmbm_acnt[FMAN_PORT_MAX_EXT_POOLS_NUM];
+					/**< Allocate Counter */
+	uint32_t reserved0130[8];
+					/**< 0x130/0x140 - 0x15F reserved -*/
+	uint32_t fmbm_rcgm[FMAN_PORT_CG_MAP_NUM];
+					/**< Congestion Group Map*/
+	uint32_t fmbm_mpd;		/**< BM Pool Depletion  */
+	uint32_t reserved0184[0x1F];	/**< (0x184-0x1FF) */
+	uint32_t fmbm_rstc;		/**< Rx Statistics Counters*/
+	uint32_t fmbm_rfrc;		/**< Rx Frame Counter*/
+	uint32_t fmbm_rfbc;		/**< Rx Bad Frames Counter*/
+	uint32_t fmbm_rlfc;		/**< Rx Large Frames Counter*/
+	uint32_t fmbm_rffc;		/**< Rx Filter Frames Counter*/
+	uint32_t fmbm_rfdc;		/**< Rx Frame Discard Counter*/
+	uint32_t fmbm_rfldec;		/**< Rx Frames List DMA Error Counter*/
+	uint32_t fmbm_rodc;		/**< Rx Out of Buffers Discard Counter*/
+	uint32_t fmbm_rbdc;		/**< Rx Buffers Deallocate Counter*/
+	uint32_t reserved0224[0x17];	/**< (0x224-0x27F) */
+	uint32_t fmbm_rpc;		/**< Rx Performance Counters*/
+	uint32_t fmbm_rpcp;		/**< Rx Performance Count Parameters*/
+	uint32_t fmbm_rccn;		/**< Rx Cycle Counter*/
+	uint32_t fmbm_rtuc;		/**< Rx Tasks Utilization Counter*/
+	uint32_t fmbm_rrquc;
+	/**< Rx Receive Queue Utilization cntr*/
+	uint32_t fmbm_rduc;		/**< Rx DMA Utilization Counter*/
+	uint32_t fmbm_rfuc;		/**< Rx FIFO Utilization Counter*/
+	uint32_t fmbm_rpac;		/**< Rx Pause Activation Counter*/
+	uint32_t reserved02a0[0x18];	/**< (0x2A0-0x2FF) */
+	uint32_t fmbm_rdbg;		/**< Rx Debug */
+};
+
+struct fman_port_qmi_regs {
+	uint32_t fmqm_pnc;		/**< PortID n Configuration Register */
+	uint32_t fmqm_pns;		/**< PortID n Status Register */
+	uint32_t fmqm_pnts;		/**< PortID n Task Status Register */
+	uint32_t reserved00c[4];	/**< 0xn00C - 0xn01B */
+	uint32_t fmqm_pnen;		/**< PortID n Enqueue NIA Register */
+	uint32_t fmqm_pnetfc;		/**< PortID n Enq Total Frame Counter */
+	uint32_t reserved024[2];	/**< 0xn024 - 0xn02B */
+	uint32_t fmqm_pndn;		/**< PortID n Dequeue NIA Register */
+	uint32_t fmqm_pndc;		/**< PortID n Dequeue Config Register */
+	uint32_t fmqm_pndtfc;		/**< PortID n Dequeue tot Frame cntr */
+	uint32_t fmqm_pndfdc;		/**< PortID n Dequeue FQID Dflt Cntr */
+	uint32_t fmqm_pndcc;		/**< PortID n Dequeue Confirm Counter */
+};
+
+/* This struct exports parameters about an Fman network interface, determined
+ * from the device-tree.
+ */
+struct fman_if {
+	/* Which Fman this interface belongs to */
+	uint8_t fman_idx;
+	/* The type/speed of the interface */
+	enum fman_mac_type mac_type;
+	/* Boolean, set when mac type is memac */
+	uint8_t is_memac;
+	/* Boolean, set when PHY is RGMII */
+	uint8_t is_rgmii;
+	/* The index of this MAC (within the Fman it belongs to) */
+	uint8_t mac_idx;
+	/* The MAC address */
+	struct ether_addr mac_addr;
+	/* The Qman channel to schedule Tx FQs to */
+	u16 tx_channel_id;
+	/* The hard-coded FQIDs for this interface. Note: this doesn't cover
+	 * the PCD nor the "Rx default" FQIDs, which are configured via FMC
+	 * and its XML-based configuration.
+	 */
+	uint32_t fqid_rx_def;
+	uint32_t fqid_rx_err;
+	uint32_t fqid_tx_err;
+	uint32_t fqid_tx_confirm;
+
+	struct list_head bpool_list;
+	/* The node for linking this interface into "fman_if_list" */
+	struct list_head node;
+};
+
+/* This struct exposes parameters for buffer pools, extracted from the network
+ * interface settings in the device tree.
+ */
+struct fman_if_bpool {
+	uint32_t bpid;
+	uint64_t count;
+	uint64_t size;
+	uint64_t addr;
+	/* The node for linking this bpool into fman_if::bpool_list */
+	struct list_head node;
+};
+
+/* Internal Context transfer params - FMBM_RICP */
+struct fman_if_ic_params {
+	/* IC offset in the packet buffer */
+	uint16_t iceof;
+	/* IC internal offset */
+	uint16_t iciof;
+	/* IC size to copy */
+	uint16_t icsz;
+};
+
+/* The exported "struct fman_if" type contains the subset of fields we want
+ * exposed. This struct is embedded in a larger "struct __fman_if" which
+ * contains the extra bits we *don't* want exposed.
+ */
+struct __fman_if {
+	struct fman_if __if;
+	char node_path[PATH_MAX];
+	uint64_t regs_size;
+	void *ccsr_map;
+	void *bmi_map;
+	void *qmi_map;
+	struct list_head node;
+};
+
+/* And this is the base list node that the interfaces are added to. (See the
+ * usage sketch at the end of this header for an example of iterating it.)
+ */
+extern const struct list_head *fman_if_list;
+
+extern int fman_ccsr_map_fd;
+
+/* To iterate the "bpool_list" for an interface. Eg;
+ *        struct fman_if *p = get_ptr_to_some_interface();
+ *        struct fman_if_bpool *bp;
+ *        printf("Interface uses following BPIDs;\n");
+ *        fman_if_for_each_bpool(bp, p) {
+ *            printf("    %d\n", bp->bpid);
+ *            [...]
+ *        }
+ */
+#define fman_if_for_each_bpool(bp, __if) \
+	list_for_each_entry(bp, &(__if)->bpool_list, node)
+
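+/* Record "rc" into the local "_errno" of the calling function (which must
+ * declare it) and log the message along with the current errno value.
+ */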
+#define FMAN_ERR(rc, fmt, args...) \
+	do { \
+		_errno = (rc); \
+		DPAA_BUS_LOG(ERR, fmt "(%d)", ##args, errno); \
+	} while (0)
+
+#define FMAN_IP_REV_1	0xC30C4
+#define FMAN_IP_REV_1_MAJOR_MASK 0x0000FF00
+#define FMAN_IP_REV_1_MAJOR_SHIFT 8
+#define FMAN_V3	0x06
+#define FMAN_V3_CONTEXTA_EN_A2V	0x10000000
+#define FMAN_V3_CONTEXTA_EN_OVOM	0x02000000
+#define FMAN_V3_CONTEXTA_EN_EBD	0x80000000
+#define FMAN_CONTEXTA_DIS_CHECKSUM	0x7ull
+#define FMAN_CONTEXTA_SET_OPCODE11 0x2000000b00000000
+extern u16 fman_ip_rev;
+extern u32 fman_dealloc_bufs_mask_hi;
+extern u32 fman_dealloc_bufs_mask_lo;
+
+/**
+ * Initialize the FMAN driver
+ *
+ * @args void
+ * @return
+ *	0 for success; error code otherwise
+ */
+int fman_init(void);
+
+/**
+ * Teardown the FMAN driver
+ *
+ * @args void
+ * @return void
+ */
+void fman_finish(void);
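+
+/* A minimal usage sketch (illustrative only; assumes a DPAA platform whose
+ * device tree carries "fsl,dpa-ethernet-init" nodes):
+ *
+ *        struct fman_if *p;
+ *
+ *        if (fman_init())
+ *            return -1;
+ *        list_for_each_entry(p, fman_if_list, node)
+ *            printf("FMan %u MAC %u\n", p->fman_idx, p->mac_idx);
+ *        fman_finish();
+ */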
+
+#endif	/* __FMAN_H */
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
new file mode 100644
index 0000000..b77a678
--- /dev/null
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -0,0 +1,96 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2012 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __NETCFG_H
+#define __NETCFG_H
+
+#include <fman.h>
+#include <argp.h>
+
+/* Configuration information related to a specific ethernet port */
+struct fm_eth_port_cfg {
+	/**< A list of PCD FQ ranges, obtained from FMC configuration */
+	struct list_head *list;
+	/**< The "Rx default" FQID, obtained from FMC configuration */
+	uint32_t rx_def;
+	/**< Other interface details are in the fman driver interface */
+	struct fman_if *fman_if;
+};
+
+struct netcfg_info {
+	uint8_t num_ethports;
+	/**< Number of ports */
+	struct fm_eth_port_cfg port_cfg[0];
+	/**< Variable structure array of size num_ethports */
+};
+
+struct interface_info {
+	char *name;
+	struct ether_addr mac_addr;
+	struct ether_addr peer_mac;
+	int mac_present;
+	int fman_enabled_mac_interface;
+};
+
+struct netcfg_interface {
+	uint8_t numof_netcfg_interface;
+	uint8_t numof_fman_enabled_macless;
+	struct interface_info interface_info[0];
+};
+
+/* Parses the device tree (via the fman driver) for DPAA network interfaces
+ * and returns the configuration information in newly allocated memory.
+ */
+struct netcfg_info *netcfg_acquire(void);
+
+/* cfg_ptr: configuration information pointer.
+ * Frees the resources allocated by the configuration layer.
+ */
+void netcfg_release(struct netcfg_info *cfg_ptr);
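+
+/* A minimal usage sketch (illustrative only; use_port() stands in for a
+ * hypothetical consumer of a single port configuration):
+ *
+ *        struct netcfg_info *cfg = netcfg_acquire();
+ *        int i;
+ *
+ *        if (!cfg)
+ *            return -1;
+ *        for (i = 0; i < cfg->num_ethports; i++)
+ *            use_port(&cfg->port_cfg[i]);
+ *        netcfg_release(cfg);
+ */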
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* cfg_ptr: configuration information pointer.
+ * This function dumps configuration data to stdout.
+ */
+void dump_netcfg(struct netcfg_info *cfg_ptr);
+#endif
+
+#endif /* __NETCFG_H */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 06/40] bus/dpaa: add FMan hardware operations
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (4 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
                             ` (35 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/fman/fman_hw.c      | 562 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_fman.h       | 174 +++++++++
 drivers/bus/dpaa/include/fsl_fman_crc64.h | 263 ++++++++++++++
 4 files changed, 1000 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/fman/fman_hw.c
 create mode 100644 drivers/bus/dpaa/include/fsl_fman.h
 create mode 100644 drivers/bus/dpaa/include/fsl_fman_crc64.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index f6e504d..fe65276 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -58,6 +58,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
+	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c
 
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
new file mode 100644
index 0000000..a7ca661
--- /dev/null
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -0,0 +1,562 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <ifaddrs.h>
+#include <fman.h>
+/* This header declares things about Fman hardware itself (the format of status
+ * words and an inline implementation of CRC64). We include it only in order to
+ * instantiate the one global variable it depends on.
+ */
+#include <fsl_fman.h>
+#include <fsl_fman_crc64.h>
+
+/* Instantiate the global variable that the inline CRC64 implementation (in
+ * <fsl_fman.h>) depends on.
+ */
+DECLARE_FMAN_CRC64_TABLE();
+
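+/* Pack a 6-byte MAC address (byte 0 first on the wire) into the low 48 bits
+ * of a uint64_t.
+ */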
+#define ETH_ADDR_TO_UINT64(eth_addr)                  \
+	(uint64_t)(((uint64_t)(eth_addr)[0] << 40) |   \
+	((uint64_t)(eth_addr)[1] << 32) |   \
+	((uint64_t)(eth_addr)[2] << 24) |   \
+	((uint64_t)(eth_addr)[3] << 16) |   \
+	((uint64_t)(eth_addr)[4] << 8) |    \
+	((uint64_t)(eth_addr)[5]))
+
+void
+fman_if_set_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i | HASH_CTRL_MCAST_EN);
+}
+
+void
+fman_if_reset_mcast_filter_table(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *hashtable_ctrl;
+	uint32_t i;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	for (i = 0; i < 64; i++)
+		out_be32(hashtable_ctrl, i & ~HASH_CTRL_MCAST_EN);
+}
+
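+/* Fold a packed 48-bit MAC address into the 6-bit hash code used to index
+ * the 64-entry multicast filter table: hash bit i is the parity (XOR of all
+ * bits) of the i'th byte of the MAC address.
+ */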
+static uint32_t
+get_mac_hash_code(uint64_t eth_addr)
+{
+	uint64_t	mask1, mask2;
+	uint32_t	xorVal = 0;
+	uint8_t		i, j;
+
+	for (i = 0; i < 6; i++) {
+		mask1 = eth_addr & (uint64_t)0x01;
+		eth_addr >>= 1;
+
+		for (j = 0; j < 7; j++) {
+			mask2 = eth_addr & (uint64_t)0x01;
+			mask1 ^= mask2;
+			eth_addr >>= 1;
+		}
+
+		xorVal |= (mask1 << (5 - i));
+	}
+
+	return xorVal;
+}
+
+int
+fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	uint64_t eth_addr;
+	void *hashtable_ctrl;
+	uint32_t hash;
+
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	eth_addr = ETH_ADDR_TO_UINT64(eth);
+
+	if (!(eth_addr & GROUP_ADDRESS))
+		return -1;
+
+	hash = get_mac_hash_code(eth_addr) & HASH_CTRL_ADDR_MASK;
+	hash = hash | HASH_CTRL_MCAST_EN;
+
+	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
+	out_be32(hashtable_ctrl, hash);
+
+	return 0;
+}
+
+int
+fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *mac_reg =
+		&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
+	u32 val = in_be32(mac_reg);
+
+	eth[0] = (val & 0x000000ff) >> 0;
+	eth[1] = (val & 0x0000ff00) >> 8;
+	eth[2] = (val & 0x00ff0000) >> 16;
+	eth[3] = (val & 0xff000000) >> 24;
+
+	mac_reg =  &((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_u;
+	val = in_be32(mac_reg);
+
+	eth[4] = (val & 0x000000ff) >> 0;
+	eth[5] = (val & 0x0000ff00) >> 8;
+
+	return 0;
+}
+
+void
+fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	void *reg;
+
+	if (addr_num) {
+		reg = &((struct memac_regs *)m->ccsr_map)->
+				mac_addr[addr_num-1].mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+		out_be32(reg, 0x0);
+	} else {
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+		out_be32(reg, 0x0);
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+		out_be32(reg, 0x0);
+	}
+}
+
+int
+fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+
+	void *reg;
+	u32 val;
+
+	memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_l;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_l;
+
+	val = (m->__if.mac_addr.addr_bytes[0] |
+	       (m->__if.mac_addr.addr_bytes[1] << 8) |
+	       (m->__if.mac_addr.addr_bytes[2] << 16) |
+	       (m->__if.mac_addr.addr_bytes[3] << 24));
+	out_be32(reg, val);
+
+	if (addr_num)
+		reg = &((struct memac_regs *)m->ccsr_map)->
+					mac_addr[addr_num-1].mac_addr_u;
+	else
+		reg = &((struct memac_regs *)m->ccsr_map)->mac_addr0.mac_addr_u;
+
+	val = ((m->__if.mac_addr.addr_bytes[4] << 0) |
+	       (m->__if.mac_addr.addr_bytes[5] << 8));
+	out_be32(reg, val);
+
+	return 0;
+}
+
+void
+fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	u32 value = 0;
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Rx Ignore Pause Frames */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	if (enable)
+		value = in_be32(cmdcfg) | CMD_CFG_PAUSE_IGNORE;
+	else
+		value = in_be32(cmdcfg) & ~CMD_CFG_PAUSE_IGNORE;
+
+	out_be32(cmdcfg, value);
+}
+
+void
+fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	unsigned int *maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Set Max frame length */
+	maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+	out_be32(maxfrm, (MAXFRM_RX_MASK & max_frame_len));
+}
+
+void
+fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+
+	/* read received packet count */
+	stats->ipackets = ((u64)in_be32(&regs->rfrm_u)) << 32 |
+			in_be32(&regs->rfrm_l);
+	stats->ibytes = ((u64)in_be32(&regs->roct_u)) << 32 |
+			in_be32(&regs->roct_l);
+	stats->ierrors = ((u64)in_be32(&regs->rerr_u)) << 32 |
+			in_be32(&regs->rerr_l);
+
+	/* read transmitted packet count */
+	stats->opackets = ((u64)in_be32(&regs->tfrm_u)) << 32 |
+			in_be32(&regs->tfrm_l);
+	stats->obytes = ((u64)in_be32(&regs->toct_u)) << 32 |
+			in_be32(&regs->toct_l);
+	stats->oerrors = ((u64)in_be32(&regs->terr_u)) << 32 |
+			in_be32(&regs->terr_l);
+}
+
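+/* The memac statistics counters are 64-bit values, each exposed as a pair
+ * of consecutive low/high 32-bit registers laid out contiguously from
+ * reoct_l onwards, so "n" counters can be read generically by offset.
+ */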
+void
+fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+	int i;
+	uint64_t base_offset = offsetof(struct memac_regs, reoct_l);
+
+	for (i = 0; i < n; i++)
+		value[i] = ((u64)in_be32((char *)regs
+				+ base_offset + 8 * i + 4)) << 32 |
+				((u64)in_be32((char *)regs
+				+ base_offset + 8 * i));
+}
+
+void
+fman_if_stats_reset(struct fman_if *p)
+{
+	struct __fman_if *m = container_of(p, struct __fman_if, __if);
+	struct memac_regs *regs = m->ccsr_map;
+	uint32_t tmp;
+
+	tmp = in_be32(&regs->statn_config);
+
+	tmp |= STATS_CFG_CLR;
+
+	out_be32(&regs->statn_config, tmp);
+
+	while (in_be32(&regs->statn_config) & STATS_CFG_CLR)
+		;
+}
+
+void
+fman_if_promiscuous_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_PROMIS_EN);
+}
+
+void
+fman_if_promiscuous_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+	void *cmdcfg;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Disable Rx promiscuous mode */
+	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
+	out_be32(cmdcfg, in_be32(cmdcfg) & (~CMD_CFG_PROMIS_EN));
+}
+
+void
+fman_if_enable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* enable Rx and Tx */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) | 3);
+}
+
+void
+fman_if_disable_rx(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* only disable Rx, not Tx */
+	out_be32(__if->ccsr_map + 8, in_be32(__if->ccsr_map + 8) & ~(u32)2);
+}
+
+void
+fman_if_loopback_enable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	/* Enable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) | IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) | CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_loopback_disable(struct fman_if *p)
+{
+	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+	/* Disable loopback mode */
+	if ((__if->__if.is_memac) && (__if->__if.is_rgmii)) {
+		unsigned int *ifmode =
+			&((struct memac_regs *)__if->ccsr_map)->if_mode;
+		out_be32(ifmode, in_be32(ifmode) & ~IF_MODE_RLP);
+	} else {
+		unsigned int *cmdcfg =
+			&((struct memac_regs *)__if->ccsr_map)->command_config;
+		out_be32(cmdcfg, in_be32(cmdcfg) & ~CMD_CFG_LOOPBACK_EN);
+	}
+}
+
+void
+fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
+		    int bpid, size_t bufsize)
+{
+	u32 fmbm_ebmpi;
+	u32 ebmpi_val_ace = 0xc0000000;
+	u32 ebmpi_mask = 0xffc00000;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ebmpi =
+	       in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0]);
+	fmbm_ebmpi = ebmpi_val_ace | (fmbm_ebmpi & ebmpi_mask) | (bpid << 16) |
+		     (bufsize);
+
+	out_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ebmpi[0],
+		 fmbm_ebmpi);
+}
+
+int
+fman_if_get_fc_quanta(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	return in_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0]);
+}
+
+int
+fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	out_be32(&((struct memac_regs *)__if->ccsr_map)->pause_quanta[0],
+		 pause_quanta);
+	return 0;
+}
+
+int
+fman_if_get_fdoff(struct fman_if *fm_if)
+{
+	u32 fmbm_ricp;
+	int fdoff;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_ricp =
+		   in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp);
+	/* FD offset = (iceof + icsz) * 16 bytes */
+	fdoff = ((fmbm_ricp & iceof_mask) >> 16) * 16 +
+		(fmbm_ricp & icsz_mask) * 16;
+
+	return fdoff;
+}
+
+void
+fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_refqid =
+			&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_refqid;
+	out_be32(fmbm_refqid, err_fqid);
+}
+
+int
+fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	val = in_be32(fmbm_ricp);
+
+	icp->iceof = (val & iceof_mask) >> 12;
+	icp->iciof = (val & iciof_mask) >> 4;
+	icp->icsz = (val & icsz_mask) << 4;
+
+	return 0;
+}
+
+int
+fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	int val = 0;
+	int iceof_mask = 0x001f0000;
+	int icsz_mask = 0x0000001f;
+	int iciof_mask = 0x00000f00;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	val |= (icp->iceof << 12) & iceof_mask;
+	val |= (icp->iciof << 4) & iciof_mask;
+	val |= (icp->icsz >> 4) & icsz_mask;
+
+	unsigned int *fmbm_ricp =
+		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
+	out_be32(fmbm_ricp, val);
+
+	return 0;
+}
+
+void
+fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rebm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_rebm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rebm;
+
+	out_be32(fmbm_rebm, in_be32(fmbm_rebm) | (fd_offset << 16));
+}
+
+void
+fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	out_be32(reg_maxfrm, (in_be32(reg_maxfrm) & 0xFFFF0000) | max_frm);
+}
+
+uint16_t
+fman_if_get_maxfrm(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *reg_maxfrm;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
+
+	return in_be32(reg_maxfrm) & 0x0000FFFF;
+}
+
+void
+fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmqm_pndn;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmqm_pndn = &((struct fman_port_qmi_regs *)__if->qmi_map)->fmqm_pndn;
+
+	out_be32(fmqm_pndn, nia);
+}
+
+void
+fman_if_discard_rx_errors(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_rfsdm, *fmbm_rfsem;
+
+	fmbm_rfsem = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsem;
+	out_be32(fmbm_rfsem, 0);
+
+	/* Configure the discard mask to drop error packets which have DMA
+	 * errors, frame size errors, header errors, etc. The mask 0x010CE3F0
+	 * discards all such errors reported in FD[STATUS].
+	 */
+	fmbm_rfsdm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsdm;
+	out_be32(fmbm_rfsdm, 0x010CE3F0);
+}
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
new file mode 100644
index 0000000..ac38082
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -0,0 +1,174 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_H
+#define __FSL_FMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* The status field in the FD is updated on the Rx side by FMan with the
+ * following information. Refer to the field descriptions in the FMan Block
+ * Guide (FM BG).
+ */
+struct fm_status_t {
+	unsigned int reserved0:3;
+	unsigned int dcl4c:1; /* Don't Check L4 Checksum */
+	unsigned int reserved1:1;
+	unsigned int ufd:1; /* Unsupported Format */
+	unsigned int lge:1; /* Length Error */
+	unsigned int dme:1; /* DMA Error */
+
+	unsigned int reserved2:4;
+	unsigned int fpe:1; /* Frame physical Error */
+	unsigned int fse:1; /* Frame Size Error */
+	unsigned int dis:1; /* Discard by Classification */
+	unsigned int reserved3:1;
+
+	unsigned int eof:1; /* Key Extraction goes out of frame */
+	unsigned int nss:1; /* No Scheme selected */
+	unsigned int kso:1; /* Key Size Overflow */
+	unsigned int reserved4:1;
+	unsigned int fcl:2; /* Frame Color */
+	unsigned int ipp:1; /* Illegal Policer Profile Selected */
+	unsigned int flm:1; /* Frame Length Mismatch */
+	unsigned int pte:1; /* Parser Timeout */
+	unsigned int isp:1; /* Invalid Soft Parser Instruction */
+	unsigned int phe:1; /* Header Error during parsing */
+	unsigned int frdr:1; /* Frame Dropped by disabled port */
+	unsigned int reserved5:4;
+} __attribute__ ((__packed__));
+
+/* Set MAC address for a particular interface */
+int fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num);
+
+/* Remove a MAC address for a particular interface */
+void fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num);
+
+/* Get the FMAN statistics */
+void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
+
+/* Reset the FMAN statistics */
+void fman_if_stats_reset(struct fman_if *p);
+
+/* Get all of the FMAN statistics */
+void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
+
+/* Set ignore pause option for a specific interface */
+void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
+
+/* Set max frame length */
+void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
+
+/* Enable/disable Rx promiscuous mode on specified interface */
+void fman_if_promiscuous_enable(struct fman_if *p);
+void fman_if_promiscuous_disable(struct fman_if *p);
+
+/* Enable/disable Rx on specific interfaces */
+void fman_if_enable_rx(struct fman_if *p);
+void fman_if_disable_rx(struct fman_if *p);
+
+/* Enable/disable loopback on specific interfaces */
+void fman_if_loopback_enable(struct fman_if *p);
+void fman_if_loopback_disable(struct fman_if *p);
+
+/* Set buffer pool on specific interface */
+void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
+		    size_t bufsize);
+
+/* Get Flow Control pause quanta on specific interface */
+int fman_if_get_fc_quanta(struct fman_if *fm_if);
+
+/* Set Flow Control pause quanta on specific interface */
+int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
+
+/* Set default error fqid on specific interface */
+void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
+
+/* Get IC transfer params */
+int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
+
+/* Set IC transfer params */
+int fman_if_set_ic_params(struct fman_if *fm_if,
+			  const struct fman_if_ic_params *icp);
+
+/* Get interface fd->offset value */
+int fman_if_get_fdoff(struct fman_if *fm_if);
+
+/* Set interface fd->offset value */
+void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
+
+/* Get interface Max Frame length (MTU) */
+uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
+
+/* Set interface Max Frame length (MTU) */
+void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
+
+/* Set interface next invoked action for dequeue operation */
+void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
+
+/* Discard error packets on Rx */
+void fman_if_discard_rx_errors(struct fman_if *fm_if);
+
+void fman_if_set_mcast_filter_table(struct fman_if *p);
+
+void fman_if_reset_mcast_filter_table(struct fman_if *p);
+
+int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
+
+int fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth);
+
+/* Enable/disable Rx on all interfaces */
+static inline void fman_if_enable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_enable_rx(__if);
+}
+
+static inline void fman_if_disable_all_rx(void)
+{
+	struct fman_if *__if;
+
+	list_for_each_entry(__if, fman_if_list, node)
+		fman_if_disable_rx(__if);
+}
+#endif /* __FSL_FMAN_H */
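
[Editorial note] For illustration, a minimal bring-up sketch using the
interface-level API declared above; the helper name is hypothetical and
error handling is omitted, but the fman_if_* calls are the ones exported
by this header:

    #include <stdint.h>
    #include <fsl_fman.h>

    /* Hypothetical helper: 'fif' is assumed to come from the FMan
     * interface list that the bus scan populates.
     */
    static void fman_if_basic_setup(struct fman_if *fif, uint32_t err_fqid)
    {
            fman_if_set_maxfrm(fif, 1518);       /* max frame length */
            fman_if_set_fdoff(fif, 64);          /* fd->offset */
            fman_if_set_err_fqid(fif, err_fqid); /* error frame queue */
            fman_if_discard_rx_errors(fif);      /* drop errored frames */
            fman_if_enable_rx(fif);              /* start reception */
    }
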
diff --git a/drivers/bus/dpaa/include/fsl_fman_crc64.h b/drivers/bus/dpaa/include/fsl_fman_crc64.h
new file mode 100644
index 0000000..af5803f
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_fman_crc64.h
@@ -0,0 +1,263 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_FMAN_CRC64_H
+#define __FSL_FMAN_CRC64_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+ * The following definitions provide a software implementation of the CRC64
+ * algorithm implemented within Fman.
+ *
+ * The following example shows how to compute a CRC64 hash value based on
+ * SRC_IP, DST_IP and ESP_SPI values
+ *
+ *     #define compute_hash(saddr,daddr,spi) \
+ *        do { \
+ *           uint64_t result; \
+ *           result = fman_crc64_init(); \
+ *           result = fman_crc64_compute_32bit(saddr, result); \
+ *           result = fman_crc64_compute_32bit(daddr, result); \
+ *           result = fman_crc64_compute_32bit(spi, result); \
+ *           return (uint32_t) result & RC_HASH_MASK; \
+ *        } while (0);
+ *
+ * If hashing over a different number of fields (or of different types) is
+ * required, this can be implemented using the following primitives.
+ */
+
+/* The following table provides the constants used by the Fman CRC64
+ * implementation. The table is instantiated within the DPAA fman driver.
+ * However if the application is not going to be linked against the DPAA fman
+ * driver but will use this Fman CRC64 implementation, then it will need to
+ * instantiate this table by using the DECLARE_FMAN_CRC64_TABLE() macro.
+ */
+struct fman_crc64_t {
+	uint64_t initial;
+	uint64_t table[1 << 8];
+};
+extern struct fman_crc64_t FMAN_CRC64_ECMA_182;
+#define DECLARE_FMAN_CRC64_TABLE() \
+struct fman_crc64_t FMAN_CRC64_ECMA_182 = { \
+	0xFFFFFFFFFFFFFFFFULL, \
+	{ \
+		0x0000000000000000ULL, 0xb32e4cbe03a75f6fULL, \
+		0xf4843657a840a05bULL, 0x47aa7ae9abe7ff34ULL, \
+		0x7bd0c384ff8f5e33ULL, 0xc8fe8f3afc28015cULL, \
+		0x8f54f5d357cffe68ULL, 0x3c7ab96d5468a107ULL, \
+		0xf7a18709ff1ebc66ULL, 0x448fcbb7fcb9e309ULL, \
+		0x0325b15e575e1c3dULL, 0xb00bfde054f94352ULL, \
+		0x8c71448d0091e255ULL, 0x3f5f08330336bd3aULL, \
+		0x78f572daa8d1420eULL, 0xcbdb3e64ab761d61ULL, \
+		0x7d9ba13851336649ULL, 0xceb5ed8652943926ULL, \
+		0x891f976ff973c612ULL, 0x3a31dbd1fad4997dULL, \
+		0x064b62bcaebc387aULL, 0xb5652e02ad1b6715ULL, \
+		0xf2cf54eb06fc9821ULL, 0x41e11855055bc74eULL, \
+		0x8a3a2631ae2dda2fULL, 0x39146a8fad8a8540ULL, \
+		0x7ebe1066066d7a74ULL, 0xcd905cd805ca251bULL, \
+		0xf1eae5b551a2841cULL, 0x42c4a90b5205db73ULL, \
+		0x056ed3e2f9e22447ULL, 0xb6409f5cfa457b28ULL, \
+		0xfb374270a266cc92ULL, 0x48190ecea1c193fdULL, \
+		0x0fb374270a266cc9ULL, 0xbc9d3899098133a6ULL, \
+		0x80e781f45de992a1ULL, 0x33c9cd4a5e4ecdceULL, \
+		0x7463b7a3f5a932faULL, 0xc74dfb1df60e6d95ULL, \
+		0x0c96c5795d7870f4ULL, 0xbfb889c75edf2f9bULL, \
+		0xf812f32ef538d0afULL, 0x4b3cbf90f69f8fc0ULL, \
+		0x774606fda2f72ec7ULL, 0xc4684a43a15071a8ULL, \
+		0x83c230aa0ab78e9cULL, 0x30ec7c140910d1f3ULL, \
+		0x86ace348f355aadbULL, 0x3582aff6f0f2f5b4ULL, \
+		0x7228d51f5b150a80ULL, 0xc10699a158b255efULL, \
+		0xfd7c20cc0cdaf4e8ULL, 0x4e526c720f7dab87ULL, \
+		0x09f8169ba49a54b3ULL, 0xbad65a25a73d0bdcULL, \
+		0x710d64410c4b16bdULL, 0xc22328ff0fec49d2ULL, \
+		0x85895216a40bb6e6ULL, 0x36a71ea8a7ace989ULL, \
+		0x0adda7c5f3c4488eULL, 0xb9f3eb7bf06317e1ULL, \
+		0xfe5991925b84e8d5ULL, 0x4d77dd2c5823b7baULL, \
+		0x64b62bcaebc387a1ULL, 0xd7986774e864d8ceULL, \
+		0x90321d9d438327faULL, 0x231c512340247895ULL, \
+		0x1f66e84e144cd992ULL, 0xac48a4f017eb86fdULL, \
+		0xebe2de19bc0c79c9ULL, 0x58cc92a7bfab26a6ULL, \
+		0x9317acc314dd3bc7ULL, 0x2039e07d177a64a8ULL, \
+		0x67939a94bc9d9b9cULL, 0xd4bdd62abf3ac4f3ULL, \
+		0xe8c76f47eb5265f4ULL, 0x5be923f9e8f53a9bULL, \
+		0x1c4359104312c5afULL, 0xaf6d15ae40b59ac0ULL, \
+		0x192d8af2baf0e1e8ULL, 0xaa03c64cb957be87ULL, \
+		0xeda9bca512b041b3ULL, 0x5e87f01b11171edcULL, \
+		0x62fd4976457fbfdbULL, 0xd1d305c846d8e0b4ULL, \
+		0x96797f21ed3f1f80ULL, 0x2557339fee9840efULL, \
+		0xee8c0dfb45ee5d8eULL, 0x5da24145464902e1ULL, \
+		0x1a083bacedaefdd5ULL, 0xa9267712ee09a2baULL, \
+		0x955cce7fba6103bdULL, 0x267282c1b9c65cd2ULL, \
+		0x61d8f8281221a3e6ULL, 0xd2f6b4961186fc89ULL, \
+		0x9f8169ba49a54b33ULL, 0x2caf25044a02145cULL, \
+		0x6b055fede1e5eb68ULL, 0xd82b1353e242b407ULL, \
+		0xe451aa3eb62a1500ULL, 0x577fe680b58d4a6fULL, \
+		0x10d59c691e6ab55bULL, 0xa3fbd0d71dcdea34ULL, \
+		0x6820eeb3b6bbf755ULL, 0xdb0ea20db51ca83aULL, \
+		0x9ca4d8e41efb570eULL, 0x2f8a945a1d5c0861ULL, \
+		0x13f02d374934a966ULL, 0xa0de61894a93f609ULL, \
+		0xe7741b60e174093dULL, 0x545a57dee2d35652ULL, \
+		0xe21ac88218962d7aULL, 0x5134843c1b317215ULL, \
+		0x169efed5b0d68d21ULL, 0xa5b0b26bb371d24eULL, \
+		0x99ca0b06e7197349ULL, 0x2ae447b8e4be2c26ULL, \
+		0x6d4e3d514f59d312ULL, 0xde6071ef4cfe8c7dULL, \
+		0x15bb4f8be788911cULL, 0xa6950335e42fce73ULL, \
+		0xe13f79dc4fc83147ULL, 0x521135624c6f6e28ULL, \
+		0x6e6b8c0f1807cf2fULL, 0xdd45c0b11ba09040ULL, \
+		0x9aefba58b0476f74ULL, 0x29c1f6e6b3e0301bULL, \
+		0xc96c5795d7870f42ULL, 0x7a421b2bd420502dULL, \
+		0x3de861c27fc7af19ULL, 0x8ec62d7c7c60f076ULL, \
+		0xb2bc941128085171ULL, 0x0192d8af2baf0e1eULL, \
+		0x4638a2468048f12aULL, 0xf516eef883efae45ULL, \
+		0x3ecdd09c2899b324ULL, 0x8de39c222b3eec4bULL, \
+		0xca49e6cb80d9137fULL, 0x7967aa75837e4c10ULL, \
+		0x451d1318d716ed17ULL, 0xf6335fa6d4b1b278ULL, \
+		0xb199254f7f564d4cULL, 0x02b769f17cf11223ULL, \
+		0xb4f7f6ad86b4690bULL, 0x07d9ba1385133664ULL, \
+		0x4073c0fa2ef4c950ULL, 0xf35d8c442d53963fULL, \
+		0xcf273529793b3738ULL, 0x7c0979977a9c6857ULL, \
+		0x3ba3037ed17b9763ULL, 0x888d4fc0d2dcc80cULL, \
+		0x435671a479aad56dULL, 0xf0783d1a7a0d8a02ULL, \
+		0xb7d247f3d1ea7536ULL, 0x04fc0b4dd24d2a59ULL, \
+		0x3886b22086258b5eULL, 0x8ba8fe9e8582d431ULL, \
+		0xcc0284772e652b05ULL, 0x7f2cc8c92dc2746aULL, \
+		0x325b15e575e1c3d0ULL, 0x8175595b76469cbfULL, \
+		0xc6df23b2dda1638bULL, 0x75f16f0cde063ce4ULL, \
+		0x498bd6618a6e9de3ULL, 0xfaa59adf89c9c28cULL, \
+		0xbd0fe036222e3db8ULL, 0x0e21ac88218962d7ULL, \
+		0xc5fa92ec8aff7fb6ULL, 0x76d4de52895820d9ULL, \
+		0x317ea4bb22bfdfedULL, 0x8250e80521188082ULL, \
+		0xbe2a516875702185ULL, 0x0d041dd676d77eeaULL, \
+		0x4aae673fdd3081deULL, 0xf9802b81de97deb1ULL, \
+		0x4fc0b4dd24d2a599ULL, 0xfceef8632775faf6ULL, \
+		0xbb44828a8c9205c2ULL, 0x086ace348f355aadULL, \
+		0x34107759db5dfbaaULL, 0x873e3be7d8faa4c5ULL, \
+		0xc094410e731d5bf1ULL, 0x73ba0db070ba049eULL, \
+		0xb86133d4dbcc19ffULL, 0x0b4f7f6ad86b4690ULL, \
+		0x4ce50583738cb9a4ULL, 0xffcb493d702be6cbULL, \
+		0xc3b1f050244347ccULL, 0x709fbcee27e418a3ULL, \
+		0x3735c6078c03e797ULL, 0x841b8ab98fa4b8f8ULL, \
+		0xadda7c5f3c4488e3ULL, 0x1ef430e13fe3d78cULL, \
+		0x595e4a08940428b8ULL, 0xea7006b697a377d7ULL, \
+		0xd60abfdbc3cbd6d0ULL, 0x6524f365c06c89bfULL, \
+		0x228e898c6b8b768bULL, 0x91a0c532682c29e4ULL, \
+		0x5a7bfb56c35a3485ULL, 0xe955b7e8c0fd6beaULL, \
+		0xaeffcd016b1a94deULL, 0x1dd181bf68bdcbb1ULL, \
+		0x21ab38d23cd56ab6ULL, 0x9285746c3f7235d9ULL, \
+		0xd52f0e859495caedULL, 0x6601423b97329582ULL, \
+		0xd041dd676d77eeaaULL, 0x636f91d96ed0b1c5ULL, \
+		0x24c5eb30c5374ef1ULL, 0x97eba78ec690119eULL, \
+		0xab911ee392f8b099ULL, 0x18bf525d915feff6ULL, \
+		0x5f1528b43ab810c2ULL, 0xec3b640a391f4fadULL, \
+		0x27e05a6e926952ccULL, 0x94ce16d091ce0da3ULL, \
+		0xd3646c393a29f297ULL, 0x604a2087398eadf8ULL, \
+		0x5c3099ea6de60cffULL, 0xef1ed5546e415390ULL, \
+		0xa8b4afbdc5a6aca4ULL, 0x1b9ae303c601f3cbULL, \
+		0x56ed3e2f9e224471ULL, 0xe5c372919d851b1eULL, \
+		0xa26908783662e42aULL, 0x114744c635c5bb45ULL, \
+		0x2d3dfdab61ad1a42ULL, 0x9e13b115620a452dULL, \
+		0xd9b9cbfcc9edba19ULL, 0x6a978742ca4ae576ULL, \
+		0xa14cb926613cf817ULL, 0x1262f598629ba778ULL, \
+		0x55c88f71c97c584cULL, 0xe6e6c3cfcadb0723ULL, \
+		0xda9c7aa29eb3a624ULL, 0x69b2361c9d14f94bULL, \
+		0x2e184cf536f3067fULL, 0x9d36004b35545910ULL, \
+		0x2b769f17cf112238ULL, 0x9858d3a9ccb67d57ULL, \
+		0xdff2a94067518263ULL, 0x6cdce5fe64f6dd0cULL, \
+		0x50a65c93309e7c0bULL, 0xe388102d33392364ULL, \
+		0xa4226ac498dedc50ULL, 0x170c267a9b79833fULL, \
+		0xdcd7181e300f9e5eULL, 0x6ff954a033a8c131ULL, \
+		0x28532e49984f3e05ULL, 0x9b7d62f79be8616aULL, \
+		0xa707db9acf80c06dULL, 0x14299724cc279f02ULL, \
+		0x5383edcd67c06036ULL, 0xe0ada17364673f59ULL} \
+}
+
+/*
+ * Return the initial CRC seed. Use the value returned from this API as the
+ * "crc" parameter to the first call to add data.
+ */
+static inline uint64_t fman_crc64_init(void)
+{
+	return FMAN_CRC64_ECMA_182.initial;
+}
+
+/* Updates the CRC with arbitrary data */
+static inline uint64_t fman_crc64_update(uint64_t crc,
+					 void *data, unsigned int len)
+{
+	uint8_t *p = data;
+	while (len--)
+		crc = FMAN_CRC64_ECMA_182.table[(crc ^ *(p++)) & 0xff] ^
+				(crc >> 8);
+	return crc;
+}
+
+/* Shorthands for updating the CRC with 8/16/32 bits of data.
+ * IMPORTANT NOTE: the typed "data" arguments should not be mistaken for
+ * host-endian numerical values; the assumption is that these values contain
+ * big-endian (i.e. network byte order) data.
+ */
+static inline uint64_t fman_crc64_compute_32bit(uint32_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_16bit(uint16_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+static inline uint64_t fman_crc64_compute_8bit(uint8_t data, uint64_t crc)
+{
+	return fman_crc64_update(crc, &data, sizeof(data));
+}
+
+/*
+ * Finalise the CRC (using 1's complement)
+ */
+static inline uint64_t fman_crc64_finish(uint64_t seed)
+{
+	return ~seed;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_FMAN_CRC64_H */
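
[Editorial note] To make the init/update/finish flow concrete, here is a
hedged, standalone sketch. As the comment above notes, the table must be
instantiated once with DECLARE_FMAN_CRC64_TABLE() when not linking
against the DPAA fman driver:

    #include <stdint.h>
    #include <stdio.h>
    #include <fsl_fman_crc64.h>

    DECLARE_FMAN_CRC64_TABLE();

    int main(void)
    {
            uint8_t buf[] = { 0xde, 0xad, 0xbe, 0xef };
            uint64_t crc = fman_crc64_init();

            /* Feed the buffer byte-wise, then invert to finalise */
            crc = fman_crc64_update(crc, buf, sizeof(buf));
            crc = fman_crc64_finish(crc);
            printf("crc64 = 0x%016llx\n", (unsigned long long)crc);
            return 0;
    }
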
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
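
[Editorial aside] The packed fm_status_t layout added in fsl_fman.h above
maps directly onto the 32-bit FD[STATUS] word. A hedged sketch of using
it to classify an Rx error; byte-order handling is assumed to have been
done by the caller, and the chosen error bits are illustrative:

    #include <stdint.h>
    #include <string.h>
    #include <fsl_fman.h>

    /* Returns non-zero if the status word reports a hard Rx error */
    static int fd_status_has_error(uint32_t status_word)
    {
            struct fm_status_t st;

            memcpy(&st, &status_word, sizeof(st));
            return st.dme || st.fpe || st.fse || st.phe;
    }
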

* [PATCH v6 07/40] bus/dpaa: enable DPAA IOCTL portal driver
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (5 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
                             ` (34 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Userspace applications interact with DPAA blocks using this IOCTL driver.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile             |   4 +-
 drivers/bus/dpaa/base/qbman/process.c | 331 ++++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h    |  88 +++++++++
 drivers/bus/dpaa/include/process.h    | 107 +++++++++++
 4 files changed, 529 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/process.c
 create mode 100644 drivers/bus/dpaa/include/fsl_usd.h
 create mode 100644 drivers/bus/dpaa/include/process.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index fe65276..f06521c 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -43,6 +43,7 @@ CFLAGS += -Wno-cast-qual
 CFLAGS += -D _GNU_SOURCE
 CFLAGS += -I$(RTE_BUS_DPAA)/
 CFLAGS += -I$(RTE_BUS_DPAA)/include
+CFLAGS += -I$(RTE_BUS_DPAA)/base/qbman
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
@@ -60,7 +61,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman.c \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
-	base/fman/netcfg_layer.c
+	base/fman/netcfg_layer.c \
+	base/qbman/process.c
 
 # Link Pthread
 LDLIBS += -lpthread
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
new file mode 100644
index 0000000..b8ec539
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -0,0 +1,331 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2011-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <assert.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+
+#include "process.h"
+
+#include <fsl_usd.h>
+
+/* As higher-level drivers will be built on top of this (dma_mem, qbman, ...),
+ * it's preferable that the process driver itself not provide any exported API.
+ * As such, combined with the fact that none of these operations are
+ * performance critical, it is justified to use lazy initialisation, so that's
+ * what the lock is for.
+ */
+static int fd = -1;
+static pthread_mutex_t fd_init_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static int check_fd(void)
+{
+	int ret;
+
+	if (fd >= 0)
+		return 0;
+	ret = pthread_mutex_lock(&fd_init_lock);
+	assert(!ret);
+	/* check again with the lock held */
+	if (fd < 0)
+		fd = open(PROCESS_PATH, O_RDWR);
+	ret = pthread_mutex_unlock(&fd_init_lock);
+	assert(!ret);
+	return (fd >= 0) ? 0 : -ENODEV;
+}
+
+#define DPAA_IOCTL_MAGIC 'u'
+struct dpaa_ioctl_id_alloc {
+	uint32_t base; /* Return value, the start of the allocated range */
+	enum dpaa_id_type id_type; /* what kind of resource(s) to allocate */
+	uint32_t num; /* how many IDs to allocate (and return value) */
+	uint32_t align; /* must be a power of 2, 0 is treated like 1 */
+	int partial; /* whether to allow less than 'num' */
+};
+
+struct dpaa_ioctl_id_release {
+	/* Input; */
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+struct dpaa_ioctl_id_reserve {
+	enum dpaa_id_type id_type;
+	uint32_t base;
+	uint32_t num;
+};
+
+#define DPAA_IOCTL_ID_ALLOC \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x01, struct dpaa_ioctl_id_alloc)
+#define DPAA_IOCTL_ID_RELEASE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x02, struct dpaa_ioctl_id_release)
+#define DPAA_IOCTL_ID_RESERVE \
+	_IOW(DPAA_IOCTL_MAGIC, 0x0A, struct dpaa_ioctl_id_reserve)
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial)
+{
+	struct dpaa_ioctl_id_alloc id = {
+		.id_type = id_type,
+		.num = num,
+		.align = align,
+		.partial = partial
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	ret = ioctl(fd, DPAA_IOCTL_ID_ALLOC, &id);
+	if (ret)
+		return ret;
+	for (ret = 0; ret < (int)id.num; ret++)
+		base[ret] = id.base + ret;
+	return id.num;
+}
+
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_release id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret) {
+		fprintf(stderr, "Process FD failure\n");
+		return;
+	}
+	ret = ioctl(fd, DPAA_IOCTL_ID_RELEASE, &id);
+	if (ret)
+		fprintf(stderr, "Process FD ioctl failure type %d base 0x%x num %d\n",
+			id_type, base, num);
+}
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num)
+{
+	struct dpaa_ioctl_id_reserve id = {
+		.id_type = id_type,
+		.base = base,
+		.num = num
+	};
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+	return ioctl(fd, DPAA_IOCTL_ID_RESERVE, &id);
+}
+
+/***************************************/
+/* Mapping and using QMan/BMan portals */
+/***************************************/
+
+#define DPAA_IOCTL_PORTAL_MAP \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x07, struct dpaa_ioctl_portal_map)
+#define DPAA_IOCTL_PORTAL_UNMAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x08, struct dpaa_portal_map)
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_MAP, params);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_MAP)");
+		return ret;
+	}
+	return 0;
+}
+
+int process_portal_unmap(struct dpaa_portal_map *map)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_PORTAL_UNMAP, map);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_PORTAL_UNMAP)");
+		return ret;
+	}
+	return 0;
+}
+
+#define DPAA_IOCTL_PORTAL_IRQ_MAP \
+	_IOW(DPAA_IOCTL_MAGIC, 0x09, struct dpaa_ioctl_irq_map)
+
+int process_portal_irq_map(int ifd, struct dpaa_ioctl_irq_map *map)
+{
+	map->fd = fd;
+	return ioctl(ifd, DPAA_IOCTL_PORTAL_IRQ_MAP, map);
+}
+
+int process_portal_irq_unmap(int ifd)
+{
+	return close(ifd);
+}
+
+struct dpaa_ioctl_raw_portal {
+	/* inputs */
+	enum dpaa_portal_type type; /* Type of portal to allocate */
+
+	uint8_t enable_stash; /* set to non-zero to turn on stashing */
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map, or QBMAN_ANY_PORTAL_IDX
+	 * for don't care. The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+#define DPAA_IOCTL_ALLOC_RAW_PORTAL \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x0C, struct dpaa_ioctl_raw_portal)
+
+#define DPAA_IOCTL_FREE_RAW_PORTAL \
+	_IOR(DPAA_IOCTL_MAGIC, 0x0D, struct dpaa_ioctl_raw_portal)
+
+static int process_portal_allocate(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_ALLOC_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_ALLOC_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+static int process_portal_free(struct dpaa_ioctl_raw_portal *portal)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_FREE_RAW_PORTAL, portal);
+	if (ret) {
+		perror("ioctl(DPAA_IOCTL_FREE_RAW_PORTAL)");
+		return ret;
+	}
+	return 0;
+}
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.enable_stash = portal->enable_stash;
+	input.cpu = portal->cpu;
+	input.cache = portal->cache;
+	input.window = portal->window;
+	input.sdest = portal->sdest;
+
+	ret = process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena = input.cena;
+	return 0;
+}
+
+int qman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_qman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+	int ret;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.enable_stash = 0;
+
+	ret = process_portal_allocate(&input);
+	if (ret)
+		return ret;
+	portal->index = input.index;
+	portal->cinh = input.cinh;
+	portal->cena = input.cena;
+	return 0;
+}
+
+int bman_free_raw_portal(struct dpaa_raw_portal *portal)
+{
+	struct dpaa_ioctl_raw_portal input;
+
+	input.type = dpaa_portal_bman;
+	input.index = portal->index;
+	input.cinh = portal->cinh;
+	input.cena = portal->cena;
+
+	return process_portal_free(&input);
+}
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
new file mode 100644
index 0000000..4ff48c6
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_USD_H
+#define __FSL_USD_H
+
+#include <compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define QBMAN_ANY_PORTAL_IDX 0xffffffff
+
+/* Obtain and free raw (uninitialized) portals */
+
+struct dpaa_raw_portal {
+	/* inputs */
+
+	/* set to non-zero to turn on stashing */
+	uint8_t enable_stash;
+	/* Stashing attributes for the portal */
+	uint32_t cpu;
+	uint32_t cache;
+	uint32_t window;
+
+	/* Specifies the stash request queue this portal should use */
+	uint8_t sdest;
+
+	/* Specifies a specific portal index to map, or QBMAN_ANY_PORTAL_IDX
+	 * for don't care. The portal index will be populated by the
+	 * driver when the ioctl() successfully completes.
+	 */
+	uint32_t index;
+
+	/* outputs */
+	uint64_t cinh;
+	uint64_t cena;
+};
+
+int qman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int qman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
+int bman_free_raw_portal(struct dpaa_raw_portal *portal);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_USD_H */
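
[Editorial note] A hedged usage sketch for the raw-portal API above:
allocate any free QMan portal without stashing, then free it. The
function name is illustrative and error handling is abbreviated:

    #include <string.h>
    #include <fsl_usd.h>

    static int raw_qman_portal_roundtrip(void)
    {
            struct dpaa_raw_portal p;
            int ret;

            memset(&p, 0, sizeof(p));
            p.index = QBMAN_ANY_PORTAL_IDX; /* let the driver pick */
            p.enable_stash = 0;             /* stashing off */

            ret = qman_allocate_raw_portal(&p);
            if (ret)
                    return ret;
            /* p.index, p.cinh and p.cena are now populated */
            return qman_free_raw_portal(&p);
    }
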
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
new file mode 100644
index 0000000..989ddcd
--- /dev/null
+++ b/drivers/bus/dpaa/include/process.h
@@ -0,0 +1,107 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2011 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PROCESS_H
+#define	__PROCESS_H
+
+#include <compat.h>
+
+/* The process device underlies process-wide user/kernel interactions, such as
+ * mapping dma_mem memory and providing accompanying ioctl()s. (This isn't used
+ * for portals, which use one UIO device each.)
+ */
+#define PROCESS_PATH		"/dev/fsl-usdpaa"
+
+/* Allocation of resource IDs uses a generic interface. This enum is used to
+ * distinguish between the type of underlying object being manipulated.
+ */
+enum dpaa_id_type {
+	dpaa_id_fqid,
+	dpaa_id_bpid,
+	dpaa_id_qpool,
+	dpaa_id_cgrid,
+	dpaa_id_max /* <-- not a valid type, represents the number of types */
+};
+
+int process_alloc(enum dpaa_id_type id_type, uint32_t *base, uint32_t num,
+		  uint32_t align, int partial);
+void process_release(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+int process_reserve(enum dpaa_id_type id_type, uint32_t base, uint32_t num);
+
+/* Mapping and using QMan/BMan portals */
+enum dpaa_portal_type {
+	dpaa_portal_qman,
+	dpaa_portal_bman,
+};
+
+struct dpaa_ioctl_portal_map {
+	/* Input parameter: whether a QMan or a BMan portal is required. */
+	enum dpaa_portal_type type;
+	/* Specifies a specific portal index to map, or 0xffffffff
+	 * for don't care.
+	 */
+	uint32_t index;
+
+	/* Return value if the map succeeds, this gives the mapped
+	 * cache-inhibited (cinh) and cache-enabled (cena) addresses.
+	 */
+	struct dpaa_portal_map {
+		void *cinh;
+		void *cena;
+	} addr;
+	/* Qman-specific return values */
+	u16 channel;
+	uint32_t pools;
+};
+
+int process_portal_map(struct dpaa_ioctl_portal_map *params);
+int process_portal_unmap(struct dpaa_portal_map *map);
+
+struct dpaa_ioctl_irq_map {
+	enum dpaa_portal_type type; /* Type of portal to map */
+	int fd; /* File descriptor that contains the portal */
+	void *portal_cinh; /* Cache inhibited area to identify the portal */
+};
+
+int process_portal_irq_map(int fd, struct dpaa_ioctl_irq_map *irq);
+int process_portal_irq_unmap(int fd);
+
+#endif	/*  __PROCESS_H */
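
[Editorial note] To make the calling convention concrete, a hedged sketch
that allocates a block of frame-queue IDs and maps a QMan portal through
this interface; the function name is illustrative and error handling is
abbreviated:

    #include <string.h>
    #include <process.h>

    static int process_api_demo(void)
    {
            uint32_t fqids[8];
            struct dpaa_ioctl_portal_map map;
            int n, ret;

            /* Up to 8 FQIDs, 8-aligned; partial=1 accepts fewer */
            n = process_alloc(dpaa_id_fqid, fqids, 8, 8, 1);
            if (n < 0)
                    return n;

            memset(&map, 0, sizeof(map));
            map.type = dpaa_portal_qman;
            map.index = 0xffffffff; /* any free portal */
            ret = process_portal_map(&map);
            if (ret) {
                    process_release(dpaa_id_fqid, fqids[0], n);
                    return ret;
            }
            /* ... use map.addr.cena / map.addr.cinh ... */
            process_portal_unmap(&map.addr);
            process_release(dpaa_id_fqid, fqids[0], n);
            return 0;
    }
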
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 08/40] bus/dpaa: add layer for interrupt emulation using pthread
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (6 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
                             ` (33 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

An interrupt manager is emulated in userspace using pthreads.
The QBMAN layer registers handlers with it so that it is notified
of any interrupt request coming from the DPAA blocks.

Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile              |   3 +-
 drivers/bus/dpaa/base/qbman/dpaa_sys.c | 136 +++++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/dpaa_sys.h |  61 +++++++++++++++
 3 files changed, 199 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.c
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_sys.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index f06521c..5b76a4b 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -62,7 +62,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/fman_hw.c \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
-	base/qbman/process.c
+	base/qbman/process.c \
+	base/qbman/dpaa_sys.c
 
 # Link Pthread
 LDLIBS += -lpthread
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.c b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
new file mode 100644
index 0000000..0017da5
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.c
@@ -0,0 +1,136 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <process.h>
+#include "dpaa_sys.h"
+
+struct process_interrupt {
+	int irq;
+	irqreturn_t (*isr)(int irq, void *arg);
+	unsigned long flags;
+	const char *name;
+	void *arg;
+	struct list_head node;
+};
+
+static COMPAT_LIST_HEAD(process_irq_list);
+static pthread_mutex_t process_irq_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static void process_interrupt_install(struct process_interrupt *irq)
+{
+	int ret;
+	/* Add the irq to the end of the list */
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_add_tail(&irq->node, &process_irq_list);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static void process_interrupt_remove(struct process_interrupt *irq)
+{
+	int ret;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	list_del(&irq->node);
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+}
+
+static struct process_interrupt *process_interrupt_find(int irq_num)
+{
+	int ret;
+	struct process_interrupt *i, *irq_node = NULL;
+
+	ret = pthread_mutex_lock(&process_irq_lock);
+	assert(!ret);
+	/* Track the match in a separate pointer so that NULL (rather
+	 * than a stale loop cursor) is returned when no entry matches.
+	 */
+	list_for_each_entry(i, &process_irq_list, node) {
+		if (i->irq == irq_num) {
+			irq_node = i;
+			break;
+		}
+	}
+	ret = pthread_mutex_unlock(&process_irq_lock);
+	assert(!ret);
+	return irq_node;
+}
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name,
+		      void *arg __maybe_unused)
+{
+	struct process_interrupt *irq_node =
+		kmalloc(sizeof(*irq_node), GFP_KERNEL);
+
+	if (!irq_node)
+		return -ENOMEM;
+	irq_node->irq = irq;
+	irq_node->isr = isr;
+	irq_node->flags = flags;
+	irq_node->name = name;
+	irq_node->arg = arg;
+	process_interrupt_install(irq_node);
+	return 0;
+}
+
+int qbman_free_irq(int irq, __maybe_unused void *arg)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (!irq_node)
+		return -EINVAL;
+	process_interrupt_remove(irq_node);
+	kfree(irq_node);
+	return 0;
+}
+
+/* This is the interface from the platform-specific driver code to obtain
+ * interrupt handlers that have been registered.
+ */
+void qbman_invoke_irq(int irq)
+{
+	struct process_interrupt *irq_node = process_interrupt_find(irq);
+
+	if (irq_node)
+		irq_node->isr(irq, irq_node->arg);
+}
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_sys.h b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
new file mode 100644
index 0000000..bee9fe5
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_sys.h
@@ -0,0 +1,61 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_SYS_H
+#define __DPAA_SYS_H
+
+#include <of.h>
+
+/* For 2-element tables related to cache-inhibited and cache-enabled mappings */
+#define DPAA_PORTAL_CE 0
+#define DPAA_PORTAL_CI 1
+
+#define DPAA_ASSERT(x) RTE_ASSERT(x)
+
+/* This is the interface from the platform-agnostic driver code to (de)register
+ * interrupt handlers. We simply create/destroy corresponding structs.
+ */
+int qbman_request_irq(int irq, irqreturn_t (*isr)(int irq, void *arg),
+		      unsigned long flags, const char *name, void *arg);
+int qbman_free_irq(int irq, void *arg);
+
+void qbman_invoke_irq(int irq);
+
+#endif /* __DPAA_SYS_H */
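
[Editorial note] A hedged sketch of the registration/dispatch flow
declared above. The IRQ number, handler and name are illustrative, and
irqreturn_t/IRQ_HANDLED are assumed to come from the bus compat layer:

    #include "dpaa_sys.h"

    static irqreturn_t demo_isr(int irq, void *arg)
    {
            /* process the (emulated) interrupt for this portal */
            (void)irq;
            (void)arg;
            return IRQ_HANDLED;
    }

    static void irq_emulation_demo(void)
    {
            /* Platform-agnostic side: register the handler */
            qbman_request_irq(42, demo_isr, 0, "demo", NULL);

            /* Platform-specific side: dispatch to it */
            qbman_invoke_irq(42);

            qbman_free_irq(42, NULL);
    }
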
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 09/40] bus/dpaa: add routines for managing a RB tree
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (7 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
                             ` (32 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

QMAN frames are managed over an RB tree data structure.
This patch introduces the routines necessary for implementing an RB tree.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/include/dpaa_rbtree.h | 143 +++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 drivers/bus/dpaa/include/dpaa_rbtree.h

diff --git a/drivers/bus/dpaa/include/dpaa_rbtree.h b/drivers/bus/dpaa/include/dpaa_rbtree.h
new file mode 100644
index 0000000..f8c9b59
--- /dev/null
+++ b/drivers/bus/dpaa/include/dpaa_rbtree.h
@@ -0,0 +1,143 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_RBTREE_H
+#define __DPAA_RBTREE_H
+
+#include <rte_common.h>
+/************/
+/* RB-trees */
+/************/
+
+/* Linux has a good RB-tree implementation that we can't use (GPL). It also has
+ * a flat/hooked-in interface that virtually requires license-contamination in
+ * order to write a caller-compatible implementation. Instead, I've created an
+ * RB-tree encapsulation on top of linux's primitives (it does some of the work
+ * the client logic would normally do), and this gives us something we can
+ * reimplement on LWE. Unfortunately there are no good, free RB-tree
+ * implementations out there that are license-compatible and "flat" (i.e. no
+ * dynamic allocation). I did find a malloc-based one that I could convert, but
+ * that will be a task for later on. For now, LWE's RB-tree is implemented using
+ * an ordered linked-list.
+ *
+ * Note, the only linux-esque type is "struct rb_node", because it's used
+ * statically in the exported header, so it can't be opaque. Our version doesn't
+ * include a "rb_parent_color" field because we're doing linked-list instead of
+ * a true rb-tree.
+ */
+
+struct rb_node {
+	struct rb_node *prev, *next;
+};
+
+struct dpa_rbtree {
+	struct rb_node *head, *tail;
+};
+
+#define DPAA_RBTREE { NULL, NULL }
+static inline void dpa_rbtree_init(struct dpa_rbtree *tree)
+{
+	tree->head = tree->tail = NULL;
+}
+
+#define QMAN_NODE2OBJ(ptr, type, node_field) \
+	(type *)((char *)ptr - offsetof(type, node_field))
+
+#define IMPLEMENT_DPAA_RBTREE(name, type, node_field, val_field) \
+static inline int name##_push(struct dpa_rbtree *tree, type *obj) \
+{ \
+	struct rb_node *node = tree->head; \
+	if (!node) { \
+		tree->head = tree->tail = &obj->node_field; \
+		obj->node_field.prev = obj->node_field.next = NULL; \
+		return 0; \
+	} \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (obj->val_field == item->val_field) \
+			return -EBUSY; \
+		if (obj->val_field < item->val_field) { \
+			if (tree->head == node) \
+				tree->head = &obj->node_field; \
+			else \
+				node->prev->next = &obj->node_field; \
+			obj->node_field.prev = node->prev; \
+			obj->node_field.next = node; \
+			node->prev = &obj->node_field; \
+			return 0; \
+		} \
+		node = node->next; \
+	} \
+	obj->node_field.prev = tree->tail; \
+	obj->node_field.next = NULL; \
+	tree->tail->next = &obj->node_field; \
+	tree->tail = &obj->node_field; \
+	return 0; \
+} \
+static inline void name##_del(struct dpa_rbtree *tree, type *obj) \
+{ \
+	if (tree->head == &obj->node_field) { \
+		if (tree->tail == &obj->node_field) \
+			/* Only item in the list */ \
+			tree->head = tree->tail = NULL; \
+		else { \
+			/* Is the head, next != NULL */ \
+			tree->head = tree->head->next; \
+			tree->head->prev = NULL; \
+		} \
+	} else { \
+		if (tree->tail == &obj->node_field) { \
+			/* Is the tail, prev != NULL */ \
+			tree->tail = tree->tail->prev; \
+			tree->tail->next = NULL; \
+		} else { \
+			/* Is neither the head nor the tail */ \
+			obj->node_field.prev->next = obj->node_field.next; \
+			obj->node_field.next->prev = obj->node_field.prev; \
+		} \
+	} \
+} \
+static inline type *name##_find(struct dpa_rbtree *tree, u32 val) \
+{ \
+	struct rb_node *node = tree->head; \
+	while (node) { \
+		type *item = QMAN_NODE2OBJ(node, type, node_field); \
+		if (val == item->val_field) \
+			return item; \
+		if (val < item->val_field) \
+			return NULL; \
+		node = node->next; \
+	} \
+	return NULL; \
+}
+
+#endif /* __DPAA_RBTREE_H */
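
[Editorial note] For illustration, instantiating the macro above for a
hypothetical object keyed by a 32-bit ID generates type-safe
push/del/find helpers. The type and names are illustrative, and u32 is
assumed to come from the bus compat layer:

    #include <dpaa_rbtree.h>

    struct demo_fq {
            u32 fqid;            /* ordering key */
            struct rb_node node; /* tree linkage */
    };

    /* Generates demofq_push(), demofq_del() and demofq_find() */
    IMPLEMENT_DPAA_RBTREE(demofq, struct demo_fq, node, fqid);

    static struct dpa_rbtree fq_tree = DPAA_RBTREE;

    static struct demo_fq *demo_lookup(u32 fqid)
    {
            return demofq_find(&fq_tree, fqid);
    }
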
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 10/40] bus/dpaa: add QMAN interface driver
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (8 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
                             ` (31 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Queue Manager (QMan) is a hardware queue management block that
allows software and accelerators on the datapath to enqueue and dequeue
frames in order to communicate.

This is part of the QBMAN DPAA block.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    1 +
 drivers/bus/dpaa/base/qbman/qman_driver.c |  271 +++++++
 drivers/bus/dpaa/base/qbman/qman_priv.h   |  303 +++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 1254 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   13 +
 5 files changed, 1842 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_qman.h
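
[Editorial note] fsl_qman_portal_init() in the diff below refuses threads
that are affine to more than one CPU. A hedged sketch (standard
Linux/pthread calls; the helper name is illustrative) of satisfying that
precondition before per-thread portal setup:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* Pin the calling thread to a single CPU so that the per-thread
     * QMan portal initialisation accepts it.
     */
    static int pin_self_to_cpu(int cpu)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            return pthread_setaffinity_np(pthread_self(),
                                          sizeof(set), &set);
    }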

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 5b76a4b..c9c15f8 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -63,6 +63,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman_driver.c \
 	base/qbman/dpaa_sys.c
 
 # Link Pthread
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
new file mode 100644
index 0000000..80dde20
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -0,0 +1,271 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "qman_priv.h"
+#include <sys/ioctl.h>
+#include <rte_branch_prediction.h>
+
+/* Global variable containing revision id (even on non-control plane systems
+ * where CCSR isn't available).
+ */
+u16 qman_ip_rev;
+u16 qm_channel_pool1 = QMAN_CHANNEL_POOL1;
+u16 qm_channel_caam = QMAN_CHANNEL_CAAM;
+u16 qm_channel_pme = QMAN_CHANNEL_PME;
+
+/* CCSR map address for access to CCSR-based registers */
+void *qman_ccsr_map;
+/* The qman clock frequency */
+u32 qman_clk;
+
+static __thread int fd = -1;
+static __thread struct qm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_qman
+};
+
+static int fsl_qman_portal_init(uint32_t index, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu\n");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!\n");
+		return -EINVAL;
+	}
+
+	/* Allocate and map a qman portal */
+	map.index = index;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	pcfg.channel = map.channel;
+	pcfg.pools = map.pools;
+	pcfg.index = map.index;
+
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+
+	fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("QMan irq init failed\n");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
+	pcfg.is_shared = is_shared;
+	pcfg.node = NULL;
+	pcfg.irq = fd;
+
+	irq_map.type = dpaa_portal_qman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_qman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int qman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_qman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int qman_thread_finish(void)
+{
+	return fsl_qman_portal_finish();
+}
+
+void qman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int qman_global_init(void)
+{
+	const struct device_node *dt_node;
+	int ret = 0;
+	size_t lenp;
+	const u32 *chanid;
+	static int ccsr_map_fd;
+	const uint32_t *qman_addr;
+	uint64_t phys_addr;
+	uint64_t regs_size;
+	const u32 *clk;
+
+	static int done;
+
+	if (done)
+		return -EBUSY;
+
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman-portal");
+	if (!dt_node) {
+		pr_err("No qman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,qman-portal-1.0.0"))
+		pr_err("QMan rev1.0 on P4080 rev1 is not supported!\n");
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-1.1") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.1.0"))
+		qman_ip_rev = QMAN_REV11;
+	else if	(of_device_is_compatible(dt_node, "fsl,qman-portal-1.2") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-1.2.0"))
+		qman_ip_rev = QMAN_REV12;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-2.0.0"))
+		qman_ip_rev = QMAN_REV20;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.0.1"))
+		qman_ip_rev = QMAN_REV30;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,qman-portal-3.1.3"))
+		qman_ip_rev = QMAN_REV31;
+	else if (of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.0") ||
+		 of_device_is_compatible(dt_node, "fsl,qman-portal-3.2.1"))
+		qman_ip_rev = QMAN_REV32;
+	else
+		qman_ip_rev = QMAN_REV11;
+
+	if (!qman_ip_rev) {
+		pr_err("Unknown qman portal version\n");
+		return -ENODEV;
+	}
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30) {
+		qm_channel_pool1 = QMAN_CHANNEL_POOL1_REV3;
+		qm_channel_caam = QMAN_CHANNEL_CAAM_REV3;
+		qm_channel_pme = QMAN_CHANNEL_PME_REV3;
+	}
+
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,pool-channel-range");
+	if (!dt_node) {
+		pr_err("No qman pool channel range available\n");
+		return -ENODEV;
+	}
+	chanid = of_get_property(dt_node, "fsl,pool-channel-range", &lenp);
+	if (!chanid) {
+		pr_err("Can not get pool-channel-range property\n");
+		return -EINVAL;
+	}
+
+	/* get ccsr base */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,qman");
+	if (!dt_node) {
+		pr_err("No qman device node available\n");
+		return -ENODEV;
+	}
+	qman_addr = of_get_address(dt_node, 0, &regs_size, NULL);
+	if (!qman_addr) {
+		pr_err("of_get_address cannot return qman address\n");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(dt_node, qman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed\n");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Can not open /dev/mem for qman ccsr map\n");
+		return ccsr_map_fd;
+	}
+
+	qman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+			     MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (qman_ccsr_map == MAP_FAILED) {
+		pr_err("Can not map qman ccsr base\n");
+		return -EINVAL;
+	}
+
+	clk = of_get_property(dt_node, "clock-frequency", NULL);
+	if (!clk)
+		pr_warn("Can't find Qman clock frequency\n");
+	else
+		qman_clk = be32_to_cpu(*clk);
+	done = 1;
+	return ret;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
new file mode 100644
index 0000000..4a11e40
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -0,0 +1,303 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __QMAN_PRIV_H
+#define __QMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_qman.h>
+
+/* Congestion Groups */
+/*
+ * This wrapper represents a bit-array for the state of the 256 QMan congestion
+ * groups. It is also used as a *mask* for congestion groups, eg. so we ignore
+ * those that don't concern us. We harness the structure and accessor details
+ * already used in the management command to query congestion groups.
+ */
+struct qman_cgrs {
+	struct __qm_mcr_querycongestion q;
+};
+
+static inline void qman_cgrs_init(struct qman_cgrs *c)
+{
+	memset(c, 0, sizeof(*c));
+}
+
+static inline void qman_cgrs_fill(struct qman_cgrs *c)
+{
+	memset(c, 0xff, sizeof(*c));
+}
+
+static inline int qman_cgrs_get(struct qman_cgrs *c, int num)
+{
+	return QM_MCR_QUERYCONGESTION(&c->q, num);
+}
+
+static inline void qman_cgrs_set(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] |= (0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline void qman_cgrs_unset(struct qman_cgrs *c, int num)
+{
+	c->q.state[__CGR_WORD(num)] &= ~(0x80000000 >> __CGR_SHIFT(num));
+}
+
+static inline int qman_cgrs_next(struct qman_cgrs *c, int num)
+{
+	while ((++num < (int)__CGR_NUM) && !qman_cgrs_get(c, num))
+		;
+	return num;
+}
+
+static inline void qman_cgrs_cp(struct qman_cgrs *dest,
+				const struct qman_cgrs *src)
+{
+	memcpy(dest, src, sizeof(*dest));
+}
+
+static inline void qman_cgrs_and(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) & *(_b++);
+}
+
+static inline void qman_cgrs_xor(struct qman_cgrs *dest,
+				 const struct qman_cgrs *a,
+				 const struct qman_cgrs *b)
+{
+	int ret;
+	u32 *_d = dest->q.state;
+	const u32 *_a = a->q.state;
+	const u32 *_b = b->q.state;
+
+	for (ret = 0; ret < 8; ret++)
+		*(_d++) = *(_a++) ^ *(_b++);
+}
+
+/* used by CCSR and portal interrupt code */
+enum qm_isr_reg {
+	qm_isr_status = 0,
+	qm_isr_enable = 1,
+	qm_isr_disable = 2,
+	qm_isr_inhibit = 3
+};
+
+struct qm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	struct device_node *node;
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* If the caller enables DQRR stashing (and thus wishes to operate the
+	 * portal from only one cpu), this is the logical CPU that the portal
+	 * will stash to. Whether stashing is enabled or not, this setting is
+	 * also used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.)
+	 */
+	int is_shared;
+	/* The portal's dedicated channel id, use this value for initialising
+	 * frame queues to target this portal when scheduled.
+	 */
+	u16 channel;
+	/* A mask of which pool channels this portal has dequeue access to
+	 * (using QM_SDQCR_CHANNELS_POOL(n) for the bitmask).
+	 */
+	u32 pools;
+
+};
+
+/* Revision info (for errata and feature handling) */
+#define QMAN_REV11 0x0101
+#define QMAN_REV12 0x0102
+#define QMAN_REV20 0x0200
+#define QMAN_REV30 0x0300
+#define QMAN_REV31 0x0301
+#define QMAN_REV32 0x0302
+extern u16 qman_ip_rev; /* 0 if uninitialised, otherwise QMAN_REVx */
+extern u32 qman_clk;
+
+int qm_set_wpm(int wpm);
+int qm_get_wpm(int *wpm);
+
+struct qman_portal *qman_create_affine_portal(
+			const struct qm_portal_config *config,
+			const struct qman_cgrs *cgrs);
+const struct qm_portal_config *qman_destroy_affine_portal(void);
+
+struct qm_portal_config *qm_get_unused_portal(void);
+struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
+
+void qm_put_unused_portal(struct qm_portal_config *pcfg);
+void qm_set_liodns(struct qm_portal_config *pcfg);
+
+/* This CGR feature is supported by h/w and required by unit-tests and the
+ * debugfs hooks, so is implemented in the driver. However it allows an explicit
+ * corruption of h/w fields by s/w that are usually incorruptible (because the
+ * counters are usually maintained entirely within h/w). As such, we declare
+ * this API internally.
+ */
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result);
+
+/*   QMan s/w corenet portal, low-level i/face	 */
+
+/*
+ * Choose one SOURCE. Choose one COUNT. Choose one
+ * dequeue TYPE. Choose TOKEN (8-bit).
+ * If SOURCE == CHANNELS,
+ *   Choose CHANNELS_DEDICATED and/or CHANNELS_POOL(n).
+ *   You can choose DEDICATED_PRECEDENCE if the portal channel should have
+ *   priority.
+ * If SOURCE == SPECIFICWQ,
+ *     Either select the work-queue ID with SPECIFICWQ_WQ(), or select the
+ *     channel (SPECIFICWQ_DEDICATED or SPECIFICWQ_POOL()) and specify the
+ *     work-queue priority (0-7) with SPECIFICWQ_WQ() - either way, you get the
+ *     same value.
+ */
+#define QM_SDQCR_SOURCE_CHANNELS	0x0
+#define QM_SDQCR_SOURCE_SPECIFICWQ	0x40000000
+#define QM_SDQCR_COUNT_EXACT1		0x0
+#define QM_SDQCR_COUNT_UPTO3		0x20000000
+#define QM_SDQCR_DEDICATED_PRECEDENCE	0x10000000
+#define QM_SDQCR_TYPE_MASK		0x03000000
+#define QM_SDQCR_TYPE_NULL		0x0
+#define QM_SDQCR_TYPE_PRIO_QOS		0x01000000
+#define QM_SDQCR_TYPE_ACTIVE_QOS	0x02000000
+#define QM_SDQCR_TYPE_ACTIVE		0x03000000
+#define QM_SDQCR_TOKEN_MASK		0x00ff0000
+#define QM_SDQCR_TOKEN_SET(v)		(((v) & 0xff) << 16)
+#define QM_SDQCR_TOKEN_GET(v)		(((v) >> 16) & 0xff)
+#define QM_SDQCR_CHANNELS_DEDICATED	0x00008000
+#define QM_SDQCR_SPECIFICWQ_MASK	0x000000f7
+#define QM_SDQCR_SPECIFICWQ_DEDICATED	0x00000000
+#define QM_SDQCR_SPECIFICWQ_POOL(n)	((n) << 4)
+#define QM_SDQCR_SPECIFICWQ_WQ(n)	(n)
+
+#define QM_VDQCR_FQID_MASK		0x00ffffff
+#define QM_VDQCR_FQID(n)		((n) & QM_VDQCR_FQID_MASK)
+
+#define QM_EQCR_VERB_VBIT		0x80
+#define QM_EQCR_VERB_CMD_MASK		0x61	/* but only one value; */
+#define QM_EQCR_VERB_CMD_ENQUEUE	0x01
+#define QM_EQCR_VERB_COLOUR_MASK	0x18	/* 4 possible values; */
+#define QM_EQCR_VERB_COLOUR_GREEN	0x00
+#define QM_EQCR_VERB_COLOUR_YELLOW	0x08
+#define QM_EQCR_VERB_COLOUR_RED		0x10
+#define QM_EQCR_VERB_COLOUR_OVERRIDE	0x18
+#define QM_EQCR_VERB_INTERRUPT		0x04	/* on command consumption */
+#define QM_EQCR_VERB_ORP		0x02	/* enable order restoration */
+#define QM_EQCR_DCA_ENABLE		0x80
+#define QM_EQCR_DCA_PARK		0x40
+#define QM_EQCR_DCA_IDXMASK		0x0f	/* "DQRR::idx" goes here */
+#define QM_EQCR_SEQNUM_NESN		0x8000	/* Advance NESN */
+#define QM_EQCR_SEQNUM_NLIS		0x4000	/* More fragments to come */
+#define QM_EQCR_SEQNUM_SEQMASK		0x3fff	/* sequence number goes here */
+#define QM_EQCR_FQID_NULL		0	/* eg. for an ORP seqnum hole */
+
+#define QM_MCC_VERB_VBIT		0x80
+#define QM_MCC_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_MCC_VERB_INITFQ_PARKED	0x40
+#define QM_MCC_VERB_INITFQ_SCHED	0x41
+#define QM_MCC_VERB_QUERYFQ		0x44
+#define QM_MCC_VERB_QUERYFQ_NP		0x45	/* "non-programmable" fields */
+#define QM_MCC_VERB_QUERYWQ		0x46
+#define QM_MCC_VERB_QUERYWQ_DEDICATED	0x47
+#define QM_MCC_VERB_ALTER_SCHED		0x48	/* Schedule FQ */
+#define QM_MCC_VERB_ALTER_FE		0x49	/* Force Eligible FQ */
+#define QM_MCC_VERB_ALTER_RETIRE	0x4a	/* Retire FQ */
+#define QM_MCC_VERB_ALTER_OOS		0x4b	/* Take FQ out of service */
+#define QM_MCC_VERB_ALTER_FQXON		0x4d	/* FQ XON */
+#define QM_MCC_VERB_ALTER_FQXOFF	0x4e	/* FQ XOFF */
+#define QM_MCC_VERB_INITCGR		0x50
+#define QM_MCC_VERB_MODIFYCGR		0x51
+#define QM_MCC_VERB_CGRTESTWRITE	0x52
+#define QM_MCC_VERB_QUERYCGR		0x58
+#define QM_MCC_VERB_QUERYCONGESTION	0x59
+
+/*
+ * Used by all portal interrupt registers except 'inhibit'
+ * Channels with frame availability
+ */
+#define QM_PIRQ_DQAVAIL	0x0000ffff
+
+/* The DQAVAIL interrupt fields break down into these bits; */
+#define QM_DQAVAIL_PORTAL	0x8000		/* Portal channel */
+#define QM_DQAVAIL_POOL(n)	(0x8000 >> (n))	/* Pool channel, n==[1..15] */
+#define QM_DQAVAIL_MASK		0xffff
+/* This mask contains all the "irqsource" bits visible to API users */
+#define QM_PIRQ_VISIBLE	(QM_PIRQ_SLOW | QM_PIRQ_DQRI)
+
+/* These are qm_<reg>_<verb>(). So for example, qm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define qm_isr_status_read(qm)		__qm_isr_read(qm, qm_isr_status)
+#define qm_isr_status_clear(qm, m)	__qm_isr_write(qm, qm_isr_status, m)
+#define qm_isr_enable_read(qm)		__qm_isr_read(qm, qm_isr_enable)
+#define qm_isr_enable_write(qm, v)	__qm_isr_write(qm, qm_isr_enable, v)
+#define qm_isr_disable_read(qm)		__qm_isr_read(qm, qm_isr_disable)
+#define qm_isr_disable_write(qm, v)	__qm_isr_write(qm, qm_isr_disable, v)
+/* TODO: unfortunate name-clash here, reword? */
+#define qm_isr_inhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 1)
+#define qm_isr_uninhibit(qm)		__qm_isr_write(qm, qm_isr_inhibit, 0)
+
+#define QMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+
+#endif /* __QMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
new file mode 100644
index 0000000..784fe60
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -0,0 +1,1254 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_QMAN_H
+#define __FSL_QMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <dpaa_rbtree.h>
+
+/* Last updated for v00.800 of the BG */
+
+/* Hardware constants */
+#define QM_CHANNEL_SWPORTAL0 0
+#define QMAN_CHANNEL_POOL1 0x21
+#define QMAN_CHANNEL_CAAM 0x80
+#define QMAN_CHANNEL_PME 0xa0
+#define QMAN_CHANNEL_POOL1_REV3 0x401
+#define QMAN_CHANNEL_CAAM_REV3 0x840
+#define QMAN_CHANNEL_PME_REV3 0x860
+extern u16 qm_channel_pool1;
+extern u16 qm_channel_caam;
+extern u16 qm_channel_pme;
+enum qm_dc_portal {
+	qm_dc_portal_fman0 = 0,
+	qm_dc_portal_fman1 = 1,
+	qm_dc_portal_caam = 2,
+	qm_dc_portal_pme = 3
+};
+
+/* Portal processing (interrupt) sources */
+#define QM_PIRQ_CCSCI	0x00200000	/* CEETM Congestion State Change */
+#define QM_PIRQ_CSCI	0x00100000	/* Congestion State Change */
+#define QM_PIRQ_EQCI	0x00080000	/* Enqueue Command Committed */
+#define QM_PIRQ_EQRI	0x00040000	/* EQCR Ring (below threshold) */
+#define QM_PIRQ_DQRI	0x00020000	/* DQRR Ring (non-empty) */
+#define QM_PIRQ_MRI	0x00010000	/* MR Ring (non-empty) */
+/*
+ * This mask contains all the interrupt sources that need handling except DQRI,
+ * ie. that if present should trigger slow-path processing.
+ */
+#define QM_PIRQ_SLOW	(QM_PIRQ_CSCI | QM_PIRQ_EQCI | QM_PIRQ_EQRI | \
+			QM_PIRQ_MRI | QM_PIRQ_CCSCI)
+
+/* For qman_static_dequeue_*** APIs */
+#define QM_SDQCR_CHANNELS_POOL_MASK	0x00007fff
+/* for n in [1,15] */
+#define QM_SDQCR_CHANNELS_POOL(n)	(0x00008000 >> (n))
+/* for conversion from n of qm_channel */
+static inline u32 QM_SDQCR_CHANNELS_POOL_CONV(u16 channel)
+{
+	return QM_SDQCR_CHANNELS_POOL(channel + 1 - qm_channel_pool1);
+}
+
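+/*
+ * Example: for channel == qm_channel_pool1, this reduces to
+ * QM_SDQCR_CHANNELS_POOL(1), ie. 0x00004000.
+ */
+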
+/* For qman_volatile_dequeue(); Choose one PRECEDENCE. EXACT is optional. Use
+ * NUMFRAMES(n) (6-bit) or NUMFRAMES_TILLEMPTY to fill in the frame-count. Use
+ * FQID(n) to fill in the frame queue ID.
+ */
+#define QM_VDQCR_PRECEDENCE_VDQCR	0x0
+#define QM_VDQCR_PRECEDENCE_SDQCR	0x80000000
+#define QM_VDQCR_EXACT			0x40000000
+#define QM_VDQCR_NUMFRAMES_MASK		0x3f000000
+#define QM_VDQCR_NUMFRAMES_SET(n)	(((n) & 0x3f) << 24)
+#define QM_VDQCR_NUMFRAMES_GET(n)	(((n) >> 24) & 0x3f)
+#define QM_VDQCR_NUMFRAMES_TILLEMPTY	QM_VDQCR_NUMFRAMES_SET(0)
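+/*
+ * Example (hypothetical fqid): a volatile dequeue of up to 3 frames
+ * from one FQ (EXACT is optional, per the note above):
+ *   vdqcr = QM_VDQCR_PRECEDENCE_SDQCR | QM_VDQCR_EXACT |
+ *	     QM_VDQCR_NUMFRAMES_SET(3) | QM_VDQCR_FQID(fqid);
+ */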
+
+/* --- QMan data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct qm_eqcr_entry;	/* EQCR (EnQueue Command Ring) entries */
+struct qm_dqrr_entry;	/* DQRR (DeQueue Response Ring) entries */
+struct qm_mr_entry;	/* MR (Message Ring) entries */
+struct qm_mc_command;	/* MC (Management Command) command */
+struct qm_mc_result;	/* MC result */
+
+#define QM_FD_FORMAT_SG		0x4
+#define QM_FD_FORMAT_LONG	0x2
+#define QM_FD_FORMAT_COMPOUND	0x1
+enum qm_fd_format {
+	/*
+	 * 'contig' implies a contiguous buffer, whereas 'sg' implies a
+	 * scatter-gather table. 'big' implies a 29-bit length with no offset
+	 * field, otherwise length is 20-bit and offset is 9-bit. 'compound'
+	 * implies a s/g-like table, where each entry itself represents a frame
+	 * (contiguous or scatter-gather) and the 29-bit "length" is
+	 * interpreted purely for congestion calculations, ie. a "congestion
+	 * weight".
+	 */
+	qm_fd_contig = 0,
+	qm_fd_contig_big = QM_FD_FORMAT_LONG,
+	qm_fd_sg = QM_FD_FORMAT_SG,
+	qm_fd_sg_big = QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG,
+	qm_fd_compound = QM_FD_FORMAT_COMPOUND
+};
+
+/* Capitalised versions are un-typed but can be used in static expressions */
+#define QM_FD_CONTIG	0
+#define QM_FD_CONTIG_BIG QM_FD_FORMAT_LONG
+#define QM_FD_SG	QM_FD_FORMAT_SG
+#define QM_FD_SG_BIG	(QM_FD_FORMAT_SG | QM_FD_FORMAT_LONG)
+#define QM_FD_COMPOUND	QM_FD_FORMAT_COMPOUND
+
+/* "Frame Descriptor (FD)" */
+struct qm_fd {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 dd:2;	/* dynamic debug */
+			u8 liodn_offset:6;
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 eliodn_offset:4;
+			u8 __reserved:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u8 liodn_offset:6;
+			u8 dd:2;	/* dynamic debug */
+			u8 bpid:8;	/* Buffer Pool ID */
+			u8 __reserved:4;
+			u8 eliodn_offset:4;
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#endif
+		};
+		struct {
+			u64 __notaddress:24;
+			/* More efficient address accessor */
+			u64 addr:40;
+		};
+		u64 opaque_addr;
+	};
+	/* The 'format' field indicates the interpretation of the remaining 29
+	 * bits of the 32-bit word. For packing reasons, it is duplicated in the
+	 * other union elements. Note, union'd structs are difficult to use with
+	 * static initialisation under gcc, in which case use the "opaque" form
+	 * with one of the macros.
+	 */
+	union {
+		/* For easier/faster copying of this part of the fd (eg. from a
+		 * DQRR entry to an EQCR entry) copy 'opaque'
+		 */
+		u32 opaque;
+		/* If 'format' is _contig or _sg, 20b length and 9b offset */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format format:3;
+			u16 offset:9;
+			u32 length20:20;
+#else
+			u32 length20:20;
+			u16 offset:9;
+			enum qm_fd_format format:3;
+#endif
+		};
+		/* If 'format' is _contig_big or _sg_big, 29b length */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format1:3;
+			u32 length29:29;
+#else
+			u32 length29:29;
+			enum qm_fd_format _format1:3;
+#endif
+		};
+		/* If 'format' is _compound, 29b "congestion weight" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			enum qm_fd_format _format2:3;
+			u32 cong_weight:29;
+#else
+			u32 cong_weight:29;
+			enum qm_fd_format _format2:3;
+#endif
+		};
+	};
+	union {
+		u32 cmd;
+		u32 status;
+	};
+} __attribute__((aligned(8)));
+#define QM_FD_DD_NULL		0x00
+#define QM_FD_PID_MASK		0x3f
+static inline u64 qm_fd_addr_get64(const struct qm_fd *fd)
+{
+	return fd->addr;
+}
+
+static inline dma_addr_t qm_fd_addr(const struct qm_fd *fd)
+{
+	return (dma_addr_t)fd->addr;
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_fd_addr_set64(fd, v) \
+	do { \
+		struct qm_fd *__fd931 = (fd); \
+		__fd931->addr = v; \
+	} while (0)
+
+/* Scatter/Gather table entry */
+struct qm_sg_entry {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1[3];
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+#else
+			u32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[3];
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:24;
+			u64 addr:40;
+#else
+			u64 addr:40;
+			u64 __notaddress:24;
+#endif
+		};
+		u64 opaque;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 extension:1;	/* Extension bit */
+			u32 final:1;		/* Final bit */
+			u32 length:30;
+#else
+			u32 length:30;
+			u32 final:1;		/* Final bit */
+			u32 extension:1;	/* Extension bit */
+#endif
+		};
+		u32 val;
+	};
+	u8 __reserved2;
+	u8 bpid;
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved3:3;
+			u16 offset:13;
+#else
+			u16 offset:13;
+			u16 __reserved3:3;
+#endif
+		};
+		u16 val_off;
+	};
+} __packed;
+static inline u64 qm_sg_entry_get64(const struct qm_sg_entry *sg)
+{
+	return sg->addr;
+}
+
+static inline dma_addr_t qm_sg_addr(const struct qm_sg_entry *sg)
+{
+	return (dma_addr_t)sg->addr;
+}
+
+/* Macro, so we compile better if 'v' isn't always 64-bit */
+#define qm_sg_entry_set64(sg, v) \
+	do { \
+		struct qm_sg_entry *__sg931 = (sg); \
+		__sg931->addr = v; \
+	} while (0)
+
+/* See 1.5.8.1: "Enqueue Command" */
+struct qm_eqcr_entry {
+	u8 __dont_write_directly__verb;
+	u8 dca;
+	u16 seqnum;
+	u32 orp;	/* 24-bit */
+	u32 fqid;	/* 24-bit */
+	u32 tag;
+	struct qm_fd fd;
+	u8 __reserved3[32];
+} __packed;
+
+
+/* "Frame Dequeue Response" */
+struct qm_dqrr_entry {
+	u8 verb;
+	u8 stat;
+	u16 seqnum;	/* 15-bit */
+	u8 tok;
+	u8 __reserved2[3];
+	u32 fqid;	/* 24-bit */
+	u32 contextB;
+	struct qm_fd fd;
+	u8 __reserved4[32];
+};
+
+#define QM_DQRR_VERB_VBIT		0x80
+#define QM_DQRR_VERB_MASK		0x7f	/* where the verb contains; */
+#define QM_DQRR_VERB_FRAME_DEQUEUE	0x60	/* "this format" */
+#define QM_DQRR_STAT_FQ_EMPTY		0x80	/* FQ empty */
+#define QM_DQRR_STAT_FQ_HELDACTIVE	0x40	/* FQ held active */
+#define QM_DQRR_STAT_FQ_FORCEELIGIBLE	0x20	/* FQ was force-eligible'd */
+#define QM_DQRR_STAT_FD_VALID		0x10	/* has a non-NULL FD */
+#define QM_DQRR_STAT_UNSCHEDULED	0x02	/* Unscheduled dequeue */
+#define QM_DQRR_STAT_DQCR_EXPIRED	0x01	/* VDQCR or PDQCR expired*/
+
+
+/* "ERN Message Response" */
+/* "FQ State Change Notification" */
+struct qm_mr_entry {
+	u8 verb;
+	union {
+		struct {
+			u8 dca;
+			u16 seqnum;
+			u8 rc;		/* Rejection Code */
+			u32 orp:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed ern;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+			u8 __reserved1:4;
+			enum qm_dc_portal portal:2;
+#else
+			enum qm_dc_portal portal:2;
+			u8 __reserved1:4;
+			u8 colour:2;	/* See QM_MR_DCERN_COLOUR_* */
+#endif
+			u16 __reserved2;
+			u8 rc;		/* Rejection Code */
+			u32 __reserved3:24;
+			u32 fqid;	/* 24-bit */
+			u32 tag;
+			struct qm_fd fd;
+		} __packed dcern;
+		struct {
+			u8 fqs;		/* Frame Queue Status */
+			u8 __reserved1[6];
+			u32 fqid;	/* 24-bit */
+			u32 contextB;
+			u8 __reserved2[16];
+		} __packed fq;		/* FQRN/FQRNI/FQRL/FQPN */
+	};
+	u8 __reserved2[32];
+} __packed;
+#define QM_MR_VERB_VBIT			0x80
+/*
+ * ERNs originating from direct-connect portals ("dcern") use 0x20 as a verb
+ * which would be invalid as a s/w enqueue verb. A s/w ERN can be distinguished
+ * from the other MR types by noting if the 0x20 bit is unset.
+ */
+#define QM_MR_VERB_TYPE_MASK		0x27
+#define QM_MR_VERB_DC_ERN		0x20
+#define QM_MR_VERB_FQRN			0x21
+#define QM_MR_VERB_FQRNI		0x22
+#define QM_MR_VERB_FQRL			0x23
+#define QM_MR_VERB_FQPN			0x24
+#define QM_MR_RC_MASK			0xf0	/* contains one of; */
+#define QM_MR_RC_CGR_TAILDROP		0x00
+#define QM_MR_RC_WRED			0x10
+#define QM_MR_RC_ERROR			0x20
+#define QM_MR_RC_ORPWINDOW_EARLY	0x30
+#define QM_MR_RC_ORPWINDOW_LATE		0x40
+#define QM_MR_RC_FQ_TAILDROP		0x50
+#define QM_MR_RC_ORPWINDOW_RETIRED	0x60
+#define QM_MR_RC_ORP_ZERO		0x70
+#define QM_MR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+#define QM_MR_DCERN_COLOUR_GREEN	0x00
+#define QM_MR_DCERN_COLOUR_YELLOW	0x01
+#define QM_MR_DCERN_COLOUR_RED		0x02
+#define QM_MR_DCERN_COLOUR_OVERRIDE	0x03
+/*
+ * An identical structure of FQD fields is present in the "Init FQ" command and
+ * the "Query FQ" result, so it's suctioned out into the "struct qm_fqd" type.
+ * Within that, the 'stashing' and 'taildrop' pieces are also factored out; the
+ * latter has two inlines to assist with converting to/from the mant+exp
+ * representation.
+ */
+struct qm_fqd_stashing {
+	/* See QM_STASHING_EXCL_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 exclusive;
+	u8 __reserved1:2;
+	/* Numbers of cachelines */
+	u8 annotation_cl:2;
+	u8 data_cl:2;
+	u8 context_cl:2;
+#else
+	u8 context_cl:2;
+	u8 data_cl:2;
+	u8 annotation_cl:2;
+	u8 __reserved1:2;
+	u8 exclusive;
+#endif
+} __packed;
+struct qm_fqd_taildrop {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved1:3;
+	u16 mant:8;
+	u16 exp:5;
+#else
+	u16 exp:5;
+	u16 mant:8;
+	u16 __reserved1:3;
+#endif
+} __packed;
+struct qm_fqd_oac {
+	/* "Overhead Accounting Control", see QM_OAC_<...> */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 oac:2; /* "Overhead Accounting Control" */
+	u8 __reserved1:6;
+#else
+	u8 __reserved1:6;
+	u8 oac:2; /* "Overhead Accounting Control" */
+#endif
+	/* Two's-complement value (-128 to +127) */
+	signed char oal; /* "Overhead Accounting Length" */
+} __packed;
+struct qm_fqd {
+	union {
+		u8 orpc;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1:2;
+			u8 orprws:3;
+			u8 oa:1;
+			u8 olws:2;
+#else
+			u8 olws:2;
+			u8 oa:1;
+			u8 orprws:3;
+			u8 __reserved1:2;
+#endif
+		} __packed;
+	};
+	u8 cgid;
+	u16 fq_ctrl;	/* See QM_FQCTRL_<...> */
+	union {
+		u16 dest_wq;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 channel:13; /* qm_channel */
+			u16 wq:3;
+#else
+			u16 wq:3;
+			u16 channel:13; /* qm_channel */
+#endif
+		} __packed dest;
+	};
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u16 __reserved2:1;
+	u16 ics_cred:15;
+#else
+	u16 __reserved2:1;
+	u16 ics_cred:15;
+#endif
+	/*
+	 * For "Initialize Frame Queue" commands, the write-enable mask
+	 * determines whether 'td' or 'oac_init' is observed. For query
+	 * commands, this field is always 'td', and 'oac_query' (below) reflects
+	 * the Overhead ACcounting values.
+	 */
+	union {
+		uint16_t opaque_td;
+		struct qm_fqd_taildrop td;
+		struct qm_fqd_oac oac_init;
+	};
+	u32 context_b;
+	union {
+		/* Treat it as 64-bit opaque */
+		u64 opaque;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 hi;
+			u32 lo;
+#else
+			u32 lo;
+			u32 hi;
+#endif
+		};
+		/* Treat it as s/w portal stashing config */
+		/* see "FQD Context_A field used for [...]" */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			struct qm_fqd_stashing stashing;
+			/*
+			 * 48-bit address of FQ context to
+			 * stash, must be cacheline-aligned
+			 */
+			u16 context_hi;
+			u32 context_lo;
+#else
+			u32 context_lo;
+			u16 context_hi;
+			struct qm_fqd_stashing stashing;
+#endif
+		} __packed;
+	} context_a;
+	struct qm_fqd_oac oac_query;
+} __packed;
+/* 64-bit converters for context_hi/lo */
+static inline u64 qm_fqd_stashing_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.context_hi << 32) |
+		(u64)fqd->context_a.context_lo;
+}
+
+static inline dma_addr_t qm_fqd_stashing_addr(const struct qm_fqd *fqd)
+{
+	return (dma_addr_t)qm_fqd_stashing_get64(fqd);
+}
+
+static inline u64 qm_fqd_context_a_get64(const struct qm_fqd *fqd)
+{
+	return ((u64)fqd->context_a.hi << 32) |
+		(u64)fqd->context_a.lo;
+}
+
+static inline void qm_fqd_stashing_set64(struct qm_fqd *fqd, u64 addr)
+{
+		fqd->context_a.context_hi = upper_32_bits(addr);
+		fqd->context_a.context_lo = lower_32_bits(addr);
+}
+
+static inline void qm_fqd_context_a_set64(struct qm_fqd *fqd, u64 addr)
+{
+	fqd->context_a.hi = upper_32_bits(addr);
+	fqd->context_a.lo = lower_32_bits(addr);
+}
+
+/* convert a threshold value into mant+exp representation */
+static inline int qm_fqd_taildrop_set(struct qm_fqd_taildrop *td, u32 val,
+				      int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	if (val > 0xe0000000)
+		return -ERANGE;
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	td->exp = e;
+	td->mant = val;
+	return 0;
+}
+
+/* and the other direction */
+static inline u32 qm_fqd_taildrop_get(const struct qm_fqd_taildrop *td)
+{
+	return (u32)td->mant << td->exp;
+}
+
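+/*
+ * Worked example: qm_fqd_taildrop_set(&td, 0x3000, 0) stores mant=0xc0,
+ * exp=6; qm_fqd_taildrop_get() then returns 0xc0 << 6, ie. 0x3000 again.
+ */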
+
+/* See "Frame Queue Descriptor (FQD)" */
+/* Frame Queue Descriptor (FQD) field 'fq_ctrl' uses these constants */
+#define QM_FQCTRL_MASK		0x07ff	/* 'fq_ctrl' flags; */
+#define QM_FQCTRL_CGE		0x0400	/* Congestion Group Enable */
+#define QM_FQCTRL_TDE		0x0200	/* Tail-Drop Enable */
+#define QM_FQCTRL_ORP		0x0100	/* ORP Enable */
+#define QM_FQCTRL_CTXASTASHING	0x0080	/* Context-A stashing */
+#define QM_FQCTRL_CPCSTASH	0x0040	/* CPC Stash Enable */
+#define QM_FQCTRL_FORCESFDR	0x0008	/* High-priority SFDRs */
+#define QM_FQCTRL_AVOIDBLOCK	0x0004	/* Don't block active */
+#define QM_FQCTRL_HOLDACTIVE	0x0002	/* Hold active in portal */
+#define QM_FQCTRL_PREFERINCACHE	0x0001	/* Aggressively cache FQD */
+#define QM_FQCTRL_LOCKINCACHE	QM_FQCTRL_PREFERINCACHE /* older naming */
+
+/* See "FQD Context_A field used for [...] */
+/* Frame Queue Descriptor (FQD) field 'CONTEXT_A' uses these constants */
+#define QM_STASHING_EXCL_ANNOTATION	0x04
+#define QM_STASHING_EXCL_DATA		0x02
+#define QM_STASHING_EXCL_CTX		0x01
+
+/* See "Intra Class Scheduling" */
+/* FQD field 'OAC' (Overhead ACcounting) uses these constants */
+#define QM_OAC_ICS		0x2 /* Accounting for Intra-Class Scheduling */
+#define QM_OAC_CG		0x1 /* Accounting for Congestion Groups */
+
+/*
+ * This struct represents the 32-bit "WR_PARM_[GYR]" parameters in CGR fields
+ * and associated commands/responses. The WRED parameters are calculated from
+ * these fields as follows;
+ *   MaxTH = MA * (2 ^ Mn)
+ *   Slope = SA / (2 ^ Sn)
+ *    MaxP = 4 * (Pn + 1)
+ */
+struct qm_cgr_wr_parm {
+	union {
+		u32 word;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 MA:8;
+			u32 Mn:5;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Sn:6;
+			u32 Pn:6;
+#else
+			u32 Pn:6;
+			u32 Sn:6;
+			u32 SA:7; /* must be between 64-127 */
+			u32 Mn:5;
+			u32 MA:8;
+#endif
+		} __packed;
+	};
+} __packed;
+/*
+ * This struct represents the 13-bit "CS_THRES" CGR field. In the corresponding
+ * management commands, this is padded to a 16-bit structure field, so that's
+ * how we represent it here. The congestion state threshold is calculated from
+ * these fields as follows;
+ *   CS threshold = TA * (2 ^ Tn)
+ */
+struct qm_cgr_cs_thres {
+	union {
+		u16 hword;
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 __reserved:3;
+			u16 TA:8;
+			u16 Tn:5;
+#else
+			u16 Tn:5;
+			u16 TA:8;
+			u16 __reserved:3;
+#endif
+		} __packed;
+	};
+} __packed;
+/*
+ * This identical structure of CGR fields is present in the "Init/Modify CGR"
+ * commands and the "Query CGR" result. It's suctioned out here into its own
+ * struct.
+ */
+struct __qm_mc_cgr {
+	struct qm_cgr_wr_parm wr_parm_g;
+	struct qm_cgr_wr_parm wr_parm_y;
+	struct qm_cgr_wr_parm wr_parm_r;
+	u8 wr_en_g;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_y;	/* boolean, use QM_CGR_EN */
+	u8 wr_en_r;	/* boolean, use QM_CGR_EN */
+	u8 cscn_en;	/* boolean, use QM_CGR_EN */
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 cscn_targ_upd_ctrl; /* use QM_CSCN_TARG_UDP_ */
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+#else
+			u16 cscn_targ_dcp_low;  /* CSCN_TARG_DCP low-16bits */
+			u16 cscn_targ_upd_ctrl; /* use QM_CSCN_TARG_UDP_ */
+#endif
+		};
+		u32 cscn_targ;	/* use QM_CGR_TARG_* */
+	};
+	u8 cstd_en;	/* boolean, use QM_CGR_EN */
+	u8 cs;		/* boolean, only used in query response */
+	union {
+		struct qm_cgr_cs_thres cs_thres;
+		/* use qm_cgr_cs_thres_set64() */
+		u16 __cs_thres;
+	};
+	u8 mode;	/* QMAN_CGR_MODE_FRAME not supported in rev1.0 */
+} __packed;
+#define QM_CGR_EN		0x01 /* For wr_en_*, cscn_en, cstd_en */
+#define QM_CGR_TARG_UDP_CTRL_WRITE_BIT	0x8000 /* value written to portal bit*/
+#define QM_CGR_TARG_UDP_CTRL_DCP	0x4000 /* 0: SWP, 1: DCP */
+#define QM_CGR_TARG_PORTAL(n)	(0x80000000 >> (n)) /* s/w portal, 0-9 */
+#define QM_CGR_TARG_FMAN0	0x00200000 /* direct-connect portal: fman0 */
+#define QM_CGR_TARG_FMAN1	0x00100000 /*			   : fman1 */
+/* Convert CGR thresholds to/from "cs_thres" format */
+static inline u64 qm_cgr_cs_thres_get64(const struct qm_cgr_cs_thres *th)
+{
+	return (u64)th->TA << th->Tn;
+}
+
+static inline int qm_cgr_cs_thres_set64(struct qm_cgr_cs_thres *th, u64 val,
+					int roundup)
+{
+	u32 e = 0;
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	th->Tn = e;
+	th->TA = val;
+	return 0;
+}
+
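+/*
+ * Worked example: qm_cgr_cs_thres_set64(&th, 0x1200, 0) stores TA=0x90,
+ * Tn=5; qm_cgr_cs_thres_get64() then returns 0x90 << 5, ie. 0x1200.
+ */
+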
+/* See 1.5.8.5.1: "Initialize FQ" */
+/* See 1.5.8.5.2: "Query FQ" */
+/* See 1.5.8.5.3: "Query FQ Non-Programmable Fields" */
+/* See 1.5.8.5.4: "Alter FQ State Commands " */
+/* See 1.5.8.6.1: "Initialize/Modify CGR" */
+/* See 1.5.8.6.2: "CGR Test Write" */
+/* See 1.5.8.6.3: "Query CGR" */
+/* See 1.5.8.6.4: "Query Congestion Group State" */
+struct qm_mcc_initfq {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	u32 fqid;	/* 24-bit */
+	u16 count;	/* Initialises 'count+1' FQDs */
+	struct qm_fqd fqd; /* the FQD fields go here */
+	u8 __reserved3[30];
+} __packed;
+struct qm_mcc_queryfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_queryfq_np {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2[56];
+} __packed;
+struct qm_mcc_alterfq {
+	u8 __reserved1[3];
+	u32 fqid;	/* 24-bit */
+	u8 __reserved2;
+	u8 count;	/* number of consecutive FQID */
+	u8 __reserved3[10];
+	u32 context_b;	/* frame queue context b */
+	u8 __reserved4[40];
+} __packed;
+struct qm_mcc_initcgr {
+	u8 __reserved1;
+	u16 we_mask;	/* Write Enable Mask */
+	struct __qm_mc_cgr cgr;	/* CGR fields */
+	u8 __reserved2[2];
+	u8 cgid;
+	u8 __reserved4[32];
+} __packed;
+struct qm_mcc_cgrtestwrite {
+	u8 __reserved1[2];
+	u8 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u8 __reserved2[23];
+	u8 cgid;
+	u8 __reserved3[32];
+} __packed;
+struct qm_mcc_querycgr {
+	u8 __reserved1[30];
+	u8 cgid;
+	u8 __reserved2[32];
+} __packed;
+struct qm_mcc_querycongestion {
+	u8 __reserved[63];
+} __packed;
+struct qm_mcc_querywq {
+	u8 __reserved;
+	/* select channel if verb != QUERYWQ_DEDICATED */
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved1:3;
+#else
+			u16 __reserved1:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved2[60];
+} __packed;
+
+struct qm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct qm_mcc_initfq initfq;
+		struct qm_mcc_queryfq queryfq;
+		struct qm_mcc_queryfq_np queryfq_np;
+		struct qm_mcc_alterfq alterfq;
+		struct qm_mcc_initcgr initcgr;
+		struct qm_mcc_cgrtestwrite cgrtestwrite;
+		struct qm_mcc_querycgr querycgr;
+		struct qm_mcc_querycongestion querycongestion;
+		struct qm_mcc_querywq querywq;
+	};
+} __packed;
+
+/* INITFQ-specific flags */
+#define QM_INITFQ_WE_MASK		0x01ff	/* 'Write Enable' flags; */
+#define QM_INITFQ_WE_OAC		0x0100
+#define QM_INITFQ_WE_ORPC		0x0080
+#define QM_INITFQ_WE_CGID		0x0040
+#define QM_INITFQ_WE_FQCTRL		0x0020
+#define QM_INITFQ_WE_DESTWQ		0x0010
+#define QM_INITFQ_WE_ICSCRED		0x0008
+#define QM_INITFQ_WE_TDTHRESH		0x0004
+#define QM_INITFQ_WE_CONTEXTB		0x0002
+#define QM_INITFQ_WE_CONTEXTA		0x0001
+/* INITCGR/MODIFYCGR-specific flags */
+#define QM_CGR_WE_MASK			0x07ff	/* 'Write Enable Mask'; */
+#define QM_CGR_WE_WR_PARM_G		0x0400
+#define QM_CGR_WE_WR_PARM_Y		0x0200
+#define QM_CGR_WE_WR_PARM_R		0x0100
+#define QM_CGR_WE_WR_EN_G		0x0080
+#define QM_CGR_WE_WR_EN_Y		0x0040
+#define QM_CGR_WE_WR_EN_R		0x0020
+#define QM_CGR_WE_CSCN_EN		0x0010
+#define QM_CGR_WE_CSCN_TARG		0x0008
+#define QM_CGR_WE_CSTD_EN		0x0004
+#define QM_CGR_WE_CS_THRES		0x0002
+#define QM_CGR_WE_MODE			0x0001
+
+struct qm_mcr_initfq {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_queryfq {
+	u8 __reserved1[8];
+	struct qm_fqd fqd;	/* the FQD fields are here */
+	u8 __reserved2[30];
+} __packed;
+struct qm_mcr_queryfq_np {
+	u8 __reserved1;
+	u8 state;	/* QM_MCR_NP_STATE_*** */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	u8 __reserved2;
+	u32 fqd_link:24;
+	u16 __reserved3:2;
+	u16 odp_seq:14;
+	u16 __reserved4:2;
+	u16 orp_nesn:14;
+	u16 __reserved5:1;
+	u16 orp_ea_hseq:15;
+	u16 __reserved6:1;
+	u16 orp_ea_tseq:15;
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+	u8 __reserved11[5];
+	u8 __reserved12:7;
+	u8 is:1;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#else
+	u8 __reserved2;
+	u32 fqd_link:24;
+
+	u16 odp_seq:14;
+	u16 __reserved3:2;
+
+	u16 orp_nesn:14;
+	u16 __reserved4:2;
+
+	u16 orp_ea_hseq:15;
+	u16 __reserved5:1;
+
+	u16 orp_ea_tseq:15;
+	u16 __reserved6:1;
+
+	u8 __reserved7;
+	u32 orp_ea_hptr:24;
+
+	u8 __reserved8;
+	u32 orp_ea_tptr:24;
+
+	u8 __reserved9;
+	u32 pfdr_hptr:24;
+
+	u8 __reserved10;
+	u32 pfdr_tptr:24;
+
+	u8 __reserved11[5];
+	u8 is:1;
+	u8 __reserved12:7;
+	u16 ics_surp;
+	u32 byte_cnt;
+	u8 __reserved13;
+	u32 frm_cnt:24;
+	u32 __reserved14;
+	u16 ra1_sfdr;	/* QM_MCR_NP_RA1_*** */
+	u16 ra2_sfdr;	/* QM_MCR_NP_RA2_*** */
+	u16 __reserved15;
+	u16 od1_sfdr;	/* QM_MCR_NP_OD1_*** */
+	u16 od2_sfdr;	/* QM_MCR_NP_OD2_*** */
+	u16 od3_sfdr;	/* QM_MCR_NP_OD3_*** */
+#endif
+} __packed;
+
+struct qm_mcr_alterfq {
+	u8 fqs;		/* Frame Queue Status */
+	u8 __reserved1[61];
+} __packed;
+struct qm_mcr_initcgr {
+	u8 __reserved1[62];
+} __packed;
+struct qm_mcr_cgrtestwrite {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	u32 __reserved3:24;
+	u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+	u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+	u32 __reserved4:24;
+	u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+	u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+	u16 lgt;	/* Last Group Tick */
+	u16 wr_prob_g;
+	u16 wr_prob_y;
+	u16 wr_prob_r;
+	u8 __reserved5[8];
+} __packed;
+struct qm_mcr_querycgr {
+	u16 __reserved1;
+	struct __qm_mc_cgr cgr; /* CGR fields */
+	u8 __reserved2[3];
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved3:24;
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 i_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 i_bcnt_hi:8;/* high 8-bits of 40-bit "Instant" */
+			u32 __reserved3:24;
+#endif
+		};
+		u64 i_bcnt;
+	};
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u32 __reserved4:24;
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+#else
+			u32 a_bcnt_lo;	/* low 32-bits of 40-bit */
+			u32 a_bcnt_hi:8;/* high 8-bits of 40-bit "Average" */
+			u32 __reserved4:24;
+#endif
+		};
+		u64 a_bcnt;
+	};
+	union {
+		u32 cscn_targ_swp[4];
+		u8 __reserved5[16];
+	};
+} __packed;
+
+struct __qm_mcr_querycongestion {
+	u32 state[8];
+};
+
+struct qm_mcr_querycongestion {
+	u8 __reserved[30];
+	/* Access this struct using QM_MCR_QUERYCONGESTION() */
+	struct __qm_mcr_querycongestion state;
+} __packed;
+struct qm_mcr_querywq {
+	union {
+		u16 channel_wq; /* ignores wq (3 lsbits) */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u16 id:13; /* qm_channel */
+			u16 __reserved:3;
+#else
+			u16 __reserved:3;
+			u16 id:13; /* qm_channel */
+#endif
+		} __packed channel;
+	};
+	u8 __reserved[28];
+	u32 wq_len[8];
+} __packed;
+
+struct qm_mc_result {
+	u8 verb;
+	u8 result;
+	union {
+		struct qm_mcr_initfq initfq;
+		struct qm_mcr_queryfq queryfq;
+		struct qm_mcr_queryfq_np queryfq_np;
+		struct qm_mcr_alterfq alterfq;
+		struct qm_mcr_initcgr initcgr;
+		struct qm_mcr_cgrtestwrite cgrtestwrite;
+		struct qm_mcr_querycgr querycgr;
+		struct qm_mcr_querycongestion querycongestion;
+		struct qm_mcr_querywq querywq;
+	};
+} __packed;
+
+#define QM_MCR_VERB_RRID		0x80
+#define QM_MCR_VERB_MASK		QM_MCC_VERB_MASK
+#define QM_MCR_VERB_INITFQ_PARKED	QM_MCC_VERB_INITFQ_PARKED
+#define QM_MCR_VERB_INITFQ_SCHED	QM_MCC_VERB_INITFQ_SCHED
+#define QM_MCR_VERB_QUERYFQ		QM_MCC_VERB_QUERYFQ
+#define QM_MCR_VERB_QUERYFQ_NP		QM_MCC_VERB_QUERYFQ_NP
+#define QM_MCR_VERB_QUERYWQ		QM_MCC_VERB_QUERYWQ
+#define QM_MCR_VERB_QUERYWQ_DEDICATED	QM_MCC_VERB_QUERYWQ_DEDICATED
+#define QM_MCR_VERB_ALTER_SCHED		QM_MCC_VERB_ALTER_SCHED
+#define QM_MCR_VERB_ALTER_FE		QM_MCC_VERB_ALTER_FE
+#define QM_MCR_VERB_ALTER_RETIRE	QM_MCC_VERB_ALTER_RETIRE
+#define QM_MCR_VERB_ALTER_OOS		QM_MCC_VERB_ALTER_OOS
+#define QM_MCR_RESULT_NULL		0x00
+#define QM_MCR_RESULT_OK		0xf0
+#define QM_MCR_RESULT_ERR_FQID		0xf1
+#define QM_MCR_RESULT_ERR_FQSTATE	0xf2
+#define QM_MCR_RESULT_ERR_NOTEMPTY	0xf3	/* OOS fails if FQ is !empty */
+#define QM_MCR_RESULT_ERR_BADCHANNEL	0xf4
+#define QM_MCR_RESULT_PENDING		0xf8
+#define QM_MCR_RESULT_ERR_BADCOMMAND	0xff
+#define QM_MCR_NP_STATE_FE		0x10
+#define QM_MCR_NP_STATE_R		0x08
+#define QM_MCR_NP_STATE_MASK		0x07	/* Reads FQD::STATE; */
+#define QM_MCR_NP_STATE_OOS		0x00
+#define QM_MCR_NP_STATE_RETIRED		0x01
+#define QM_MCR_NP_STATE_TEN_SCHED	0x02
+#define QM_MCR_NP_STATE_TRU_SCHED	0x03
+#define QM_MCR_NP_STATE_PARKED		0x04
+#define QM_MCR_NP_STATE_ACTIVE		0x05
+#define QM_MCR_NP_PTR_MASK		0x07ff	/* for RA[12] & OD[123] */
+#define QM_MCR_NP_RA1_NRA(v)		(((v) >> 14) & 0x3)	/* FQD::NRA */
+#define QM_MCR_NP_RA2_IT(v)		(((v) >> 14) & 0x1)	/* FQD::IT */
+#define QM_MCR_NP_OD1_NOD(v)		(((v) >> 14) & 0x3)	/* FQD::NOD */
+#define QM_MCR_NP_OD3_NPC(v)		(((v) >> 14) & 0x3)	/* FQD::NPC */
+#define QM_MCR_FQS_ORLPRESENT		0x02	/* ORL fragments to come */
+#define QM_MCR_FQS_NOTEMPTY		0x01	/* FQ has enqueued frames */
+/* This extracts the state for congestion group 'n' from a query response.
+ * Eg.
+ *   u8 cgr = [...];
+ *   struct qm_mc_result *res = [...];
+ *   printf("congestion group %d congestion state: %d\n", cgr,
+ *       QM_MCR_QUERYCONGESTION(&res->querycongestion.state, cgr));
+ */
+#define __CGR_WORD(num)		(num >> 5)
+#define __CGR_SHIFT(num)	(num & 0x1f)
+#define __CGR_NUM		(sizeof(struct __qm_mcr_querycongestion) << 3)
+static inline int QM_MCR_QUERYCONGESTION(struct __qm_mcr_querycongestion *p,
+					 u8 cgr)
+{
+	return p->state[__CGR_WORD(cgr)] & (0x80000000 >> __CGR_SHIFT(cgr));
+}
+
+	/* Portal and Frame Queues */
+/* Represents a managed portal */
+struct qman_portal;
+
+/*
+ * This object type represents QMan frame queue descriptors (FQD); it is
+ * cacheline-aligned and initialised by qman_create_fq(). The structure is
+ * defined further down.
+ */
+struct qman_fq;
+
+/*
+ * This object type represents a QMan congestion group, it is defined further
+ * down.
+ */
+struct qman_cgr;
+
+/*
+ * This enum, and the callback type that returns it, are used when handling
+ * dequeued frames via DQRR. Note that for "null" callbacks registered with the
+ * portal object (for handling dequeues that do not demux because context_b is
+ * NULL), the return value *MUST* be qman_cb_dqrr_consume.
+ */
+enum qman_cb_dqrr_result {
+	/* DQRR entry can be consumed */
+	qman_cb_dqrr_consume,
+	/* Like _consume, but requests parking - FQ must be held-active */
+	qman_cb_dqrr_park,
+	/* Does not consume, for DCA mode only. This allows out-of-order
+	 * consumes by explicit calls to qman_dca() and/or the use of implicit
+	 * DCA via EQCR entries.
+	 */
+	qman_cb_dqrr_defer,
+	/*
+	 * Stop processing without consuming this ring entry. Exits the current
+	 * qman_p_poll_dqrr() or interrupt-handling, as appropriate. If within
+	 * an interrupt handler, the callback would typically call
+	 * qman_irqsource_remove(QM_PIRQ_DQRI) before returning this value,
+	 * otherwise the interrupt will reassert immediately.
+	 */
+	qman_cb_dqrr_stop,
+	/* Like qman_cb_dqrr_stop, but consumes the current entry. */
+	qman_cb_dqrr_consume_stop
+};
+
+typedef enum qman_cb_dqrr_result (*qman_cb_dqrr)(struct qman_portal *qm,
+					struct qman_fq *fq,
+					const struct qm_dqrr_entry *dqrr);
+
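+/*
+ * A minimal (hypothetical) dequeue callback, with process_fd() standing in
+ * for application-defined frame handling:
+ *
+ *   static enum qman_cb_dqrr_result my_cb(struct qman_portal *qm,
+ *					    struct qman_fq *fq,
+ *					    const struct qm_dqrr_entry *dqrr)
+ *   {
+ *	   process_fd(&dqrr->fd);
+ *	   return qman_cb_dqrr_consume;
+ *   }
+ */
+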
+/*
+ * This callback type is used when handling ERNs, FQRNs and FQRLs via MR. They
+ * are always consumed after the callback returns.
+ */
+typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
+				const struct qm_mr_entry *msg);
+
+/* This callback type is used when handling DCP ERNs */
+typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
+				const struct qm_mr_entry *msg);
+/*
+ * s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
+ * held-active + held-suspended are just "sched". Things like "retired" will not
+ * be assumed until it is complete (ie. QMAN_FQ_STATE_CHANGING is set until
+ * then, to indicate it's completing and to gate attempts to retry the retire
+ * command). Note, park commands do not set QMAN_FQ_STATE_CHANGING because it's
+ * technically impossible in the case of enqueue DCAs (which refer to DQRR ring
+ * index rather than the FQ that ring entry corresponds to), so repeated park
+ * commands are allowed (if you're silly enough to try) but won't change FQ
+ * state, and the resulting park notifications move FQs from "sched" to
+ * "parked".
+ */
+enum qman_fq_state {
+	qman_fq_state_oos,
+	qman_fq_state_parked,
+	qman_fq_state_sched,
+	qman_fq_state_retired
+};
+
+
+/*
+ * Frame queue objects (struct qman_fq) are stored within memory passed to
+ * qman_create_fq(), as this allows stashing of caller-provided demux callback
+ * pointers at no extra cost to stashing of (driver-internal) FQ state. If the
+ * caller wishes to add per-FQ state and have it benefit from dequeue-stashing,
+ * they should;
+ *
+ * (a) extend the qman_fq structure with their state; eg.
+ *
+ *     // myfq is allocated and driver_fq callbacks filled in;
+ *     struct my_fq {
+ *	   struct qman_fq base;
+ *	   int an_extra_field;
+ *	   [ ... add other fields to be associated with each FQ ...]
+ *     } *myfq = some_my_fq_allocator();
+ *     struct qman_fq *fq = qman_create_fq(fqid, flags, &myfq->base);
+ *
+ *     // in a dequeue callback, access extra fields from 'fq' via a cast;
+ *     struct my_fq *myfq = (struct my_fq *)fq;
+ *     do_something_with(myfq->an_extra_field);
+ *     [...]
+ *
+ * (b) when and if configuring the FQ for context stashing, specify how ever
+ *     many cachelines are required to stash 'struct my_fq', to accelerate not
+ *     only the QMan driver but the callback as well.
+ */
+
+struct qman_fq_cb {
+	qman_cb_dqrr dqrr;	/* for dequeued frames */
+	qman_cb_mr ern;		/* for s/w ERNs */
+	qman_cb_mr fqs;		/* frame-queue state changes*/
+};
+
+struct qman_fq {
+	/* Caller of qman_create_fq() provides these demux callbacks */
+	struct qman_fq_cb cb;
+	/*
+	 * These are internal to the driver, don't touch. In particular, they
+	 * may change, be removed, or extended (so you shouldn't rely on
+	 * sizeof(qman_fq) being a constant).
+	 */
+	spinlock_t fqlock;
+	u32 fqid;
+	/* DPDK Interface */
+	void *dpaa_intf;
+
+	volatile unsigned long flags;
+	enum qman_fq_state state;
+	int cgr_groupid;
+	struct rb_node node;
+};
+
+/*
+ * This callback type is used when handling congestion group entry/exit.
+ * 'congested' is non-zero on congestion-entry, and zero on congestion-exit.
+ */
+typedef void (*qman_cb_cgr)(struct qman_portal *qm,
+			    struct qman_cgr *cgr, int congested);
+
+struct qman_cgr {
+	/* Set these prior to qman_create_cgr() */
+	u32 cgrid; /* 0..255, but u32 to allow specials like -1, 256, etc.*/
+	qman_cb_cgr cb;
+	/* These are private to the driver */
+	u16 chan; /* portal channel this object is created on */
+	struct list_head node;
+};
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_QMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 4ff48c6..b0d953f 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -47,6 +47,10 @@
 extern "C" {
 #endif
 
+/* Thread-entry/exit hooks */
+int qman_thread_init(void);
+int qman_thread_finish(void);
+
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
 /* Obtain and free raw (uninitialized) portals */
@@ -81,6 +85,15 @@ int qman_free_raw_portal(struct dpaa_raw_portal *portal);
 int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
+/* Post-process interrupts. NB, the kernel IRQ handler disables the interrupt
+ * line before notifying us, and this post-processing re-enables it once
+ * processing is complete. As such, it is essential to call this before going
+ * into another blocking read/select/poll.
+ */
+void qman_thread_irq(void);
+
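+/* A usage sketch, not a prescription: a portal thread typically blocks on
+ * its portal file descriptor and re-enables the kernel-disabled IRQ line
+ * via qman_thread_irq() before blocking again (how the fd is obtained is
+ * environment-specific and only assumed here):
+ *
+ *     // wait for the portal IRQ via read()/select()/poll() on the fd,
+ *     // then process work and re-arm the interrupt line:
+ *     qman_poll();
+ *     qman_thread_irq();
+ */
+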
+/* Global setup */
+int qman_global_init(void);
 #ifdef __cplusplus
 }
 #endif
-- 
2.9.3


* [PATCH v6 11/40] bus/dpaa: add QMan driver core routines
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (9 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
                             ` (30 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/defconfig_arm64-dpaa-linuxapp-gcc  |    1 +
 drivers/bus/dpaa/Makefile                 |    2 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |   88 ++
 drivers/bus/dpaa/base/qbman/qman.c        | 2402 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h        |  888 +++++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c |   12 +
 drivers/bus/dpaa/include/fsl_qman.h       |  755 +++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |    1 +
 8 files changed, 4149 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h

diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 8316fc9..4d6b046 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -49,3 +49,4 @@ CONFIG_RTE_PKTMBUF_HEADROOM=128
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_DPAA_HWDEBUG=n
diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index c9c15f8..5957c15 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -63,7 +63,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
+	base/qbman/dpaa_alloc.c \
 	base/qbman/dpaa_sys.c
 
 # Link Pthread
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
new file mode 100644
index 0000000..690576a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2009-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "dpaa_sys.h"
+#include <process.h>
+#include <fsl_qman.h>
+
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_fqid, result, count, align, partial);
+}
+
+void qman_release_fqid_range(u32 fqid, u32 count)
+{
+	process_release(dpaa_id_fqid, fqid, count);
+}
+
+int qman_reserve_fqid_range(u32 fqid, unsigned int count)
+{
+	return process_reserve(dpaa_id_fqid, fqid, count);
+}
+
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_qpool, result, count, align, partial);
+}
+
+void qman_release_pool_range(u32 pool, u32 count)
+{
+	process_release(dpaa_id_qpool, pool, count);
+}
+
+int qman_reserve_pool_range(u32 pool, u32 count)
+{
+	return process_reserve(dpaa_id_qpool, pool, count);
+}
+
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_cgrid, result, count, align, partial);
+}
+
+void qman_release_cgrid_range(u32 cgrid, u32 count)
+{
+	process_release(dpaa_id_cgrid, cgrid, count);
+}
+
+int qman_reserve_cgrid_range(u32 cgrid, u32 count)
+{
+	return process_reserve(dpaa_id_cgrid, cgrid, count);
+}
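+
+/*
+ * Usage sketch (assumes the range allocator has been seeded by the bus
+ * layer; return conventions follow process_alloc()):
+ *
+ *     u32 base;
+ *     int num = qman_alloc_fqid_range(&base, 8, 8, 0);
+ *     if (num == 8) {
+ *             // base..base+7 are ours until released
+ *             qman_release_fqid_range(base, num);
+ *     }
+ */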
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
new file mode 100644
index 0000000..9b1630b
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -0,0 +1,2402 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define DQRR_MAXFILL	15
+#define EQCR_ITHRESH	4	/* if EQCR congests, interrupt threshold */
+#define IRQNAME		"QMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "QMan portal %d" */
+/* maximum number of DQRR entries to process in qman_poll() */
+#define FSL_QMAN_POLL_LIMIT 8
+
+/* Lock/unlock frame queues, subject to the "LOCKED" flag. This is about
+ * inter-processor locking only. Note, FQLOCK() is always called either under a
+ * local_irq_save() or from interrupt context - hence there's no need for irq
+ * protection (and indeed, attempting to nest irq-protection doesn't work, as
+ * the "irq en/disable" machinery isn't recursive...).
+ */
+#define FQLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_lock(&__fq478->fqlock); \
+	} while (0)
+#define FQUNLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_unlock(&__fq478->fqlock); \
+	} while (0)
+
+static inline void fq_set(struct qman_fq *fq, u32 mask)
+{
+	dpaa_set_bits(mask, &fq->flags);
+}
+
+static inline void fq_clear(struct qman_fq *fq, u32 mask)
+{
+	dpaa_clear_bits(mask, &fq->flags);
+}
+
+static inline int fq_isset(struct qman_fq *fq, u32 mask)
+{
+	return fq->flags & mask;
+}
+
+static inline int fq_isclear(struct qman_fq *fq, u32 mask)
+{
+	return !(fq->flags & mask);
+}
+
+struct qman_portal {
+	struct qm_portal p;
+	/* PORTAL_BITS_*** - dynamic, strictly internal */
+	unsigned long bits;
+	/* interrupt sources processed by portal_isr(), configurable */
+	unsigned long irq_sources;
+	u32 use_eqcr_ci_stashing;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* only 1 volatile dequeue at a time */
+	struct qman_fq *vdqcr_owned;
+	u32 sdqcr;
+	int dqrr_disable_ref;
+	/* A portal-specific handler for DCP ERNs. If this is NULL, the global
+	 * handler is called instead.
+	 */
+	qman_cb_dc_ern cb_dc_ern;
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct qm_portal_config *config;
+	struct dpa_rbtree retire_table;
+	char irqname[MAX_IRQNAME];
+	/* 2-element array. cgrs[0] is mask, cgrs[1] is snapshot. */
+	struct qman_cgrs *cgrs;
+	/* linked-list of CSCN handlers. */
+	struct list_head cgr_cbs;
+	/* list lock */
+	spinlock_t cgr_lock;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	/* Keep a shadow copy of the DQRR on LE systems as the SW needs to
+	 * do byte swaps of DQRR read only memory.  First entry must be aligned
+	 * to 2 ** 10 to ensure DQRR index calculations based on the shadow
+	 * copy address (6 bits for address shift + 4 bits for the DQRR size).
+	 */
+	struct qm_dqrr_entry shadow_dqrr[QM_DQRR_SIZE]
+		    __attribute__((aligned(1024)));
+#endif
+};
+
+/* Global handler for DCP ERNs. Used when the portal receiving the message does
+ * not have a portal-specific handler.
+ */
+static qman_cb_dc_ern cb_dc_ern;
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static u16 affine_channels[NR_CPUS];
+static RTE_DEFINE_PER_LCORE(struct qman_portal, qman_affine_portal);
+
+static inline struct qman_portal *get_affine_portal(void)
+{
+	return &RTE_PER_LCORE(qman_affine_portal);
+}
+
+/* This gives a FQID->FQ lookup to cover the fact that we can't directly demux
+ * retirement notifications (the fact they are sometimes h/w-consumed means that
+ * contextB isn't always a s/w demux - and as we can't know which case it is
+ * when looking at the notification, we have to use the slow lookup for all of
+ * them). NB, it's possible to have multiple FQ objects refer to the same FQID
+ * (though at most one of them should be the consumer), so this table isn't for
+ * all FQs - FQs are added when retirement commands are issued, and removed when
+ * they complete, which also massively reduces the size of this table.
+ */
+IMPLEMENT_DPAA_RBTREE(fqtree, struct qman_fq, node, fqid);
+/*
+ * This is what everything can wait on, even if it migrates to a different cpu
+ * to the one whose affine portal it is waiting on.
+ */
+static DECLARE_WAIT_QUEUE_HEAD(affine_queue);
+
+static inline int table_push_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	int ret = fqtree_push(&p->retire_table, fq);
+
+	if (ret)
+		pr_err("ERROR: double FQ-retirement %d\n", fq->fqid);
+	return ret;
+}
+
+static inline void table_del_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	fqtree_del(&p->retire_table, fq);
+}
+
+static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
+{
+	return fqtree_find(&p->retire_table, fqid);
+}
+
+static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to HW format */
+	fqd->fq_ctrl = cpu_to_be16(fqd->fq_ctrl);
+	fqd->dest_wq = cpu_to_be16(fqd->dest_wq);
+	fqd->ics_cred = cpu_to_be16(fqd->ics_cred);
+	fqd->context_b = cpu_to_be32(fqd->context_b);
+	fqd->context_a.opaque = cpu_to_be64(fqd->context_a.opaque);
+	fqd->opaque_td = cpu_to_be16(fqd->opaque_td);
+}
+
+static inline void hw_fqd_to_cpu(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to CPU format */
+	fqd->fq_ctrl = be16_to_cpu(fqd->fq_ctrl);
+	fqd->dest_wq = be16_to_cpu(fqd->dest_wq);
+	fqd->ics_cred = be16_to_cpu(fqd->ics_cred);
+	fqd->context_b = be32_to_cpu(fqd->context_b);
+	fqd->context_a.opaque = be64_to_cpu(fqd->context_a.opaque);
+}
+
+static inline void cpu_to_hw_fd(struct qm_fd *fd)
+{
+	fd->addr = cpu_to_be40(fd->addr);
+	fd->status = cpu_to_be32(fd->status);
+	fd->opaque = cpu_to_be32(fd->opaque);
+}
+
+static inline void hw_fd_to_cpu(struct qm_fd *fd)
+{
+	fd->addr = be40_to_cpu(fd->addr);
+	fd->status = be32_to_cpu(fd->status);
+	fd->opaque = be32_to_cpu(fd->opaque);
+}
+
+/* In the case that slow- and fast-path handling are both done by qman_poll()
+ * (ie. because there is no interrupt handling), we ought to balance how often
+ * we do the fast-path poll versus the slow-path poll. We'll use two decrementer
+ * sources, so we call the fast poll 'n' times before calling the slow poll
+ * once. The idle decrementer constant is used when the last slow-poll detected
+ * no work to do, and the busy decrementer constant when the last slow-poll had
+ * work to do.
+ */
+#define SLOW_POLL_IDLE   1000
+#define SLOW_POLL_BUSY   10
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is);
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit);
+
+/* Portal interrupt handler */
+static irqreturn_t portal_isr(__always_unused int irq, void *ptr)
+{
+	struct qman_portal *p = ptr;
+	/*
+	 * The CSCI/CCSCI source is cleared inside __poll_portal_slow(), because
+	 * it could race against a Query Congestion State command also given
+	 * as part of the handling of this interrupt source. We mustn't
+	 * clear it a second time in this top-level function.
+	 */
+	u32 clear = QM_DQAVAIL_MASK | (p->irq_sources &
+		~(QM_PIRQ_CSCI | QM_PIRQ_CCSCI));
+	u32 is = qm_isr_status_read(&p->p) & p->irq_sources;
+	/* DQRR-handling if it's interrupt-driven */
+	if (is & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+	/* Handling of anything else that's interrupt-driven */
+	clear |= __poll_portal_slow(p, is);
+	qm_isr_status_clear(&p->p, clear);
+	return IRQ_HANDLED;
+}
+
+/* This inner version is used privately by qman_create_affine_portal(), as well
+ * as by the exported qman_stop_dequeues().
+ */
+static inline void qman_stop_dequeues_ex(struct qman_portal *p)
+{
+	if (!(p->dqrr_disable_ref++))
+		qm_dqrr_set_maxfill(&p->p, 0);
+}
+
+static int drain_mr_fqrni(struct qm_portal *p)
+{
+	const struct qm_mr_entry *msg;
+loop:
+	msg = qm_mr_current(p);
+	if (!msg) {
+		/*
+		 * if MR was full and h/w had other FQRNI entries to produce, we
+		 * need to allow it time to produce those entries once the
+		 * existing entries are consumed. A worst-case situation
+		 * (fully-loaded system) means h/w sequencers may have to do 3-4
+		 * other things before servicing the portal's MR pump, each of
+		 * which (if slow) may take ~50 qman cycles (which is ~200
+		 * processor cycles). So rounding up and then multiplying this
+		 * worst-case estimate by a factor of 10, just to be
+		 * ultra-paranoid, goes as high as 10,000 cycles. NB, we consume
+		 * one entry at a time, so h/w has an opportunity to produce new
+		 * entries well before the ring has been fully consumed, so
+		 * we're being *really* paranoid here.
+		 */
+		u64 now, then = mfatb();
+
+		do {
+			now = mfatb();
+		} while ((then + 10000) > now);
+		msg = qm_mr_current(p);
+		if (!msg)
+			return 0;
+	}
+	if ((msg->verb & QM_MR_VERB_TYPE_MASK) != QM_MR_VERB_FQRNI) {
+		/* We aren't draining anything but FQRNIs */
+		pr_err("Found verb 0x%x in MR\n", msg->verb);
+		return -1;
+	}
+	qm_mr_next(p);
+	qm_mr_cci_consume(p, 1);
+	goto loop;
+}
+
+static inline int qm_eqcr_init(struct qm_portal *portal,
+			       enum qm_eqcr_pmode pmode,
+			       unsigned int eq_stash_thresh,
+			       int eq_stash_prio)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u32 cfg;
+	u8 pi;
+
+	eqcr->ring = portal->addr.ce + QM_CL_EQCR;
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	eqcr->cursor = eqcr->ring + pi;
+	eqcr->vbit = (qm_in(EQCR_PI_CINH) & QM_EQCR_SIZE) ?
+			QM_EQCR_VERB_VBIT : 0;
+	eqcr->available = QM_EQCR_SIZE - 1 -
+			qm_cyc_diff(QM_EQCR_SIZE, eqcr->ci, pi);
+	eqcr->ithresh = qm_in(EQCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 0;
+	eqcr->pmode = pmode;
+#endif
+	cfg = (qm_in(CFG) & 0x00ffffff) |
+		(eq_stash_thresh << 28) | /* QCSP_CFG: EST */
+		(eq_stash_prio << 26)	| /* QCSP_CFG: EP */
+		((pmode & 0x3) << 24);	/* QCSP_CFG::EPM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_eqcr_finish(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 pi, ci;
+	u32 cfg;
+
+	/*
+	 * Disable EQCI stashing because the QMan only
+	 * presents the value it previously stashed to
+	 * maintain coherency.  Setting the stash threshold
+	 * to 1 then 0 ensures that QMan has resynchronized
+	 * its internal copy so that the portal is clean
+	 * when it is reinitialized in the future
+	 */
+	cfg = (qm_in(CFG) & 0x0fffffff) |
+		(1 << 28); /* QCSP_CFG: EST */
+	qm_out(CFG, cfg);
+	cfg &= 0x0fffffff; /* stash threshold = 0 */
+	qm_out(CFG, cfg);
+
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+
+	/* Refresh EQCR CI cache value */
+	qm_cl_invalidate(EQCR_CI);
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (pi != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("losing uncommitted EQCR entries\n");
+	if (ci != eqcr->ci)
+		pr_crit("missing existing EQCR completions\n");
+	if (eqcr->ci != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("EQCR destroyed unquiesced\n");
+}
+
+static inline int qm_dqrr_init(struct qm_portal *portal,
+			__maybe_unused const struct qm_portal_config *config,
+			enum qm_dqrr_dmode dmode,
+			__maybe_unused enum qm_dqrr_pmode pmode,
+			enum qm_dqrr_cmode cmode, u8 max_fill)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u32 cfg;
+
+	/* Make sure the DQRR will be idle when we enable */
+	qm_out(DQRR_SDQCR, 0);
+	qm_out(DQRR_VDQCR, 0);
+	qm_out(DQRR_PDQCR, 0);
+	dqrr->ring = portal->addr.ce + QM_CL_DQRR;
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->cursor = dqrr->ring + dqrr->ci;
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+	dqrr->vbit = (qm_in(DQRR_PI_CINH) & QM_DQRR_SIZE) ?
+			QM_DQRR_VERB_VBIT : 0;
+	dqrr->ithresh = qm_in(DQRR_ITR);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	dqrr->dmode = dmode;
+	dqrr->pmode = pmode;
+	dqrr->cmode = cmode;
+#endif
+	/* Invalidate every ring entry before beginning */
+	for (cfg = 0; cfg < QM_DQRR_SIZE; cfg++)
+		dccivac(qm_cl(dqrr->ring, cfg));
+	cfg = (qm_in(CFG) & 0xff000f00) |
+		((max_fill & (QM_DQRR_SIZE - 1)) << 20) | /* DQRR_MF */
+		((dmode & 1) << 18) |			/* DP */
+		((cmode & 3) << 16) |			/* DCM */
+		0xa0 |					/* RE+SE */
+		(0 ? 0x40 : 0) |			/* Ignore RP */
+		(0 ? 0x10 : 0);				/* Ignore SP */
+	qm_out(CFG, cfg);
+	qm_dqrr_set_maxfill(portal, max_fill);
+	return 0;
+}
+
+static inline void qm_dqrr_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if ((dqrr->cmode != qm_dqrr_cdc) &&
+	    (dqrr->ci != DQRR_PTR2IDX(dqrr->cursor)))
+		pr_crit("Ignoring completed DQRR entries\n");
+#endif
+}
+
+static inline int qm_mr_init(struct qm_portal *portal,
+			     __maybe_unused enum qm_mr_pmode pmode,
+			     enum qm_mr_cmode cmode)
+{
+	register struct qm_mr *mr = &portal->mr;
+	u32 cfg;
+
+	mr->ring = portal->addr.ce + QM_CL_MR;
+	mr->pi = qm_in(MR_PI_CINH) & (QM_MR_SIZE - 1);
+	mr->ci = qm_in(MR_CI_CINH) & (QM_MR_SIZE - 1);
+	mr->cursor = mr->ring + mr->ci;
+	mr->fill = qm_cyc_diff(QM_MR_SIZE, mr->ci, mr->pi);
+	mr->vbit = (qm_in(MR_PI_CINH) & QM_MR_SIZE) ? QM_MR_VERB_VBIT : 0;
+	mr->ithresh = qm_in(MR_ITR);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mr->pmode = pmode;
+	mr->cmode = cmode;
+#endif
+	cfg = (qm_in(CFG) & 0xfffff0ff) |
+		((cmode & 1) << 8);		/* QCSP_CFG:MM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+	const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+	DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+		mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+		if (!mr->pi)
+			mr->vbit ^= QM_MR_VERB_VBIT;
+		mr->fill++;
+		res = MR_INC(res);
+	}
+	dcbit_ro(res);
+}
+
+static inline
+struct qman_portal *qman_create_portal(
+			struct qman_portal *portal,
+			      const struct qm_portal_config *c,
+			      const struct qman_cgrs *cgrs)
+{
+	struct qm_portal *p;
+	char buf[16];
+	int ret;
+	u32 isdr;
+
+	p = &portal->p;
+
+	portal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+	/*
+	 * prep the low-level portal struct with the mapped addresses from the
+	 * config; everything that follows depends on it, and "config" is
+	 * retained mainly for later (de)reference
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	/*
+	 * If CI-stashing is used, the current defaults use a threshold of 3,
+	 * and stash with higher-than-DQRR priority.
+	 */
+	if (qm_eqcr_init(p, qm_eqcr_pvb,
+			 portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {
+		pr_err("Qman EQCR initialisation failed\n");
+		goto fail_eqcr;
+	}
+	if (qm_dqrr_init(p, c, qm_dqrr_dpush, qm_dqrr_pvb,
+			 qm_dqrr_cdc, DQRR_MAXFILL)) {
+		pr_err("Qman DQRR initialisation failed\n");
+		goto fail_dqrr;
+	}
+	if (qm_mr_init(p, qm_mr_pvb, qm_mr_cci)) {
+		pr_err("Qman MR initialisation failed\n");
+		goto fail_mr;
+	}
+	if (qm_mc_init(p)) {
+		pr_err("Qman MC initialisation failed\n");
+		goto fail_mc;
+	}
+
+	/* static interrupt-gating controls */
+	qm_dqrr_set_ithresh(p, 0);
+	qm_mr_set_ithresh(p, 0);
+	qm_isr_set_iperiod(p, 0);
+	portal->cgrs = kmalloc(2 * sizeof(*cgrs), GFP_KERNEL);
+	if (!portal->cgrs)
+		goto fail_cgrs;
+	/* initial snapshot is no-depletion */
+	qman_cgrs_init(&portal->cgrs[1]);
+	if (cgrs)
+		portal->cgrs[0] = *cgrs;
+	else
+		/* if the given mask is NULL, assume all CGRs can be seen */
+		qman_cgrs_fill(&portal->cgrs[0]);
+	INIT_LIST_HEAD(&portal->cgr_cbs);
+	spin_lock_init(&portal->cgr_lock);
+	portal->bits = 0;
+	portal->slowpoll = 0;
+	portal->sdqcr = QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |
+			QM_SDQCR_DEDICATED_PRECEDENCE | QM_SDQCR_TYPE_PRIO_QOS |
+			QM_SDQCR_TOKEN_SET(0xab) | QM_SDQCR_CHANNELS_DEDICATED;
+	portal->dqrr_disable_ref = 0;
+	portal->cb_dc_ern = NULL;
+	sprintf(buf, "qportal-%d", c->channel);
+	dpa_rbtree_init(&portal->retire_table);
+	isdr = 0xffffffff;
+	qm_isr_disable_write(p, isdr);
+	portal->irq_sources = 0;
+	qm_isr_enable_write(p, portal->irq_sources);
+	qm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, portal_isr, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need EQCR to be empty before continuing */
+	isdr &= ~QM_PIRQ_EQCI;
+	qm_isr_disable_write(p, isdr);
+	ret = qm_eqcr_get_fill(p);
+	if (ret) {
+		pr_err("Qman EQCR unclean\n");
+		goto fail_eqcr_empty;
+	}
+	isdr &= ~(QM_PIRQ_DQRI | QM_PIRQ_MRI);
+	qm_isr_disable_write(p, isdr);
+	if (qm_dqrr_current(p)) {
+		pr_err("Qman DQRR unclean\n");
+		qm_dqrr_cdc_consume_n(p, 0xffff);
+	}
+	/* special handling: drain just in case it's a few FQRNIs */
+	if (qm_mr_current(p) && drain_mr_fqrni(p)) {
+		pr_err("Qman MR unclean\n");
+		goto fail_dqrr_mr_empty;
+	}
+	/* Success */
+	portal->config = c;
+	qm_isr_disable_write(p, 0);
+	qm_isr_uninhibit(p);
+	/* Write a sane SDQCR */
+	qm_dqrr_sdqcr_set(p, portal->sdqcr);
+	return portal;
+fail_dqrr_mr_empty:
+fail_eqcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->cgrs);
+	spin_lock_destroy(&portal->cgr_lock);
+fail_cgrs:
+	qm_mc_finish(p);
+fail_mc:
+	qm_mr_finish(p);
+fail_mr:
+	qm_dqrr_finish(p);
+fail_dqrr:
+	qm_eqcr_finish(p);
+fail_eqcr:
+	return NULL;
+}
+
+struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
+					      const struct qman_cgrs *cgrs)
+{
+	struct qman_portal *res;
+	struct qman_portal *portal = get_affine_portal();
+	/* A criterion for calling this function (from qman_driver.c) is that
+	 * we're already affine to the cpu and won't schedule onto another cpu.
+	 */
+
+	res = qman_create_portal(portal, c, cgrs);
+	if (res) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		affine_channels[c->cpu] = c->channel;
+		spin_unlock(&affine_mask_lock);
+	}
+	return res;
+}
+
+static inline
+void qman_destroy_portal(struct qman_portal *qm)
+{
+	const struct qm_portal_config *pcfg;
+
+	/* Stop dequeues on the portal */
+	qm_dqrr_sdqcr_set(&qm->p, 0);
+
+	/*
+	 * NB we do this to "quiesce" EQCR. If we add enqueue-completions or
+	 * something related to QM_PIRQ_EQCI, this may need fixing.
+	 * Also, due to the prefetching model used for CI updates in the enqueue
+	 * path, this update will only invalidate the CI cacheline *after*
+	 * working on it, so we need to call this twice to ensure a full update
+	 * irrespective of where the enqueue processing was at when the teardown
+	 * began.
+	 */
+	qm_eqcr_cce_update(&qm->p);
+	qm_eqcr_cce_update(&qm->p);
+	pcfg = qm->config;
+
+	free_irq(pcfg->irq, qm);
+
+	kfree(qm->cgrs);
+	qm_mc_finish(&qm->p);
+	qm_mr_finish(&qm->p);
+	qm_dqrr_finish(&qm->p);
+	qm_eqcr_finish(&qm->p);
+
+	qm->config = NULL;
+
+	spin_lock_destroy(&qm->cgr_lock);
+}
+
+const struct qm_portal_config *qman_destroy_affine_portal(void)
+{
+	/* We don't want to redirect if we're a slave, use "raw" */
+	struct qman_portal *qm = get_affine_portal();
+	const struct qm_portal_config *pcfg;
+	int cpu;
+
+	pcfg = qm->config;
+	cpu = pcfg->cpu;
+
+	qman_destroy_portal(qm);
+
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int qman_get_portal_index(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+/* Inline helper to reduce nesting in __poll_portal_slow() */
+static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
+				   const struct qm_mr_entry *msg, u8 verb)
+{
+	FQLOCK(fq);
+	switch (verb) {
+	case QM_MR_VERB_FQRL:
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_ORL));
+		fq_clear(fq, QMAN_FQ_STATE_ORL);
+		table_del_fq(p, fq);
+		break;
+	case QM_MR_VERB_FQRN:
+		DPAA_ASSERT((fq->state == qman_fq_state_parked) ||
+			    (fq->state == qman_fq_state_sched));
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_CHANGING));
+		fq_clear(fq, QMAN_FQ_STATE_CHANGING);
+		if (msg->fq.fqs & QM_MR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (msg->fq.fqs & QM_MR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		fq->state = qman_fq_state_retired;
+		break;
+	case QM_MR_VERB_FQPN:
+		DPAA_ASSERT(fq->state == qman_fq_state_sched);
+		DPAA_ASSERT(fq_isclear(fq, QMAN_FQ_STATE_CHANGING));
+		fq->state = qman_fq_state_parked;
+	}
+	FQUNLOCK(fq);
+}
+
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
+{
+	const struct qm_mr_entry *msg;
+	struct qm_mr_entry swapped_msg;
+
+	if (is & QM_PIRQ_CSCI) {
+		struct qman_cgrs rr, c;
+		struct qm_mc_result *mcr;
+		struct qman_cgr *cgr;
+
+		spin_lock(&p->cgr_lock);
+		/*
+		 * The CSCI bit must be cleared _before_ issuing the
+		 * Query Congestion State command, to ensure that a long
+		 * CGR State Change callback cannot miss an intervening
+		 * state change.
+		 */
+		qm_isr_status_clear(&p->p, QM_PIRQ_CSCI);
+		qm_mc_start(&p->p);
+		qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+		while (!(mcr = qm_mc_result(&p->p)))
+			cpu_relax();
+		/* mask out the ones I'm not interested in */
+		qman_cgrs_and(&rr, (const struct qman_cgrs *)
+			&mcr->querycongestion.state, &p->cgrs[0]);
+		/* check previous snapshot for delta, enter/exit congestion */
+		qman_cgrs_xor(&c, &rr, &p->cgrs[1]);
+		/* update snapshot */
+		qman_cgrs_cp(&p->cgrs[1], &rr);
+		/* Invoke callback */
+		list_for_each_entry(cgr, &p->cgr_cbs, node)
+			if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid))
+				cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));
+		spin_unlock(&p->cgr_lock);
+	}
+
+	if (is & QM_PIRQ_EQRI) {
+		qm_eqcr_cce_update(&p->p);
+		qm_eqcr_set_ithresh(&p->p, 0);
+		wake_up(&affine_queue);
+	}
+
+	if (is & QM_PIRQ_MRI) {
+		struct qman_fq *fq;
+		u8 verb, num = 0;
+mr_loop:
+		qm_mr_pvb_update(&p->p);
+		msg = qm_mr_current(&p->p);
+		if (!msg)
+			goto mr_done;
+		swapped_msg = *msg;
+		hw_fd_to_cpu(&swapped_msg.ern.fd);
+		verb = msg->verb & QM_MR_VERB_TYPE_MASK;
+		/* The message is a software ERN iff the 0x20 bit is clear */
+		if (verb & 0x20) {
+			switch (verb) {
+			case QM_MR_VERB_FQRNI:
+				/* nada, we drop FQRNIs on the floor */
+				break;
+			case QM_MR_VERB_FQRN:
+			case QM_MR_VERB_FQRL:
+				/* Lookup in the retirement table */
+				fq = table_find_fq(p,
+						   be32_to_cpu(msg->fq.fqid));
+				DPAA_BUG_ON(!fq);
+				fq_state_change(p, fq, &swapped_msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_FQPN:
+				/* Parked */
+				fq = (void *)(uintptr_t)
+					be32_to_cpu(msg->fq.contextB);
+				fq_state_change(p, fq, msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_DC_ERN:
+				/* DCP ERN */
+				if (p->cb_dc_ern)
+					p->cb_dc_ern(p, msg);
+				else if (cb_dc_ern)
+					cb_dc_ern(p, msg);
+				else {
+					static int warn_once;
+
+					if (!warn_once) {
+						pr_crit("Leaking DCP ERNs!\n");
+						warn_once = 1;
+					}
+				}
+				break;
+			default:
+				pr_crit("Invalid MR verb 0x%02x\n", verb);
+			}
+		} else {
+			/* It's a software ERN */
+			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+			fq->cb.ern(p, fq, &swapped_msg);
+		}
+		num++;
+		qm_mr_next(&p->p);
+		goto mr_loop;
+mr_done:
+		qm_mr_cci_consume(&p->p, num);
+	}
+	/*
+	 * QM_PIRQ_CSCI/CCSCI has already been cleared, as part of its specific
+	 * processing. If that interrupt source has meanwhile been re-asserted,
+	 * we mustn't clear it here (or in the top-level interrupt handler).
+	 */
+	return is & (QM_PIRQ_EQCI | QM_PIRQ_EQRI | QM_PIRQ_MRI);
+}
+
+/*
+ * remove some slowish-path stuff from the "fast path" and make sure it isn't
+ * inlined.
+ */
+static noinline void clear_vdqcr(struct qman_portal *p, struct qman_fq *fq)
+{
+	p->vdqcr_owned = NULL;
+	FQLOCK(fq);
+	fq_clear(fq, QMAN_FQ_STATE_VDQCR);
+	FQUNLOCK(fq);
+	wake_up(&affine_queue);
+}
+
+/*
+ * The only states that would conflict with other things if they ran at the
+ * same time on the same cpu are:
+ *
+ *   (i) setting/clearing vdqcr_owned, and
+ *  (ii) clearing the NE (Not Empty) flag.
+ *
+ * Both are safe. Because;
+ *
+ *   (i) this clearing can only occur after qman_set_vdq() has set the
+ *	 vdqcr_owned field (which it does before setting VDQCR), and
+ *	 qman_volatile_dequeue() blocks interrupts and preemption while this is
+ *	 done so that we can't interfere.
+ *  (ii) the NE flag is only cleared after qman_retire_fq() has set it, and as
+ *	 with (i) that API prevents us from interfering until it's safe.
+ *
+ * The good thing is that qman_set_vdq() and qman_retire_fq() run far
+ * less frequently (ie. per-FQ) than __poll_portal_fast() does, so the net
+ * advantage comes from this function not having to "lock" anything at all.
+ *
+ * Note also that the callbacks are invoked at points which are safe against the
+ * above potential conflicts, but that this function itself is not re-entrant
+ * (this is because the function tracks one end of each FIFO in the portal and
+ * we do *not* want to lock that). So the consequence is that it is safe for
+ * user callbacks to call into any QMan API.
+ */
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit)
+{
+	const struct qm_dqrr_entry *dq;
+	struct qman_fq *fq;
+	enum qman_cb_dqrr_result res;
+	unsigned int limit = 0;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+	do {
+		qm_dqrr_pvb_update(&p->p);
+		dq = qm_dqrr_current(&p->p);
+		if (!dq)
+			break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+		/* If running on an LE system the fields of the
+		 * dequeue entry must be swapped. Because the
+		 * QMan HW will ignore writes, the DQRR entry is
+		 * copied and the index stored within the copy.
+		 */
+		shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+		*shadow = *dq;
+		dq = shadow;
+		shadow->fqid = be32_to_cpu(shadow->fqid);
+		shadow->contextB = be32_to_cpu(shadow->contextB);
+		shadow->seqnum = be16_to_cpu(shadow->seqnum);
+		hw_fd_to_cpu(&shadow->fd);
+#endif
+
+		if (dq->stat & QM_DQRR_STAT_UNSCHEDULED) {
+			/*
+			 * VDQCR: don't trust context_b as the FQ may have
+			 * been configured for h/w consumption and we're
+			 * draining it post-retirement.
+			 */
+			fq = p->vdqcr_owned;
+			/*
+			 * We only set QMAN_FQ_STATE_NE when retiring, so we
+			 * only need to check for clearing it when doing
+			 * volatile dequeues.  It's one less thing to check
+			 * in the critical path (SDQCR).
+			 */
+			if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+				fq_clear(fq, QMAN_FQ_STATE_NE);
+			/*
+			 * This is duplicated from the SDQCR code, but we
+			 * have stuff to do before *and* after this callback,
+			 * and we don't want multiple if()s in the critical
+			 * path (SDQCR).
+			 */
+			res = fq->cb.dqrr(p, fq, dq);
+			if (res == qman_cb_dqrr_stop)
+				break;
+			/* Check for VDQCR completion */
+			if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+				clear_vdqcr(p, fq);
+		} else {
+			/* SDQCR: context_b points to the FQ */
+			fq = (void *)(uintptr_t)dq->contextB;
+			/* Now let the callback do its stuff */
+			res = fq->cb.dqrr(p, fq, dq);
+			/*
+			 * The callback can request that we exit without
+			 * consuming this entry or advancing.
+			 */
+			if (res == qman_cb_dqrr_stop)
+				break;
+		}
+		/* Interpret 'dq' from a driver perspective. */
+		/*
+		 * Parking isn't possible unless HELDACTIVE was set. NB,
+		 * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+		 * check for HELDACTIVE to cover both.
+		 */
+		DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+			    (res != qman_cb_dqrr_park));
+		/* just means "skip it, I'll consume it myself later on" */
+		if (res != qman_cb_dqrr_defer)
+			qm_dqrr_cdc_consume_1ptr(&p->p, dq,
+						 res == qman_cb_dqrr_park);
+		/* Move forward */
+		qm_dqrr_next(&p->p);
+		/*
+		 * Entry processed and consumed, increment our counter.  The
+		 * callback can request that we exit after consuming the
+		 * entry, and we also exit if we reach our processing limit,
+		 * so loop back only if neither of these conditions is met.
+		 */
+	} while (++limit < poll_limit && res != qman_cb_dqrr_consume_stop);
+
+	return limit;
+}
+
+u16 qman_affine_channel(int cpu)
+{
+	if (cpu < 0) {
+		struct qman_portal *portal = get_affine_portal();
+
+		cpu = portal->config->cpu;
+	}
+	DPAA_BUG_ON(!CPU_ISSET(cpu, &affine_mask));
+	return affine_channels[cpu];
+}
+
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
+{
+	struct qman_portal *p = get_affine_portal();
+	const struct qm_dqrr_entry *dq;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+
+	qm_dqrr_pvb_update(&p->p);
+	dq = qm_dqrr_current(&p->p);
+	if (!dq)
+		return NULL;
+
+	if (!(dq->stat & QM_DQRR_STAT_FD_VALID)) {
+		/* Invalid DQRR entry - consume it and return NULL to the
+		 * user, as no valid packet was seen.
+		 */
+		qman_dqrr_consume(fq, (struct qm_dqrr_entry *)dq);
+		return NULL;
+	}
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+	*shadow = *dq;
+	dq = shadow;
+	shadow->fqid = be32_to_cpu(shadow->fqid);
+	shadow->contextB = be32_to_cpu(shadow->contextB);
+	shadow->seqnum = be16_to_cpu(shadow->seqnum);
+	hw_fd_to_cpu(&shadow->fd);
+#endif
+
+	if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+		fq_clear(fq, QMAN_FQ_STATE_NE);
+
+	return (struct qm_dqrr_entry *)dq;
+}
+
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+		clear_vdqcr(p, fq);
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, 0);
+	qm_dqrr_next(&p->p);
+}
+
+int qman_poll_dqrr(unsigned int limit)
+{
+	struct qman_portal *p = get_affine_portal();
+	int ret;
+
+	ret = __poll_portal_fast(p, limit);
+	return ret;
+}
+
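+/* An illustrative run-to-completion loop (application code, not part of
+ * this driver; 'quit' is a hypothetical application flag): each lcore with
+ * an affine portal polls its DQRR, and dequeued frames are delivered
+ * through the qman_fq_cb.dqrr callbacks installed at qman_create_fq() time:
+ *
+ *     while (!quit)
+ *             qman_poll_dqrr(16);
+ */
+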
+void qman_poll(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if ((~p->irq_sources) & QM_PIRQ_SLOW) {
+		if (!(p->slowpoll--)) {
+			u32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;
+			u32 active = __poll_portal_slow(p, is);
+
+			if (active) {
+				qm_isr_status_clear(&p->p, active);
+				p->slowpoll = SLOW_POLL_BUSY;
+			} else
+				p->slowpoll = SLOW_POLL_IDLE;
+		}
+	}
+	if ((~p->irq_sources) & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+}
+
+void qman_stop_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qman_stop_dequeues_ex(p);
+}
+
+void qman_start_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	DPAA_ASSERT(p->dqrr_disable_ref > 0);
+	if (!(--p->dqrr_disable_ref))
+		qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
+}
+
+void qman_static_dequeue_add(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr |= pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+void qman_static_dequeue_del(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr &= ~pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+u32 qman_static_dequeue_get(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->sdqcr;
+}
+
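+/* Sketch: to listen on a pool channel, OR its SDQCR pool mask into the
+ * portal; QM_SDQCR_CHANNELS_POOL_CONV() is assumed here to be the
+ * channel-to-mask conversion helper from this codebase's headers:
+ *
+ *     qman_static_dequeue_add(QM_SDQCR_CHANNELS_POOL_CONV(pool_chan));
+ *     // ... and symmetrically on teardown:
+ *     qman_static_dequeue_del(QM_SDQCR_CHANNELS_POOL_CONV(pool_chan));
+ */
+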
+void qman_dca(struct qm_dqrr_entry *dq, int park_request)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);
+}
+
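+/* Sketch of deferred consumption: a dqrr callback on a HELDACTIVE FQ may
+ * return qman_cb_dqrr_defer and later acknowledge the entry itself,
+ * optionally parking the FQ ('dq' being the deferred DQRR entry):
+ *
+ *     qman_dca(dq, 1);   // consume 'dq' and request that the FQ be parked
+ */
+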
+/* Frame queue API */
+static const char *mcr_result_str(u8 result)
+{
+	switch (result) {
+	case QM_MCR_RESULT_NULL:
+		return "QM_MCR_RESULT_NULL";
+	case QM_MCR_RESULT_OK:
+		return "QM_MCR_RESULT_OK";
+	case QM_MCR_RESULT_ERR_FQID:
+		return "QM_MCR_RESULT_ERR_FQID";
+	case QM_MCR_RESULT_ERR_FQSTATE:
+		return "QM_MCR_RESULT_ERR_FQSTATE";
+	case QM_MCR_RESULT_ERR_NOTEMPTY:
+		return "QM_MCR_RESULT_ERR_NOTEMPTY";
+	case QM_MCR_RESULT_PENDING:
+		return "QM_MCR_RESULT_PENDING";
+	case QM_MCR_RESULT_ERR_BADCOMMAND:
+		return "QM_MCR_RESULT_ERR_BADCOMMAND";
+	}
+	return "<unknown MCR result>";
+}
+
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
+{
+	struct qm_fqd fqd;
+	struct qm_mcr_queryfq_np np;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID) {
+		int ret = qman_alloc_fqid(&fqid);
+
+		if (ret)
+			return ret;
+	}
+	spin_lock_init(&fq->fqlock);
+	fq->fqid = fqid;
+	fq->flags = flags;
+	fq->state = qman_fq_state_oos;
+	fq->cgr_groupid = 0;
+
+	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
+		return 0;
+	/* Everything else is AS_IS support */
+	p = get_affine_portal();
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(&fqd);
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ_NP);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ_NP failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	np = mcr->queryfq_np;
+	/* Phew, have queryfq and queryfq_np results, stitch together
+	 * the FQ object from those.
+	 */
+	fq->cgr_groupid = fqd.cgid;
+	switch (np.state & QM_MCR_NP_STATE_MASK) {
+	case QM_MCR_NP_STATE_OOS:
+		break;
+	case QM_MCR_NP_STATE_RETIRED:
+		fq->state = qman_fq_state_retired;
+		if (np.frm_cnt)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		break;
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+		fq->state = qman_fq_state_sched;
+		if (np.state & QM_MCR_NP_STATE_R)
+			fq_set(fq, QMAN_FQ_STATE_CHANGING);
+		break;
+	case QM_MCR_NP_STATE_PARKED:
+		fq->state = qman_fq_state_parked;
+		break;
+	default:
+		DPAA_ASSERT(NULL == "invalid FQ state");
+	}
+	if (fqd.fq_ctrl & QM_FQCTRL_CGE)
+		fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+	return 0;
+err:
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID)
+		qman_release_fqid(fqid);
+	return -EIO;
+}
+
+void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
+{
+	/*
+	 * We don't need to lock the FQ as it is a pre-condition that the FQ be
+	 * quiesced. Instead, run some checks.
+	 */
+	switch (fq->state) {
+	case qman_fq_state_parked:
+		DPAA_ASSERT(flags & QMAN_FQ_DESTROY_PARKED);
+		/* fall through */
+	case qman_fq_state_oos:
+		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
+			qman_release_fqid(fq->fqid);
+
+		return;
+	default:
+		break;
+	}
+	DPAA_ASSERT(NULL == "qman_free_fq() on unquiesced FQ!");
+}
+
+u32 qman_fq_fqid(struct qman_fq *fq)
+{
+	return fq->fqid;
+}
+
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags)
+{
+	if (state)
+		*state = fq->state;
+	if (flags)
+		*flags = fq->flags;
+}
+
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	u8 res, myverb = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		QM_MCC_VERB_INITFQ_SCHED : QM_MCC_VERB_INITFQ_PARKED;
+
+	if ((fq->state != qman_fq_state_oos) &&
+	    (fq->state != qman_fq_state_parked))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	if (opts && (opts->we_mask & QM_INITFQ_WE_OAC)) {
+		/* OAC can't be set at the same time as TDTHRESH */
+		if (opts->we_mask & QM_INITFQ_WE_TDTHRESH)
+			return -EINVAL;
+	}
+	/* Issue an INITFQ_[PARKED|SCHED] management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     ((fq->state != qman_fq_state_oos) &&
+				(fq->state != qman_fq_state_parked)))) {
+		FQUNLOCK(fq);
+		return -EBUSY;
+	}
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initfq = *opts;
+	mcc->initfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->initfq.count = 0;
+	/*
+	 * If the FQ does *not* have the TO_DCPORTAL flag, context_b is set as a
+	 * demux pointer. Otherwise, the caller-provided value is allowed to
+	 * stand, don't overwrite it.
+	 */
+	if (fq_isclear(fq, QMAN_FQ_FLAG_TO_DCPORTAL)) {
+		dma_addr_t phys_fq;
+
+		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+		/*
+		 * Set the physical address; NB, if the user wasn't trying to
+		 * set CONTEXTA, clear the stashing settings.
+		 */
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_CONTEXTA)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			memset(&mcc->initfq.fqd.context_a, 0,
+			       sizeof(mcc->initfq.fqd.context_a));
+		} else {
+			phys_fq = rte_mem_virt2phy(fq);
+			qm_fqd_stashing_set64(&mcc->initfq.fqd, phys_fq);
+		}
+	}
+	if (flags & QMAN_INITFQ_FLAG_LOCAL) {
+		mcc->initfq.fqd.dest.channel = p->config->channel;
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_DESTWQ)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_DESTWQ;
+			mcc->initfq.fqd.dest.wq = 4;
+		}
+	}
+	mcc->initfq.we_mask = cpu_to_be16(mcc->initfq.we_mask);
+	cpu_to_hw_fqd(&mcc->initfq.fqd);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		FQUNLOCK(fq);
+		return -EIO;
+	}
+	if (opts) {
+		if (opts->we_mask & QM_INITFQ_WE_FQCTRL) {
+			if (opts->fqd.fq_ctrl & QM_FQCTRL_CGE)
+				fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+			else
+				fq_clear(fq, QMAN_FQ_STATE_CGR_EN);
+		}
+		if (opts->we_mask & QM_INITFQ_WE_CGID)
+			fq->cgr_groupid = opts->fqd.cgid;
+	}
+	fq->state = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		qman_fq_state_sched : qman_fq_state_parked;
+	FQUNLOCK(fq);
+	return 0;
+}
+
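+/* A minimal bring-up sketch under stated assumptions ('fq' was prepared by
+ * qman_create_fq() with its dqrr callback filled in; QM_FQCTRL_HOLDACTIVE
+ * is used purely as an example write-enable):
+ *
+ *     struct qm_mcc_initfq opts;
+ *
+ *     memset(&opts, 0, sizeof(opts));
+ *     opts.we_mask = QM_INITFQ_WE_FQCTRL;
+ *     opts.fqd.fq_ctrl = QM_FQCTRL_HOLDACTIVE;
+ *     if (qman_init_fq(&fq, QMAN_INITFQ_FLAG_SCHED | QMAN_INITFQ_FLAG_LOCAL,
+ *                      &opts))
+ *             // handle error
+ */
+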
+int qman_schedule_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_parked)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue an ALTERFQ_SCHED management command */
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state != qman_fq_state_parked))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_sched;
+out:
+	FQUNLOCK(fq);
+
+	return ret;
+}
+
+int qman_retire_fq(struct qman_fq *fq, u32 *flags)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int rval;
+	u8 res;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_sched))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_retired) ||
+				(fq->state == qman_fq_state_oos))) {
+		rval = -EBUSY;
+		goto out;
+	}
+	rval = table_push_fq(p, fq);
+	if (rval)
+		goto out;
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_RETIRE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_RETIRE);
+	res = mcr->result;
+	/*
+	 * "Elegant" would be to treat OK/PENDING the same way; set CHANGING,
+	 * and defer the flags until FQRNI or FQRN (respectively) show up. But
+	 * "Friendly" is to process OK immediately, and not set CHANGING. We do
+	 * friendly, otherwise the caller doesn't necessarily have a fully
+	 * "retired" FQ on return even if the retirement was immediate. However
+	 * this does mean some code duplication between here and
+	 * fq_state_change().
+	 */
+	if (likely(res == QM_MCR_RESULT_OK)) {
+		rval = 0;
+		/* Process 'fq' right away, we'll ignore FQRNI */
+		if (mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		if (flags)
+			*flags = fq->flags;
+		fq->state = qman_fq_state_retired;
+		if (fq->cb.fqs) {
+			/*
+			 * Another issue with supporting "immediate" retirement
+			 * is that we're forced to drop FQRNIs, because by the
+			 * time they're seen it may already be "too late" (the
+			 * fq may have been OOS'd and free()'d already). But if
+			 * the upper layer wants a callback whether it's
+			 * immediate or not, we have to fake a "MR" entry to
+			 * look like an FQRNI...
+			 */
+			struct qm_mr_entry msg;
+
+			msg.verb = QM_MR_VERB_FQRNI;
+			msg.fq.fqs = mcr->alterfq.fqs;
+			msg.fq.fqid = fq->fqid;
+			msg.fq.contextB = (u32)(uintptr_t)fq;
+			fq->cb.fqs(p, fq, &msg);
+		}
+	} else if (res == QM_MCR_RESULT_PENDING) {
+		rval = 1;
+		fq_set(fq, QMAN_FQ_STATE_CHANGING);
+	} else {
+		rval = -EIO;
+		table_del_fq(p, fq);
+	}
+out:
+	FQUNLOCK(fq);
+	return rval;
+}
+
+int qman_oos_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_retired)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_BLOCKOOS)) ||
+		     (fq->state != qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_OOS);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_OOS);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_oos;
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
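+/* Stitching the above together, a hedged teardown sketch (error handling
+ * elided; 'fq' was previously scheduled):
+ *
+ *     u32 flags;
+ *     int ret = qman_retire_fq(&fq, &flags);
+ *     // ret == 1 means retirement is pending, completed by an FQRN message
+ *     if (flags & QMAN_FQ_STATE_NE)
+ *             // drain remaining frames, e.g. with a volatile dequeue
+ *     qman_oos_fq(&fq);
+ *     qman_destroy_fq(&fq, 0);
+ */
+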
+int qman_fq_flow_control(struct qman_fq *fq, int xon)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+	u8 myverb;
+
+	if ((fq->state == qman_fq_state_oos) ||
+	    (fq->state == qman_fq_state_retired) ||
+		(fq->state == qman_fq_state_parked))
+		return -EINVAL;
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue an ALTER_FQXON or ALTER_FQXOFF management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_parked) ||
+			(fq->state == qman_fq_state_oos) ||
+			(fq->state == qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->alterfq.count = 0;
+	myverb = xon ? QM_MCC_VERB_ALTER_FQXON : QM_MCC_VERB_ALTER_FQXOFF;
+
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	*fqd = mcr->queryfq.fqd;
+	/* only byteswap a successfully returned descriptor */
+	hw_fqd_to_cpu(fqd);
+	return 0;
+}
+
+int qman_query_fq_has_pkts(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	int ret = 0;
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		ret = !!mcr->queryfq_np.frm_cnt;
+	return ret;
+}
+
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		*np = mcr->queryfq_np;
+		np->fqd_link = be24_to_cpu(np->fqd_link);
+		np->odp_seq = be16_to_cpu(np->odp_seq);
+		np->orp_nesn = be16_to_cpu(np->orp_nesn);
+		np->orp_ea_hseq  = be16_to_cpu(np->orp_ea_hseq);
+		np->orp_ea_tseq  = be16_to_cpu(np->orp_ea_tseq);
+		np->orp_ea_hptr = be24_to_cpu(np->orp_ea_hptr);
+		np->orp_ea_tptr = be24_to_cpu(np->orp_ea_tptr);
+		np->pfdr_hptr = be24_to_cpu(np->pfdr_hptr);
+		np->pfdr_tptr = be24_to_cpu(np->pfdr_tptr);
+		np->ics_surp = be16_to_cpu(np->ics_surp);
+		np->byte_cnt = be32_to_cpu(np->byte_cnt);
+		np->frm_cnt = be24_to_cpu(np->frm_cnt);
+		np->ra1_sfdr = be16_to_cpu(np->ra1_sfdr);
+		np->ra2_sfdr = be16_to_cpu(np->ra2_sfdr);
+		np->od1_sfdr = be16_to_cpu(np->od1_sfdr);
+		np->od2_sfdr = be16_to_cpu(np->od2_sfdr);
+		np->od3_sfdr = be16_to_cpu(np->od3_sfdr);
+	}
+	if (res == QM_MCR_RESULT_ERR_FQID)
+		return -ERANGE;
+	else if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	return 0;
+}
+
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res, myverb;
+
+	myverb = (query_dedicated) ? QM_MCR_VERB_QUERYWQ_DEDICATED :
+				 QM_MCR_VERB_QUERYWQ;
+	mcc = qm_mc_start(&p->p);
+	mcc->querywq.channel.id = cpu_to_be16(wq->channel.id);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		int i, array_len;
+
+		wq->channel.id = be16_to_cpu(mcr->querywq.channel.id);
+		array_len = ARRAY_SIZE(mcr->querywq.wq_len);
+		for (i = 0; i < array_len; i++)
+			wq->wq_len[i] = be32_to_cpu(mcr->querywq.wq_len[i]);
+	}
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERYWQ failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->cgrtestwrite.cgid = cgr->cgrid;
+	mcc->cgrtestwrite.i_bcnt_hi = (u8)(i_bcnt >> 32);
+	mcc->cgrtestwrite.i_bcnt_lo = (u32)i_bcnt;
+	qm_mc_commit(&p->p, QM_MCC_VERB_CGRTESTWRITE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_CGRTESTWRITE);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*result = mcr->cgrtestwrite;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("CGR TEST WRITE failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->querycgr.cgid = cgr->cgrid;
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCGR);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYCGR);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*cgrd = mcr->querycgr;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CGR failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	cgrd->cgr.wr_parm_g.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_g.word);
+	cgrd->cgr.wr_parm_y.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_y.word);
+	cgrd->cgr.wr_parm_r.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_r.word);
+	cgrd->cgr.cscn_targ =  be32_to_cpu(cgrd->cgr.cscn_targ);
+	cgrd->cgr.__cs_thres = be16_to_cpu(cgrd->cgr.__cs_thres);
+	for (i = 0; i < ARRAY_SIZE(cgrd->cscn_targ_swp); i++)
+		cgrd->cscn_targ_swp[i] =
+			be32_to_cpu(cgrd->cscn_targ_swp[i]);
+	return 0;
+}
+
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion)
+{
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	qm_mc_start(&p->p);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			QM_MCC_VERB_QUERYCONGESTION);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*congestion = mcr->querycongestion;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CONGESTION failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	for (i = 0; i < ARRAY_SIZE(congestion->state.state); i++)
+		congestion->state.state[i] =
+			be32_to_cpu(congestion->state.state[i]);
+	return 0;
+}
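+
+/*
+ * Usage sketch, assuming CGRIDs are packed 32 per state word with the MSB
+ * of word 0 corresponding to CGRID 0: snapshot the congestion state and
+ * report any group of CGRs with congestion asserted.
+ *
+ *	struct qm_mcr_querycongestion cong;
+ *	unsigned int w;
+ *
+ *	if (!qman_query_congestion(&cong))
+ *		for (w = 0; w < ARRAY_SIZE(cong.state.state); w++)
+ *			if (cong.state.state[w])
+ *				pr_info("CGRs %u..%u: congestion present\n",
+ *					w * 32, w * 32 + 31);
+ */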
+
+int qman_set_vdq(struct qman_fq *fq, u16 num)
+{
+	struct qman_portal *p = get_affine_portal();
+	uint32_t vdqcr;
+	int ret = -EBUSY;
+
+	vdqcr = QM_VDQCR_EXACT;
+	vdqcr |= QM_VDQCR_NUMFRAMES_SET(num);
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired)) {
+		ret = -EINVAL;
+		goto out;
+	}
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+		ret = -EBUSY;
+		goto out;
+	}
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (!ret)
+		qm_dqrr_vdqcr_set(&p->p, vdqcr);
+
+out:
+	return ret;
+}
+
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
+			  u32 vdqcr)
+{
+	struct qman_portal *p;
+	int ret = -EBUSY;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired))
+		return -EINVAL;
+	if (vdqcr & QM_VDQCR_FQID_MASK)
+		return -EINVAL;
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+		return -EBUSY;
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	p = get_affine_portal();
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+			FQUNLOCK(fq);
+			goto escape;
+		}
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (ret)
+		return ret;
+
+	/* VDQCR is set */
+	qm_dqrr_vdqcr_set(&p->p, vdqcr);
+	return 0;
+}
+
+static noinline void update_eqcr_ci(struct qman_portal *p, u8 avail)
+{
+	if (avail)
+		qm_eqcr_cce_prefetch(&p->p);
+	else
+		qm_eqcr_cce_update(&p->p);
+}
+
+int qman_eqcr_is_empty(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	u8 avail;
+
+	update_eqcr_ci(p, 0);
+	avail = qm_eqcr_get_fill(&p->p);
+	return (avail == 0);
+}
+
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine)
+{
+	if (affine) {
+		struct qman_portal *p = get_affine_portal();
+
+		p->cb_dc_ern = handler;
+	} else
+		cb_dc_ern = handler;
+}
+
+static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
+					struct qman_fq *fq,
+					const struct qm_fd *fd,
+					u32 flags)
+{
+	struct qm_eqcr_entry *eq;
+	u8 avail;
+
+	if (p->use_eqcr_ci_stashing) {
+		/*
+		 * The stashing case is easy, only update if we need to in
+		 * order to try and liberate ring entries.
+		 */
+		eq = qm_eqcr_start_stash(&p->p);
+	} else {
+		/*
+		 * The non-stashing case is harder, need to prefetch ahead of
+		 * time.
+		 */
+		avail = qm_eqcr_get_avail(&p->p);
+		if (avail < 2)
+			update_eqcr_ci(p, avail);
+		eq = qm_eqcr_start_no_stash(&p->p);
+	}
+
+	if (unlikely(!eq))
+		return NULL;
+
+	if (flags & QMAN_ENQUEUE_FLAG_DCA)
+		eq->dca = QM_EQCR_DCA_ENABLE |
+			((flags & QMAN_ENQUEUE_FLAG_DCA_PARK) ?
+					QM_EQCR_DCA_PARK : 0) |
+			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
+	eq->fqid = cpu_to_be32(fq->fqid);
+	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+	eq->fd = *fd;
+	cpu_to_hw_fd(&eq->fd);
+	return eq;
+}
+
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_CMD_ENQUEUE |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+	/* Factor the below out, it's used from qman_enqueue_orp() too */
+	return 0;
+}
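+
+/*
+ * A minimal caller-side sketch (assuming 'fq' is schedulable and 'fd' holds
+ * a prepared frame descriptor): qman_enqueue() returns -EBUSY when the EQCR
+ * is full, so the simplest policy is to retry.
+ *
+ *	while (qman_enqueue(fq, &fd, 0) == -EBUSY)
+ *		cpu_relax();
+ */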
+
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		int frames_to_send)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_portal *portal = &p->p;
+
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+
+	u8 i, diff, old_ci, sent = 0;
+
+	/* Update the available entries if no entry is free */
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return 0;
+	}
+
+	/* try to send as many frames as possible */
+	while (eqcr->available && frames_to_send--) {
+		eq->fqid = cpu_to_be32(fq->fqid);
+		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+		eq->fd.opaque_addr = fd->opaque_addr;
+		eq->fd.addr = cpu_to_be40(fd->addr);
+		eq->fd.status = cpu_to_be32(fd->status);
+		eq->fd.opaque = cpu_to_be32(fd->opaque);
+
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		eqcr->available--;
+		sent++;
+		fd++;
+	}
+	lwsync();
+
+	/* In order for flushes to complete faster, all lines are recorded in
+	 * a 32-bit word.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		eq->__dont_write_directly__verb =
+			QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
+		prev_eq = eq;
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		if (unlikely((prev_eq + 1) != eq))
+			eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+	}
+
+	/* We need to flush all the lines, but without load/store operations
+	 * between them.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		dcbf(eq);
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+	}
+	/* Update cursor for the next call */
+	eqcr->cursor = eq;
+	return sent;
+}
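+
+/*
+ * Usage sketch, assuming 'fds' is an array of 'n' prepared frame
+ * descriptors: the return value is the number actually placed on the ring,
+ * so loop until everything has been sent.
+ *
+ *	int sent = 0;
+ *
+ *	while (sent < n)
+ *		sent += qman_enqueue_multi(fq, &fds[sent], n - sent);
+ */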
+
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum)
+{
+	struct qman_portal *p  = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Process ORP-specifics here */
+	if (flags & QMAN_ENQUEUE_FLAG_NLIS)
+		orp_seqnum |= QM_EQCR_SEQNUM_NLIS;
+	else {
+		orp_seqnum &= ~QM_EQCR_SEQNUM_NLIS;
+		if (flags & QMAN_ENQUEUE_FLAG_NESN)
+			orp_seqnum |= QM_EQCR_SEQNUM_NESN;
+		else
+			/* No need to check for QMAN_ENQUEUE_FLAG_HOLE */
+			orp_seqnum &= ~QM_EQCR_SEQNUM_NESN;
+	}
+	eq->seqnum = cpu_to_be16(orp_seqnum);
+	eq->orp = cpu_to_be32(orp->fqid);
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_ORP |
+		((flags & (QMAN_ENQUEUE_FLAG_HOLE | QMAN_ENQUEUE_FLAG_NESN)) ?
+				0 : QM_EQCR_VERB_CMD_ENQUEUE) |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+
+	return 0;
+}
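+
+/*
+ * Order-restoration sketch (names are illustrative): 'orp_fq' is the FQ
+ * acting as the ORP and 'seqnum' is the sequence number captured when the
+ * frame was dequeued. A frame that was dropped rather than forwarded would
+ * instead be signalled with QMAN_ENQUEUE_FLAG_HOLE to unblock the sequence
+ * window.
+ *
+ *	while (qman_enqueue_orp(tx_fq, &fd, 0, orp_fq, seqnum) == -EBUSY)
+ *		cpu_relax();
+ */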
+
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+	u8 verb = QM_MCC_VERB_MODIFYCGR;
+
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initcgr = *opts;
+	mcc->initcgr.we_mask = cpu_to_be16(mcc->initcgr.we_mask);
+	mcc->initcgr.cgr.wr_parm_g.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_g.word);
+	mcc->initcgr.cgr.wr_parm_y.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_y.word);
+	mcc->initcgr.cgr.wr_parm_r.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_r.word);
+	mcc->initcgr.cgr.cscn_targ =  cpu_to_be32(mcc->initcgr.cgr.cscn_targ);
+	mcc->initcgr.cgr.__cs_thres = cpu_to_be16(mcc->initcgr.cgr.__cs_thres);
+
+	mcc->initcgr.cgid = cgr->cgrid;
+	if (flags & QMAN_CGR_FLAG_USE_INIT)
+		verb = QM_MCC_VERB_INITCGR;
+	qm_mc_commit(&p->p, verb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == verb);
+	res = mcr->result;
+	return (res == QM_MCR_RESULT_OK) ? 0 : -EIO;
+}
+
+#define TARG_MASK(n) (0x80000000 >> (n->config->channel - \
+					QM_CHANNEL_SWPORTAL0))
+#define TARG_DCP_MASK(n) (0x80000000 >> (10 + n))
+#define PORTAL_IDX(n) (n->config->channel - QM_CHANNEL_SWPORTAL0)
+
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret;
+	struct qman_portal *p;
+
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	p = get_affine_portal();
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	cgr->chan = p->config->channel;
+	spin_lock(&p->cgr_lock);
+
+	/* if no opts specified, just add it to the list */
+	if (!opts)
+		goto add_list;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		goto release_lock;
+	if (opts)
+		local_opts = *opts;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+			QM_CGR_TARG_UDP_CTRL_WRITE_BIT | PORTAL_IDX(p);
+	else
+		/* Overwrite TARG */
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+							TARG_MASK(p);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT, &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		goto release_lock;
+add_list:
+	list_add(&cgr->node, &p->cgr_cbs);
+
+	/* Determine if the newly added object requires its callback to be
+	 * called
+	 */
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret) {
+		/* we can't go back, so proceed and return success, but log
+		 * the partial failure.
+		 */
+		pr_crit("CGR HW state partially modified\n");
+		ret = 0;
+		goto release_lock;
+	}
+	if (cgr->cb && cgr_state.cgr.cscn_en && qman_cgrs_get(&p->cgrs[1],
+							      cgr->cgrid))
+		cgr->cb(p, cgr, 1);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+	return ret;
+}
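+
+/*
+ * A minimal sketch of CGR setup on the current portal; 'cscn_cb' stands in
+ * for the caller's congestion-state callback. Passing NULL opts registers
+ * for callbacks without (re)initialising the CGR in hardware.
+ *
+ *	struct qman_cgr cgr = {
+ *		.cgrid = 5,
+ *		.cb = cscn_cb,
+ *	};
+ *
+ *	ret = qman_create_cgr(&cgr, 0, NULL);
+ *	...
+ *	ret = qman_delete_cgr(&cgr);
+ */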
+
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcc_initcgr local_opts;
+	struct qm_mcr_querycgr cgr_state;
+	int ret;
+
+	if ((qman_ip_rev & 0xFF00) < QMAN_REV30) {
+		pr_warn("QMan version doesn't support CSCN => DCP portal\n");
+		return -EINVAL;
+	}
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		return ret;
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	if (opts)
+		local_opts = *opts;
+
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+				QM_CGR_TARG_UDP_CTRL_WRITE_BIT |
+				QM_CGR_TARG_UDP_CTRL_DCP | dcp_portal;
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+					TARG_DCP_MASK(dcp_portal);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT,
+				      &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+
+	return ret;
+}
+
+int qman_delete_cgr(struct qman_cgr *cgr)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret = 0;
+	struct qman_cgr *i;
+	struct qman_portal *p = get_affine_portal();
+
+	if (cgr->chan != p->config->channel) {
+		pr_crit("Attempting to delete cgr from a different portal than"
+			" it was created on: create 0x%x, delete 0x%x\n",
+			cgr->chan, p->config->channel);
+		ret = -EINVAL;
+		goto put_portal;
+	}
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	spin_lock(&p->cgr_lock);
+	list_del(&cgr->node);
+	/*
+	 * If there are no other CGR objects for this CGRID in the list,
+	 * update CSCN_TARG accordingly
+	 */
+	list_for_each_entry(i, &p->cgr_cbs, node)
+		if ((i->cgrid == cgr->cgrid) && i->cb)
+			goto release_lock;
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)  {
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+		goto release_lock;
+	}
+	/* Overwrite TARG */
+	local_opts.we_mask = QM_CGR_WE_CSCN_TARG;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl = PORTAL_IDX(p);
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ &
+							 ~(TARG_MASK(p));
+	ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+put_portal:
+	return ret;
+}
+
+int qman_shutdown_fq(u32 fqid)
+{
+	struct qman_portal *p;
+	struct qm_portal *low_p;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	u8 state;
+	int orl_empty, fq_empty, drain = 0;
+	u32 result;
+	u32 channel, wq;
+	u16 dest_wq;
+
+	p = get_affine_portal();
+	low_p = &p->p;
+
+	/* Determine the state of the FQID */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	state = mcr->queryfq_np.state & QM_MCR_NP_STATE_MASK;
+	if (state == QM_MCR_NP_STATE_OOS)
+		return 0; /* Already OOS, no need to do any more checks */
+
+	/* Query which channel the FQ is using */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+
+	/* Need to store these since the MCR gets reused */
+	dest_wq = be16_to_cpu(mcr->queryfq.fqd.dest_wq);
+	channel = dest_wq & 0x7;
+	wq = dest_wq >> 3;
+
+	switch (state) {
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+	case QM_MCR_NP_STATE_PARKED:
+		orl_empty = 0;
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_RETIRE);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_RETIRE);
+		result = mcr->result; /* Make a copy as we reuse MCR below */
+
+		if (result == QM_MCR_RESULT_PENDING) {
+			/* Need to wait for the FQRN in the message ring, which
+			 * will only occur once the FQ has been drained. In
+			 * order for the FQ to drain, the portal needs to be
+			 * set to dequeue from the channel the FQ is scheduled
+			 * on.
+			 */
+			const struct qm_mr_entry *msg;
+			const struct qm_dqrr_entry *dqrr = NULL;
+			int found_fqrn = 0;
+			__maybe_unused u16 dequeue_wq = 0;
+
+			/* Flag that we need to drain FQ */
+			drain = 1;
+
+			if (channel >= qm_channel_pool1 &&
+			    channel < (u16)(qm_channel_pool1 + 15)) {
+				/* Pool channel, enable the bit in the portal */
+				dequeue_wq = (channel -
+					      qm_channel_pool1 + 1) << 4 | wq;
+			} else if (channel < qm_channel_pool1) {
+				/* Dedicated channel */
+				dequeue_wq = wq;
+			} else {
+				pr_info("Cannot recover FQ 0x%x, it is"
+					" scheduled on channel 0x%x\n",
+					fqid, channel);
+				return -EBUSY;
+			}
+			/* Set the sdqcr to drain this channel */
+			if (channel < qm_channel_pool1)
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+					  QM_SDQCR_CHANNELS_DEDICATED);
+			else
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+						  QM_SDQCR_CHANNELS_POOL_CONV
+						  (channel));
+			while (!found_fqrn) {
+				/* Keep draining DQRR while checking the MR */
+				qm_dqrr_pvb_update(low_p);
+				dqrr = qm_dqrr_current(low_p);
+				while (dqrr) {
+					qm_dqrr_cdc_consume_1ptr(
+						low_p, dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+				/* Process message ring too */
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+				while (msg) {
+					if ((msg->verb &
+					     QM_MR_VERB_TYPE_MASK)
+					    == QM_MR_VERB_FQRN)
+						found_fqrn = 1;
+					qm_mr_next(low_p);
+					qm_mr_cci_consume_to_current(low_p);
+					qm_mr_pvb_update(low_p);
+					msg = qm_mr_current(low_p);
+				}
+				cpu_relax();
+			}
+		}
+		if (result != QM_MCR_RESULT_OK &&
+		    result !=  QM_MCR_RESULT_PENDING) {
+			/* error */
+			pr_err("qman_retire_fq failed on FQ 0x%x,"
+			       " result=0x%x\n", fqid, result);
+			return -1;
+		}
+		if (!(mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)) {
+			/* ORL had no entries, no need to wait until the
+			 * ERNs come in.
+			 */
+			orl_empty = 1;
+		}
+		/* Retirement succeeded, check to see if FQ needs
+		 * to be drained.
+		 */
+		if (drain || mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY) {
+			/* FQ is Not Empty, drain using volatile DQ commands */
+			fq_empty = 0;
+			do {
+				const struct qm_dqrr_entry *dqrr = NULL;
+				u32 vdqcr = fqid | QM_VDQCR_NUMFRAMES_SET(3);
+
+				qm_dqrr_vdqcr_set(low_p, vdqcr);
+
+				/* Wait for a dequeue to occur */
+				while (dqrr == NULL) {
+					qm_dqrr_pvb_update(low_p);
+					dqrr = qm_dqrr_current(low_p);
+					if (!dqrr)
+						cpu_relax();
+				}
+				/* Process the dequeues, making sure to
+				 * empty the ring completely.
+				 */
+				while (dqrr) {
+					if (dqrr->fqid == fqid &&
+					    dqrr->stat & QM_DQRR_STAT_FQ_EMPTY)
+						fq_empty = 1;
+					qm_dqrr_cdc_consume_1ptr(low_p,
+								 dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+			} while (fq_empty == 0);
+		}
+		qm_dqrr_sdqcr_set(low_p, 0);
+
+		/* Wait for the ORL to have been completely drained */
+		while (orl_empty == 0) {
+			const struct qm_mr_entry *msg;
+
+			qm_mr_pvb_update(low_p);
+			msg = qm_mr_current(low_p);
+			while (msg) {
+				if ((msg->verb & QM_MR_VERB_TYPE_MASK) ==
+				    QM_MR_VERB_FQRL)
+					orl_empty = 1;
+				qm_mr_next(low_p);
+				qm_mr_cci_consume_to_current(low_p);
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+			}
+			cpu_relax();
+		}
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result != QM_MCR_RESULT_OK) {
+			pr_err(
+			"OOS after drain failed on FQID 0x%x, result 0x%x\n",
+			       fqid, mcr->result);
+			return -1;
+		}
+		return 0;
+
+	case QM_MCR_NP_STATE_RETIRED:
+		/* Send OOS Command */
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result) {
+			pr_err("OOS Failed on FQID 0x%x\n", fqid);
+			return -1;
+		}
+		return 0;
+
+	}
+	return -1;
+}
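+
+/*
+ * Recovery sketch: force a frame queue left behind by a previous run back
+ * to OOS, assuming its FQID is known and no other core is still using it.
+ *
+ *	if (qman_shutdown_fq(fqid))
+ *		pr_err("could not recover FQ 0x%x\n", fqid);
+ */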
diff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h
new file mode 100644
index 0000000..7c645f4
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.h
@@ -0,0 +1,888 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman_priv.h"
+
+/***************************/
+/* Portal register assists */
+/***************************/
+#define QM_REG_EQCR_PI_CINH	0x3000
+#define QM_REG_EQCR_CI_CINH	0x3040
+#define QM_REG_EQCR_ITR		0x3080
+#define QM_REG_DQRR_PI_CINH	0x3100
+#define QM_REG_DQRR_CI_CINH	0x3140
+#define QM_REG_DQRR_ITR		0x3180
+#define QM_REG_DQRR_DCAP	0x31C0
+#define QM_REG_DQRR_SDQCR	0x3200
+#define QM_REG_DQRR_VDQCR	0x3240
+#define QM_REG_DQRR_PDQCR	0x3280
+#define QM_REG_MR_PI_CINH	0x3300
+#define QM_REG_MR_CI_CINH	0x3340
+#define QM_REG_MR_ITR		0x3380
+#define QM_REG_CFG		0x3500
+#define QM_REG_ISR		0x3600
+#define QM_REG_IIR		0x36C0
+#define QM_REG_ITPR		0x3740
+
+/* Cache-enabled register offsets */
+#define QM_CL_EQCR		0x0000
+#define QM_CL_DQRR		0x1000
+#define QM_CL_MR		0x2000
+#define QM_CL_EQCR_PI_CENA	0x3000
+#define QM_CL_EQCR_CI_CENA	0x3040
+#define QM_CL_DQRR_PI_CENA	0x3100
+#define QM_CL_DQRR_CI_CENA	0x3140
+#define QM_CL_MR_PI_CENA	0x3300
+#define QM_CL_MR_CI_CENA	0x3340
+#define QM_CL_CR		0x3800
+#define QM_CL_RR0		0x3900
+#define QM_CL_RR1		0x3940
+
+/* Note: the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrades performance. Hence the use of the __raw_*() interfaces, which
+ * simply ensure that the compiler treats the portal registers as volatile (ie.
+ * non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __qm_in(qm, o)		be32_to_cpu(__raw_readl((qm)->ci  + (o)))
+#define __qm_out(qm, o, val)	__raw_writel((cpu_to_be32(val)), \
+					     (qm)->ci + (o))
+#define qm_in(reg)		__qm_in(&portal->addr, QM_REG_##reg)
+#define qm_out(reg, val)	__qm_out(&portal->addr, QM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __qm_cl_touch_ro(qm, o) dcbt_ro((qm)->ce + (o))
+#define __qm_cl_touch_rw(qm, o) dcbt_rw((qm)->ce + (o))
+#define __qm_cl_in(qm, o)	be32_to_cpu(__raw_readl((qm)->ce + (o)))
+#define __qm_cl_out(qm, o, val) \
+	do { \
+		u32 *__tmpclout = (qm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __qm_cl_invalidate(qm, o) dccivac((qm)->ce + (o))
+#define qm_cl_touch_ro(reg) __qm_cl_touch_ro(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_touch_rw(reg) __qm_cl_touch_rw(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_in(reg)	    __qm_cl_in(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_out(reg, val) __qm_cl_out(&portal->addr, QM_CL_##reg##_CENA, val)
+#define qm_cl_invalidate(reg)\
+	__qm_cl_invalidate(&portal->addr, QM_CL_##reg##_CENA)
+
+/* Cache-enabled ring access */
+#define qm_cl(base, idx)	((void *)(base) + ((idx) << 6))
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 qm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
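+
+/* For example, with an 8-entry ring: qm_cyc_diff(8, 2, 6) = 6 - 2 = 4,
+ * while the wrapped case qm_cyc_diff(8, 6, 2) = 8 + 2 - 6 = 4.
+ */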
+
+/* Portal modes.
+ *   Enum types:
+ *     pmode == production mode
+ *     cmode == consumption mode,
+ *     dmode == h/w dequeue mode.
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate:
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ *     dc == DCA (Discrete Consumption Acknowledgment), DQRR-only
+ *   As for "enum qm_dqrr_dmode", it should be self-explanatory.
+ */
+enum qm_eqcr_pmode {		/* matches QCSP_CFG::EPM */
+	qm_eqcr_pci = 0,	/* PI index, cache-inhibited */
+	qm_eqcr_pce = 1,	/* PI index, cache-enabled */
+	qm_eqcr_pvb = 2		/* valid-bit */
+};
+
+enum qm_dqrr_dmode {		/* matches QCSP_CFG::DP */
+	qm_dqrr_dpush = 0,	/* SDQCR  + VDQCR */
+	qm_dqrr_dpull = 1	/* PDQCR */
+};
+
+enum qm_dqrr_pmode {		/* s/w-only */
+	qm_dqrr_pci,		/* reads DQRR_PI_CINH */
+	qm_dqrr_pce,		/* reads DQRR_PI_CENA */
+	qm_dqrr_pvb		/* reads valid-bit */
+};
+
+enum qm_dqrr_cmode {		/* matches QCSP_CFG::DCM */
+	qm_dqrr_cci = 0,	/* CI index, cache-inhibited */
+	qm_dqrr_cce = 1,	/* CI index, cache-enabled */
+	qm_dqrr_cdc = 2		/* Discrete Consumption Acknowledgment */
+};
+
+enum qm_mr_pmode {		/* s/w-only */
+	qm_mr_pci,		/* reads MR_PI_CINH */
+	qm_mr_pce,		/* reads MR_PI_CENA */
+	qm_mr_pvb		/* reads valid-bit */
+};
+
+enum qm_mr_cmode {		/* matches QCSP_CFG::MM */
+	qm_mr_cci = 0,		/* CI index, cache-inhibited */
+	qm_mr_cce = 1		/* CI index, cache-enabled */
+};
+
+/* ------------------------- */
+/* --- Portal structures --- */
+
+#define QM_EQCR_SIZE		8
+#define QM_DQRR_SIZE		16
+#define QM_MR_SIZE		8
+
+struct qm_eqcr {
+	struct qm_eqcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	u32 busy;
+	enum qm_eqcr_pmode pmode;
+#endif
+};
+
+struct qm_dqrr {
+	const struct qm_dqrr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	enum qm_dqrr_dmode dmode;
+	enum qm_dqrr_pmode pmode;
+	enum qm_dqrr_cmode cmode;
+#endif
+};
+
+struct qm_mr {
+	const struct qm_mr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	enum qm_mr_pmode pmode;
+	enum qm_mr_cmode cmode;
+#endif
+};
+
+struct qm_mc {
+	struct qm_mc_command *cr;
+	struct qm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	enum {
+		/* Can be _mc_start()ed */
+		qman_mc_idle,
+		/* Can be _mc_commit()ed or _mc_abort()ed */
+		qman_mc_user,
+		/* Can only be _mc_retry()ed */
+		qman_mc_hw
+	} state;
+#endif
+};
+
+#define QM_PORTAL_ALIGNMENT ____cacheline_aligned
+
+struct qm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct qm_portal {
+	struct qm_addr addr;
+	struct qm_eqcr eqcr;
+	struct qm_dqrr dqrr;
+	struct qm_mr mr;
+	struct qm_mc mc;
+} QM_PORTAL_ALIGNMENT;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define EQCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_EQCR_SIZE << 6)))
+
+extern dma_addr_t rte_mem_virt2phy(const void *addr);
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 EQCR_PTR2IDX(struct qm_eqcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_EQCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void EQCR_INC(struct qm_eqcr *eqcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates fast
+	 * code with essentially no branching overheads. We increment to the
+	 * next EQCR pointer and handle overflow and 'vbit'.
+	 */
+	struct qm_eqcr_entry *partial = eqcr->cursor + 1;
+
+	eqcr->cursor = EQCR_CARRYCLEAR(partial);
+	if (partial != eqcr->cursor)
+		eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_no_stash(struct qm_portal
+								 *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available)
+		return NULL;
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 1;
+#endif
+
+	return eqcr->cursor;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_stash(struct qm_portal
+								*portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return NULL;
+	}
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 1;
+#endif
+	return eqcr->cursor;
+}
+
+static inline void qm_eqcr_abort(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 0;
+#endif
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_pend_and_next(
+					struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+	DPAA_ASSERT(eqcr->pmode != qm_eqcr_pvb);
+	if (eqcr->available == 1)
+		return NULL;
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcr->cursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	return eqcr->cursor;
+}
+
+#define EQCR_COMMIT_CHECKS(eqcr) \
+do { \
+	DPAA_ASSERT(eqcr->busy); \
+	DPAA_ASSERT(eqcr->cursor->orp == (eqcr->cursor->orp & 0x00ffffff)); \
+	DPAA_ASSERT(eqcr->cursor->fqid == (eqcr->cursor->fqid & 0x00ffffff)); \
+} while (0)
+
+static inline void qm_eqcr_pci_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pci);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	hwsync();
+	qm_out(EQCR_PI_CINH, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	qm_cl_invalidate(EQCR_PI);
+	qm_cl_touch_rw(EQCR_PI);
+}
+
+static inline void qm_eqcr_pce_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	lwsync();
+	qm_cl_out(EQCR_PI, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pvb_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eqcursor;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pvb);
+	lwsync();
+	eqcursor = eqcr->cursor;
+	eqcursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	eqcr->busy = 0;
+#endif
+}
+
+static inline u8 qm_eqcr_cci_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline void qm_eqcr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	qm_cl_touch_ro(EQCR_CI);
+}
+
+static inline u8 qm_eqcr_cce_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline u8 qm_eqcr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->ithresh;
+}
+
+static inline void qm_eqcr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	eqcr->ithresh = ithresh;
+	qm_out(EQCR_ITR, ithresh);
+}
+
+static inline u8 qm_eqcr_get_avail(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->available;
+}
+
+static inline u8 qm_eqcr_get_fill(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return QM_EQCR_SIZE - 1 - eqcr->available;
+}
+
+#define DQRR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_DQRR_SIZE << 6)))
+
+static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);
+}
+
+static inline const struct qm_dqrr_entry *DQRR_INC(
+						const struct qm_dqrr_entry *e)
+{
+	return DQRR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_dqrr_set_maxfill(struct qm_portal *portal, u8 mf)
+{
+	qm_out(CFG, (qm_in(CFG) & 0xff0fffff) |
+		((mf & (QM_DQRR_SIZE - 1)) << 20));
+}
+
+static inline const struct qm_dqrr_entry *qm_dqrr_current(
+						struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	if (!dqrr->fill)
+		return NULL;
+	return dqrr->cursor;
+}
+
+static inline u8 qm_dqrr_cursor(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return DQRR_PTR2IDX(dqrr->cursor);
+}
+
+static inline u8 qm_dqrr_next(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->fill);
+	dqrr->cursor = DQRR_INC(dqrr->cursor);
+	return --dqrr->fill;
+}
+
+static inline u8 qm_dqrr_pci_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pci);
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	qm_cl_invalidate(DQRR_PI);
+	qm_cl_touch_ro(DQRR_PI);
+}
+
+static inline u8 qm_dqrr_pce_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	dqrr->pi = qm_cl_in(DQRR_PI) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	const struct qm_dqrr_entry *res = qm_cl(dqrr->ring, dqrr->pi);
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
+		dqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1);
+		if (!dqrr->pi)
+			dqrr->vbit ^= QM_DQRR_VERB_VBIT;
+		dqrr->fill++;
+	}
+}
+
+static inline void qm_dqrr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_rw(DQRR_CI);
+}
+
+static inline void qm_dqrr_cce_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cdc_consume_1(struct qm_portal *portal, u8 idx,
+					 int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |	/* S */
+		((park ? 1 : 0) << 6) |	/* PK */
+		idx);			/* DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_1ptr(struct qm_portal *portal,
+					    const struct qm_dqrr_entry *dq,
+					int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 idx = DQRR_PTR2IDX(dq);
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* DQRR_DCAP::S */
+		((park ? 1 : 0) << 6) |		/* DQRR_DCAP::PK */
+		idx);				/* DQRR_DCAP::DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_n(struct qm_portal *portal, u16 bitmask)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (1 << 8) |		/* DQRR_DCAP::S */
+		((u32)bitmask << 16));		/* DQRR_DCAP::DCAP_CI */
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+}
+
+static inline u8 qm_dqrr_cdc_cci(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+}
+
+static inline void qm_dqrr_cdc_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_ro(DQRR_CI);
+}
+
+static inline u8 qm_dqrr_cdc_cce(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_cl_in(DQRR_CI) & (QM_DQRR_SIZE - 1);
+}
+
+static inline u8 qm_dqrr_get_ci(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	return dqrr->ci;
+}
+
+static inline void qm_dqrr_park(struct qm_portal *portal, u8 idx)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		(idx & (QM_DQRR_SIZE - 1)));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_park_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		DQRR_PTR2IDX(dqrr->cursor));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_sdqcr_set(struct qm_portal *portal, u32 sdqcr)
+{
+	qm_out(DQRR_SDQCR, sdqcr);
+}
+
+static inline u32 qm_dqrr_sdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_SDQCR);
+}
+
+static inline void qm_dqrr_vdqcr_set(struct qm_portal *portal, u32 vdqcr)
+{
+	qm_out(DQRR_VDQCR, vdqcr);
+}
+
+static inline u32 qm_dqrr_vdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_VDQCR);
+}
+
+static inline u8 qm_dqrr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return dqrr->ithresh;
+}
+
+static inline void qm_dqrr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(DQRR_ITR, ithresh);
+}
+
+static inline u8 qm_dqrr_get_maxfill(struct qm_portal *portal)
+{
+	return (qm_in(CFG) & 0x00f00000) >> 20;
+}
+
+/* -------------- */
+/* --- MR API --- */
+
+#define MR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_MR_SIZE << 6)))
+
+static inline u8 MR_PTR2IDX(const struct qm_mr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_MR_SIZE - 1);
+}
+
+static inline const struct qm_mr_entry *MR_INC(const struct qm_mr_entry *e)
+{
+	return MR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_mr_finish(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (mr->ci != MR_PTR2IDX(mr->cursor))
+		pr_crit("Ignoring completed MR entries\n");
+}
+
+static inline const struct qm_mr_entry *qm_mr_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (!mr->fill)
+		return NULL;
+	return mr->cursor;
+}
+
+static inline u8 qm_mr_next(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->fill);
+	mr->cursor = MR_INC(mr->cursor);
+	return --mr->fill;
+}
+
+static inline void qm_mr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = (mr->ci + num) & (QM_MR_SIZE - 1);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = MR_PTR2IDX(mr->cursor);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(MR_ITR, ithresh);
+}
+
+/* ------------------------------ */
+/* --- Management command API --- */
+static inline int qm_mc_init(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + QM_CL_CR;
+	mc->rr = portal->addr.ce + QM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			QM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? QM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = qman_mc_idle;
+#endif
+	return 0;
+}
+
+static inline void qm_mc_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (mc->state != qman_mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct qm_mc_command *qm_mc_start(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = qman_mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void qm_mc_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = qman_mc_hw;
+#endif
+}
+
+static inline struct qm_mc_result *qm_mc_result(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= QM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = qman_mc_idle;
+#endif
+	return rr;
+}
+
+/* Portal interrupt register API */
+static inline void qm_isr_set_iperiod(struct qm_portal *portal, u16 iperiod)
+{
+	qm_out(ITPR, iperiod);
+}
+
+static inline u32 __qm_isr_read(struct qm_portal *portal, enum qm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 6));
+#else
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __qm_isr_write(struct qm_portal *portal, enum qm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 6), val);
+#else
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 2), val);
+#endif
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 80dde20..90fb130 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -66,6 +66,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_qman_portal_init(uint32_t index, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct qman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -116,6 +117,14 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 	pcfg.node = NULL;
 	pcfg.irq = fd;
 
+	portal = qman_create_affine_portal(&pcfg, NULL);
+	if (!portal) {
+		pr_err("Qman portal initialisation failed (%d)\n",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	irq_map.type = dpaa_portal_qman;
 	irq_map.portal_cinh = map.addr.cinh;
 	process_portal_irq_map(fd, &irq_map);
@@ -124,10 +133,13 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 
 static int fsl_qman_portal_finish(void)
 {
+	__maybe_unused const struct qm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = qman_destroy_affine_portal();
+	DPAA_BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 784fe60..85ae13b 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1246,6 +1246,761 @@ struct qman_cgr {
 	struct list_head node;
 };
 
+/* Flags to qman_create_fq() */
+#define QMAN_FQ_FLAG_NO_ENQUEUE      0x00000001 /* can't enqueue */
+#define QMAN_FQ_FLAG_NO_MODIFY       0x00000002 /* can only enqueue */
+#define QMAN_FQ_FLAG_TO_DCPORTAL     0x00000004 /* consumed by CAAM/PME/Fman */
+#define QMAN_FQ_FLAG_LOCKED          0x00000008 /* multi-core locking */
+#define QMAN_FQ_FLAG_AS_IS           0x00000010 /* query h/w state */
+#define QMAN_FQ_FLAG_DYNAMIC_FQID    0x00000020 /* (de)allocate fqid */
+
+/* Flags to qman_destroy_fq() */
+#define QMAN_FQ_DESTROY_PARKED       0x00000001 /* FQ can be parked or OOS */
+
+/* Flags from qman_fq_state() */
+#define QMAN_FQ_STATE_CHANGING       0x80000000 /* 'state' is changing */
+#define QMAN_FQ_STATE_NE             0x40000000 /* retired FQ isn't empty */
+#define QMAN_FQ_STATE_ORL            0x20000000 /* retired FQ has ORL */
+#define QMAN_FQ_STATE_BLOCKOOS       0xe0000000 /* if any are set, no OOS */
+#define QMAN_FQ_STATE_CGR_EN         0x10000000 /* CGR enabled */
+#define QMAN_FQ_STATE_VDQCR          0x08000000 /* being volatile dequeued */
+
+/* Flags to qman_init_fq() */
+#define QMAN_INITFQ_FLAG_SCHED       0x00000001 /* schedule rather than park */
+#define QMAN_INITFQ_FLAG_LOCAL       0x00000004 /* set dest portal */
+
+/* Flags to qman_enqueue(). NB, the strange numbering is to align with hardware,
+ * bit-wise. (NB: the PME API is sensitive to these precise numberings too, so
+ * any change here should be audited in PME.)
+ */
+#define QMAN_ENQUEUE_FLAG_WATCH_CGR  0x00080000 /* watch congestion state */
+#define QMAN_ENQUEUE_FLAG_DCA        0x00008000 /* perform enqueue-DCA */
+#define QMAN_ENQUEUE_FLAG_DCA_PARK   0x00004000 /* If DCA, requests park */
+#define QMAN_ENQUEUE_FLAG_DCA_PTR(p)		/* If DCA, p is DQRR entry */ \
+		(((u32)(p) << 2) & 0x00000f00)
+#define QMAN_ENQUEUE_FLAG_C_GREEN    0x00000000 /* choose one C_*** flag */
+#define QMAN_ENQUEUE_FLAG_C_YELLOW   0x00000008
+#define QMAN_ENQUEUE_FLAG_C_RED      0x00000010
+#define QMAN_ENQUEUE_FLAG_C_OVERRIDE 0x00000018
+/* For the ORP-specific qman_enqueue_orp() variant;
+ * - this flag indicates "Not Last In Sequence", ie. all but the final fragment
+ *   of a frame.
+ */
+#define QMAN_ENQUEUE_FLAG_NLIS       0x01000000
+/* - this flag performs no enqueue but fills in an ORP sequence number that
+ *   would otherwise block it (eg. if a frame has been dropped).
+ */
+#define QMAN_ENQUEUE_FLAG_HOLE       0x02000000
+/* - this flag performs no enqueue but advances NESN to the given sequence
+ *   number.
+ */
+#define QMAN_ENQUEUE_FLAG_NESN       0x04000000
+
+/* Flags to qman_modify_cgr() */
+#define QMAN_CGR_FLAG_USE_INIT       0x00000001
+#define QMAN_CGR_MODE_FRAME          0x00000001
+
+/**
+ * qman_get_portal_index - get portal configuration index
+ */
+int qman_get_portal_index(void);
+
+/**
+ * qman_affine_channel - return the channel ID of a portal
+ * @cpu: the cpu whose affine portal is the subject of the query
+ *
+ * If @cpu is -1, the affine portal for the current CPU will be used. It is a
+ * bug to call this function for any value of @cpu (other than -1) that is not a
+ * member of the cpu mask.
+ */
+u16 qman_affine_channel(int cpu);
+
+/**
+ * qman_set_vdq - Issue a volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @num: Number of Frames requested for volatile dequeue
+ *
+ * This function will issue a volatile dequeue command to the QMAN.
+ */
+int qman_set_vdq(struct qman_fq *fq, u16 num);
+
+/**
+ * qman_dequeue - Get the DQRR entry after volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ *
+ * This function returns a DQRR entry after a volatile dequeue command is
+ * issued. It keeps returning entries until no more packets are available on
+ * the DQRR, at which point it returns NULL.
+ */
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
+
+/**
+ * qman_dqrr_consume - Consume the DQRR entry after volatile dequeue
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @dq: DQRR entry to consume. This is the one which is provided by the
+ *    'qman_dequeue' command.
+ *
+ * This will consume the DQRR entry and make it available for the next
+ * volatile dequeue.
+ */
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq);
+
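+/*
+ * A minimal sketch of the volatile dequeue flow built from qman_set_vdq(),
+ * qman_dequeue() and qman_dqrr_consume(); 'fq' must be parked or retired,
+ * and process_frame() stands in for the caller's handler.
+ *
+ *	struct qm_dqrr_entry *dq;
+ *
+ *	if (!qman_set_vdq(fq, 16)) {
+ *		while ((dq = qman_dequeue(fq)) != NULL) {
+ *			process_frame(&dq->fd);
+ *			qman_dqrr_consume(fq, dq);
+ *		}
+ *	}
+ */
+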
+/**
+ * qman_poll_dqrr - process DQRR (fast-path) entries
+ * @limit: the maximum number of DQRR entries to process
+ *
+ * Use of this function requires that DQRR processing not be interrupt-driven.
+ * Ie. the value returned by qman_irqsource_get() should not include
+ * QM_PIRQ_DQRI. If the current CPU is sharing a portal hosted on another CPU,
+ * this function will return -EINVAL, otherwise the return value is >=0 and
+ * represents the number of DQRR entries processed.
+ */
+int qman_poll_dqrr(unsigned int limit);
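+
+/*
+ * A typical run-to-completion loop is then just (sketch): drain the portal
+ * in batches of up to 16 entries until no work remains.
+ *
+ *	while (qman_poll_dqrr(16) > 0)
+ *		;
+ */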
+
+/**
+ * qman_poll
+ *
+ * Dispatcher logic on a cpu can use this to trigger any maintenance of the
+ * affine portal. There are two classes of portal processing in question;
+ * fast-path (which involves demuxing dequeue ring (DQRR) entries and tracking
+ * enqueue ring (EQCR) consumption), and slow-path (which involves EQCR
+ * thresholds, congestion state changes, etc). This function does whatever
+ * processing is not triggered by interrupts.
+ *
+ * Note, if DQRR and some slow-path processing are poll-driven (rather than
+ * interrupt-driven) then this function uses a heuristic to determine how often
+ * to run slow-path processing - as slow-path processing introduces at least a
+ * minimum latency each time it is run, whereas fast-path (DQRR) processing is
+ * close to zero-cost if there is no work to be done.
+ */
+void qman_poll(void);
+
+/**
+ * qman_stop_dequeues - Stop h/w dequeuing to the s/w portal
+ *
+ * Disables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_stop_dequeues(void);
+
+/**
+ * qman_start_dequeues - (Re)start h/w dequeuing to the s/w portal
+ *
+ * Enables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_start_dequeues(void);
+
+/**
+ * qman_static_dequeue_add - Add pool channels to the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Adds a set of pool channels to the portal's static dequeue command register
+ * (SDQCR). The requested pools are limited to those the portal has dequeue
+ * access to.
+ */
+void qman_static_dequeue_add(u32 pools);
+
+/**
+ * qman_static_dequeue_del - Remove pool channels from the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Removes a set of pool channels from the portal's static dequeue command
+ * register (SDQCR). The requested pools are limited to those the portal has
+ * dequeue access to.
+ */
+void qman_static_dequeue_del(u32 pools);
+
+/**
+ * qman_static_dequeue_get - return the portal's current SDQCR
+ *
+ * Returns the portal's current static dequeue command register (SDQCR). The
+ * entire register is returned, so if only the currently-enabled pool channels
+ * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
+ */
+u32 qman_static_dequeue_get(void);
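+
+/*
+ * Sketch: enable dequeuing from pool channel 3 on this portal, then read
+ * back the currently-enabled pool mask.
+ *
+ *	u32 pools;
+ *
+ *	qman_static_dequeue_add(QM_SDQCR_CHANNELS_POOL(3));
+ *	pools = qman_static_dequeue_get() & QM_SDQCR_CHANNELS_POOL_MASK;
+ */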
+
+/**
+ * qman_dca - Perform a Discrete Consumption Acknowledgment
+ * @dq: the DQRR entry to be consumed
+ * @park_request: indicates whether the held-active FQ should be parked
+ *
+ * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had
+ * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this
+ * does not take a 'portal' argument but implies the core affine portal from the
+ * cpu that is currently executing the function. For reasons of locking, this
+ * function must be called from the same CPU as that which processed the DQRR
+ * entry in the first place.
+ */
+void qman_dca(struct qm_dqrr_entry *dq, int park_request);
+
+/**
+ * qman_eqcr_is_empty - Determine if portal's EQCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * enqueues for the local portal have been processed by Qman but can't use the
+ * QMAN_ENQUEUE_FLAG_WAIT_SYNC flag to do this from the final qman_enqueue().
+ * The function forces tracking of EQCR consumption (which normally doesn't
+ * happen until enqueue processing needs to find space to put new enqueue
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int qman_eqcr_is_empty(void);
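+
+/*
+ * For instance, a teardown path might spin until all of its prior enqueues
+ * have been accepted by QMan (sketch):
+ *
+ *	while (!qman_eqcr_is_empty())
+ *		cpu_relax();
+ */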
+
+/**
+ * qman_set_dc_ern - Set the handler for DCP enqueue rejection notifications
+ * @handler: callback for processing DCP ERNs
+ * @affine: whether this handler is specific to the locally affine portal
+ *
+ * If a hardware block's interface to Qman (ie. its direct-connect portal, or
+ * DCP) is configured not to receive enqueue rejections, then any enqueues
+ * through that DCP that are rejected will be sent to a given software portal.
+ * If @affine is non-zero, then this handler will only be used for DCP ERNs
+ * received on the portal affine to the current CPU. If multiple CPUs share a
+ * portal and they all call this function, they will be setting the handler for
+ * the same portal! If @affine is zero, then this handler will be global to all
+ * portals handled by this instance of the driver. Only those portals that do
+ * not have their own affine handler will use the global handler.
+ */
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
+
+	/* FQ management */
+	/* ------------- */
+/**
+ * qman_create_fq - Allocates a FQ
+ * @fqid: the index of the FQD to encapsulate, must be "Out of Service"
+ * @flags: bit-mask of QMAN_FQ_FLAG_*** options
+ * @fq: memory for storing the 'fq', with callbacks filled in
+ *
+ * Creates a frame queue object for the given @fqid, unless the
+ * QMAN_FQ_FLAG_DYNAMIC_FQID flag is set in @flags, in which case a FQID is
+ * dynamically allocated (or the function fails if none are available). Once
+ * created, the caller should not touch the memory at 'fq' except as extended to
+ * adjacent memory for user-defined fields (see the definition of "struct
+ * qman_fq" for more info). NO_MODIFY is only intended for enqueuing to
+ * pre-existing frame-queues that aren't to be otherwise interfered with; it
+ * prevents all other modifications to the frame queue. The TO_DCPORTAL flag
+ * causes the driver to honour any contextB modifications requested in the
+ * qm_init_fq() API, as this indicates the frame queue will be consumed by a
+ * direct-connect portal (PME, CAAM, or Fman). When frame queues are consumed by
+ * software portals, the contextB field is controlled by the driver and can't be
+ * modified by the caller. If the AS_IS flag is specified, management commands
+ * will be used on portal @p to query state for frame queue @fqid and construct
+ * a frame queue object based on that, rather than assuming/requiring that it be
+ * Out of Service.
+ */
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
+
+/**
+ * qman_destroy_fq - Deallocates a FQ
+ * @fq: the frame queue object to release
+ * @flags: bit-mask of QMAN_FQ_FREE_*** options
+ *
+ * The memory for this frame queue object ('fq' provided in qman_create_fq()) is
+ * not deallocated but the caller regains ownership, to do with as desired. The
+ * FQ must be in the 'out-of-service' state unless the QMAN_FQ_FREE_PARKED flag
+ * is specified, in which case it may also be in the 'parked' state.
+ */
+void qman_destroy_fq(struct qman_fq *fq, u32 flags);
+
+/**
+ * qman_fq_fqid - Queries the frame queue ID of a FQ object
+ * @fq: the frame queue object to query
+ */
+u32 qman_fq_fqid(struct qman_fq *fq);
+
+/**
+ * qman_fq_state - Queries the state of a FQ object
+ * @fq: the frame queue object to query
+ * @state: pointer to state enum to return the FQ scheduling state
+ * @flags: pointer to state flags to receive QMAN_FQ_STATE_*** bitmask
+ *
+ * Queries the state of the FQ object, without performing any h/w commands.
+ * This captures the state, as seen by the driver, at the time the function
+ * executes.
+ */
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
+
+/**
+ * qman_init_fq - Initialises FQ fields, leaves the FQ "parked" or "scheduled"
+ * @fq: the frame queue object to modify, must be 'parked' or new.
+ * @flags: bit-mask of QMAN_INITFQ_FLAG_*** options
+ * @opts: the FQ-modification settings, as defined in the low-level API
+ *
+ * The @opts parameter comes from the low-level portal API. Select
+ * QMAN_INITFQ_FLAG_SCHED in @flags to cause the frame queue to be scheduled
+ * rather than parked. NB, @opts can be NULL.
+ *
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver:
+ * 1. the 'count' and 'fqid' fields are always ignored (this operation only
+ * affects one frame queue: @fq).
+ * 2. the QM_INITFQ_WE_CONTEXTB option of the 'we_mask' field and the associated
+ * 'fqd' structure's 'context_b' field are sometimes overwritten;
+ *   - if @fq was not created with QMAN_FQ_FLAG_TO_DCPORTAL, then context_b is
+ *     initialised to a value used by the driver for demux.
+ *   - if context_b is initialised for demux, so is context_a in case stashing
+ *     is requested (see item 4).
+ * (So caller control of context_b is only possible for TO_DCPORTAL frame queue
+ * objects.)
+ * 3. if @flags contains QMAN_INITFQ_FLAG_LOCAL, the 'fqd' structure's
+ * 'dest::channel' field will be overwritten to match the portal used to issue
+ * the command. If the WE_DESTWQ write-enable bit had already been set by the
+ * caller, the channel workqueue will be left as-is, otherwise the write-enable
+ * bit is set and the workqueue is set to a default of 4. If the "LOCAL" flag
+ * isn't set, the destination channel/workqueue fields and the write-enable bit
+ * are left as-is.
+ * 4. if the driver overwrites context_a/b for demux, then if
+ * QM_INITFQ_WE_CONTEXTA is set, the driver will only overwrite
+ * context_a.address fields and will leave the stashing fields provided by the
+ * user alone, otherwise it will zero out the context_a.stashing fields.
+ */
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
+
+/**
+ * qman_schedule_fq - Schedules a FQ
+ * @fq: the frame queue object to schedule, must be 'parked'
+ *
+ * Schedules the frame queue, which must be 'parked'; this takes it to the
+ * Tentatively-Scheduled or Truly-Scheduled state depending on its fill-level.
+ */
+int qman_schedule_fq(struct qman_fq *fq);
+
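+/*
+ * Illustrative FQ setup sketch (error handling elided; 'my_cbs' is a
+ * hypothetical set of user-supplied callbacks):
+ *
+ *	struct qman_fq fq = { .cb = my_cbs };
+ *	int err;
+ *
+ *	err = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, &fq);
+ *	if (!err)
+ *		err = qman_init_fq(&fq, QMAN_INITFQ_FLAG_SCHED, NULL);
+ *	(on success the FQ is now Tentatively- or Truly-Scheduled)
+ */
+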
+/**
+ * qman_retire_fq - Retires a FQ
+ * @fq: the frame queue object to retire
+ * @flags: FQ flags (as per qman_fq_state) if retirement completes immediately
+ *
+ * Retires the frame queue. This returns zero if it succeeds immediately, +1 if
+ * the retirement was started asynchronously, otherwise it returns negative for
+ * failure. When this function returns zero, @flags is set to indicate whether
+ * the retired FQ is empty and/or whether it has any ORL fragments (to show up
+ * as ERNs). Otherwise the corresponding flags will be known when a subsequent
+ * FQRN message shows up on the portal's message ring.
+ *
+ * NB, if the retirement is asynchronous (the FQ was in the Truly Scheduled or
+ * Active state), the completion will be via the message ring as a FQRN - but
+ * the corresponding callback may occur before this function returns!! Ie. the
+ * caller should be prepared to accept the callback as the function is called,
+ * not only once it has returned.
+ */
+int qman_retire_fq(struct qman_fq *fq, u32 *flags);
+
+/**
+ * qman_oos_fq - Puts a FQ "out of service"
+ * @fq: the frame queue object to be put out-of-service, must be 'retired'
+ *
+ * The frame queue must be retired and empty, and if any order restoration list
+ * was released as ERNs at the time of retirement, they must all be consumed.
+ */
+int qman_oos_fq(struct qman_fq *fq);
+
+/**
+ * qman_fq_flow_control - Set the XON/XOFF state of a FQ
+ * @fq: the frame queue object to be set to XON/XOFF state; must not be in the
+ * 'oos', 'retired' or 'parked' state
+ * @xon: boolean to set fq in XON or XOFF state
+ *
+ * The frame queue should be in the Tentatively Scheduled or Truly Scheduled state,
+ * otherwise the IFSI interrupt will be asserted.
+ */
+int qman_fq_flow_control(struct qman_fq *fq, int xon);
+
+/**
+ * qman_query_fq - Queries FQD fields (via h/w query command)
+ * @fq: the frame queue object to be queried
+ * @fqd: storage for the queried FQD fields
+ */
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd);
+
+/**
+ * qman_query_fq_has_pkts - Queries whether a frame queue holds packets
+ * @fq: the frame queue object to be queried
+ *
+ * Returns '1' if packets are in the frame queue, or '0' if it is empty.
+ */
+int qman_query_fq_has_pkts(struct qman_fq *fq);
+
+/**
+ * qman_query_fq_np - Queries non-programmable FQD fields
+ * @fq: the frame queue object to be queried
+ * @np: storage for the queried FQD fields
+ */
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
+
+/**
+ * qman_query_wq - Queries work queue lengths
+ * @query_dedicated: If non-zero, query the length of WQs in the channel
+ *		dedicated to this software portal. Otherwise, query the length
+ *		of WQs in the channel specified in @wq.
+ * @wq: storage for the queried WQ lengths. Also specifies the channel to
+ *	query if @query_dedicated is zero.
+ */
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
+
+/**
+ * qman_volatile_dequeue - Issue a volatile dequeue command
+ * @fq: the frame queue object to dequeue from
+ * @flags: a bit-mask of QMAN_VOLATILE_FLAG_*** options
+ * @vdqcr: bit mask of QM_VDQCR_*** options, as per qm_dqrr_vdqcr_set()
+ *
+ * Attempts to lock access to the portal's VDQCR volatile dequeue functionality.
+ * The function will block and sleep if QMAN_VOLATILE_FLAG_WAIT is specified and
+ * the VDQCR is already in use, otherwise returns non-zero for failure. If
+ * QMAN_VOLATILE_FLAG_FINISH is specified, the function will only return once
+ * the VDQCR command has finished executing (ie. once the callback for the last
+ * DQRR entry resulting from the VDQCR command has been called). If not using
+ * the FINISH flag, completion can be determined either by detecting the
+ * presence of the QM_DQRR_STAT_UNSCHEDULED and QM_DQRR_STAT_DQCR_EXPIRED bits
+ * in the "stat" field of the "struct qm_dqrr_entry" passed to the FQ's dequeue
+ * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
+ * "flags" retrieved from qman_fq_state().
+ */
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
+
+/**
+ * qman_enqueue - Enqueue a frame to a frame queue
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ *
+ * Fills an entry in the EQCR of the core-affine portal to enqueue the frame
+ * described by @fd. The descriptor details are copied from @fd to the EQCR
+ * entry; the 'pid' field is ignored. The return value is non-zero on error,
+ * such as ring full
+ * (and FLAG_WAIT not specified), congestion avoidance (FLAG_WATCH_CGR
+ * specified), etc. If the ring is full and FLAG_WAIT is specified, this
+ * function will block. If FLAG_INTERRUPT is set, the EQCI bit of the portal
+ * interrupt will assert when Qman consumes the EQCR entry (subject to "status
+ * disable", "enable", and "inhibit" registers). If FLAG_DCA is set, Qman will
+ * perform an implied "discrete consumption acknowledgment" on the dequeue
+ * ring's (DQRR) entry, at the ring index specified by the FLAG_DCA_IDX(x)
+ * macro. (As an alternative to issuing explicit DCA actions on DQRR entries,
+ * this implicit DCA can delay the release of a "held active" frame queue
+ * corresponding to a DQRR entry until Qman consumes the EQCR entry - providing
+ * order-preservation semantics in packet-forwarding scenarios.) If FLAG_DCA is
+ * set, then FLAG_DCA_PARK can also be set to imply that the DQRR consumption
+ * acknowledgment should "park request" the "held active" frame queue. Ie.
+ * when the portal eventually releases that frame queue, it will be left in the
+ * Parked state rather than Tentatively Scheduled or Truly Scheduled. If the
+ * portal is watching congestion groups, the QMAN_ENQUEUE_FLAG_WATCH_CGR flag
+ * is requested, and the FQ is a member of a congestion group, then this
+ * function returns -EAGAIN if the congestion group is currently congested.
+ * Note, this does not eliminate ERNs, as the async interface means we can be
+ * sending enqueue commands to an un-congested FQ that becomes congested before
+ * the enqueue commands are processed, but it does minimise needless thrashing
+ * of an already busy hardware resource by throttling many of the to-be-dropped
+ * enqueues "at the source".
+ */
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
+
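+/*
+ * Illustrative sketch (assumptions: 'fq' is an initialised, scheduled FQ and
+ * 'fd' a prepared frame descriptor; error handling beyond retry is elided):
+ *
+ *	while (qman_enqueue(fq, &fd, 0) != 0)
+ *		;	(EQCR full or congested: back off and retry)
+ */
+
+/**
+ * qman_enqueue_multi - Enqueue multiple frames to a frame queue
+ * @fq: the frame queue object to enqueue to
+ * @fd: pointer to the first of an array of frame descriptors
+ * @frames_to_send: the number of frames in the @fd array
+ */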
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		       int frames_to_send);
+
+typedef int (*qman_cb_precommit) (void *arg);
+
+/**
+ * qman_enqueue_orp - Enqueue a frame to a frame queue using an ORP
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ * @orp: the frame queue object used as an order restoration point.
+ * @orp_seqnum: the sequence number of this frame in the order restoration path
+ *
+ * Similar to qman_enqueue(), but with the addition of an Order Restoration
+ * Point (@orp) and corresponding sequence number (@orp_seqnum) for this
+ * enqueue operation to employ order restoration. Each frame queue object acts
+ * as an Order Definition Point (ODP) by providing each frame dequeued from it
+ * with an incrementing sequence number; this value is generally ignored unless
+ * that sequence of dequeued frames will need order restoration later. Each
+ * frame queue object also encapsulates an Order Restoration Point (ORP), which
+ * is a re-assembly context for re-ordering frames relative to their sequence
+ * numbers as they are enqueued. The ORP does not have to be within the frame
+ * queue that receives the enqueued frame, in fact it is usually the frame
+ * queue from which the frames were originally dequeued. For the purposes of
+ * order restoration, multiple frames (or "fragments") can be enqueued for a
+ * single sequence number by setting the QMAN_ENQUEUE_FLAG_NLIS flag for all
+ * enqueues except the final fragment of a given sequence number. Ordering
+ * between sequence numbers is guaranteed, even if fragments of different
+ * sequence numbers are interlaced with one another. Fragments of the same
+ * sequence number will retain the order in which they are enqueued. If no
+ * enqueue is to be performed, QMAN_ENQUEUE_FLAG_HOLE indicates that the given
+ * sequence number is to be "skipped" by the ORP logic (eg. if a frame has been
+ * dropped from a sequence), or QMAN_ENQUEUE_FLAG_NESN indicates that the given
+ * sequence number should become the ORP's "Next Expected Sequence Number".
+ *
+ * Side note: a frame queue object can be used purely as an ORP, without
+ * carrying any frames at all. Care should be taken not to deallocate a frame
+ * queue object that is being actively used as an ORP, as a future allocation
+ * of the frame queue object may start using the internal ORP before the
+ * previous use has finished.
+ */
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum);
+
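+/*
+ * Illustrative sketch of the NLIS usage described above ('fq', 'orp_fq',
+ * 'frag0', 'frag1' and 'seq' are hypothetical):
+ *
+ *	qman_enqueue_orp(fq, &frag0, QMAN_ENQUEUE_FLAG_NLIS, orp_fq, seq);
+ *	qman_enqueue_orp(fq, &frag1, 0, orp_fq, seq);
+ *	(both fragments share 'seq'; only the final one omits NLIS)
+ */
+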
+/**
+ * qman_alloc_fqid_range - Allocate a contiguous range of FQIDs
+ * @result: is set by the API to the base FQID of the allocated range
+ * @count: the number of FQIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count FQIDs
+ *
+ * Returns the number of frame queues allocated, or a negative error code. If
+ * @partial is non-zero, the allocation request may return a smaller range of
+ * FQs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_fqid(u32 *result)
+{
+	int ret = qman_alloc_fqid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_fqid_range - Release the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to deallocate
+ * @count: the number of FQIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of FQIDs
+ * that it can subsequently allocate from.
+ */
+void qman_release_fqid_range(u32 fqid, unsigned int count);
+static inline void qman_release_fqid(u32 fqid)
+{
+	qman_release_fqid_range(fqid, 1);
+}
+
+void qman_seed_fqid_range(u32 fqid, unsigned int count);
+
+int qman_shutdown_fq(u32 fqid);
+
+/**
+ * qman_reserve_fqid_range - Reserve the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to deallocate
+ * @count: the number of FQIDs in the range
+ */
+int qman_reserve_fqid_range(u32 fqid, unsigned int count);
+static inline int qman_reserve_fqid(u32 fqid)
+{
+	return qman_reserve_fqid_range(fqid, 1);
+}
+
+/* Pool-channel management */
+/**
+ * qman_alloc_pool_range - Allocate a contiguous range of pool-channel IDs
+ * @result: is set by the API to the base pool-channel ID of the allocated range
+ * @count: the number of pool-channel IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of pool-channel IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_pool(u32 *result)
+{
+	int ret = qman_alloc_pool_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_pool_range - Release the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to deallocate
+ * @count: the number of pool-channel IDs in the range
+ */
+void qman_release_pool_range(u32 id, unsigned int count);
+static inline void qman_release_pool(u32 id)
+{
+	qman_release_pool_range(id, 1);
+}
+
+/**
+ * qman_reserve_pool_range - Reserve the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to reserve
+ * @count: the number of pool-channel IDs in the range
+ */
+int qman_reserve_pool_range(u32 id, unsigned int count);
+static inline int qman_reserve_pool(u32 id)
+{
+	return qman_reserve_pool_range(id, 1);
+}
+
+void qman_seed_pool_range(u32 id, unsigned int count);
+
+	/* CGR management */
+	/* -------------- */
+/**
+ * qman_create_cgr - Register a congestion group object
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: optional state of CGR settings
+ *
+ * Registers this object to receive congestion entry/exit callbacks on the
+ * portal affine to the cpu on which this API is executed. If @opts is
+ * NULL then only the callback (cgr->cb) function is registered. If @flags
+ * contains QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset
+ * any unspecified parameters) will be used rather than a modify hw command
+ * (which only modifies the specified parameters).
+ */
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_create_cgr_to_dcp - Register a congestion group object to DCP portal
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @dcp_portal: the DCP portal to which the cgr object is registered.
+ * @opts: optional state of CGR settings
+ */
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_delete_cgr - Deregisters a congestion group object
+ * @cgr: the 'cgr' object to deregister
+ *
+ * "Unplugs" this CGR object from the portal affine to the cpu on which this API
+ * is executed. This must be executed on the same affine portal on which it was
+ * created.
+ */
+int qman_delete_cgr(struct qman_cgr *cgr);
+
+/**
+ * qman_modify_cgr - Modify CGR fields
+ * @cgr: the 'cgr' object to modify
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: the CGR-modification settings
+ *
+ * The @opts parameter comes from the low-level portal API, and can be NULL.
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver, in particular the 'cgrid' field is ignored (this operation
+ * only affects the given CGR object). If @flags contains
+ * QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset any
+ * unspecified parameters) will be used rather than a modify hw hardware (which
+ * only modifies the specified parameters).
+ */
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_query_cgr - Queries CGR fields
+ * @cgr: the 'cgr' object to query
+ * @result: storage for the queried congestion group record
+ */
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *result);
+
+/**
+ * qman_query_congestion - Queries the state of all congestion groups
+ * @congestion: storage for the queried state of all congestion groups
+ */
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
+
+/**
+ * qman_alloc_cgrid_range - Allocate a contiguous range of CGR IDs
+ * @result: is set by the API to the base CGR ID of the allocated range
+ * @count: the number of CGR IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count
+ *
+ * Returns the number of CGR IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_cgrid(u32 *result)
+{
+	int ret = qman_alloc_cgrid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_cgrid_range - Release the specified range of CGR IDs
+ * @id: the base CGR ID of the range to deallocate
+ * @count: the number of CGR IDs in the range
+ */
+void qman_release_cgrid_range(u32 id, unsigned int count);
+static inline void qman_release_cgrid(u32 id)
+{
+	qman_release_cgrid_range(id, 1);
+}
+
+/**
+ * qman_reserve_cgrid_range - Reserve the specified range of CGR ID
+ * @id: the base CGR ID of the range to reserve
+ * @count: the number of CGR IDs in the range
+ */
+int qman_reserve_cgrid_range(u32 id, unsigned int count);
+static inline int qman_reserve_cgrid(u32 id)
+{
+	return qman_reserve_cgrid_range(id, 1);
+}
+
+void qman_seed_cgrid_range(u32 id, unsigned int count);
+
+	/* Helpers */
+	/* ------- */
+/**
+ * qman_poll_fq_for_init - Check if an FQ has been initialised from OOS
+ * @fqid: the FQID that will be initialised by other s/w
+ *
+ * In many situations, a FQID is provided for communication between s/w
+ * entities, and whilst the consumer is responsible for initialising and
+ * scheduling the FQ, the producer(s) generally create a wrapper FQ object
+ * using QMAN_FQ_FLAG_NO_MODIFY and only call qman_enqueue() (no FQ
+ * initialisation, scheduling, etc). Ie;
+ *     qman_create_fq(..., QMAN_FQ_FLAG_NO_MODIFY, ...);
+ * However, data cannot be enqueued to the FQ until it is initialised out of
+ * the OOS state - this function polls for that condition. It is particularly
+ * useful for users of IPC functions - each endpoint's Rx FQ is the other
+ * endpoint's Tx FQ, so each side can initialise and schedule their Rx FQ object
+ * and then use this API on the (NO_MODIFY) Tx FQ object in order to
+ * synchronise. The function returns zero for success, +1 if the FQ is still in
+ * the OOS state, or negative if there was an error.
+ */
+static inline int qman_poll_fq_for_init(struct qman_fq *fq)
+{
+	struct qm_mcr_queryfq_np np;
+	int err;
+
+	err = qman_query_fq_np(fq, &np);
+	if (err)
+		return err;
+	if ((np.state & QM_MCR_NP_STATE_MASK) == QM_MCR_NP_STATE_OOS)
+		return 1;
+	return 0;
+}
+
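+/*
+ * Illustrative sketch of the IPC synchronisation described above ('tx_fq' is
+ * a hypothetical NO_MODIFY FQ object wrapping the peer's Rx FQID):
+ *
+ *	int ret;
+ *
+ *	do {
+ *		ret = qman_poll_fq_for_init(tx_fq);
+ *	} while (ret == 1);	(FQ still OOS: peer not ready yet)
+ *	(ret == 0 means it is now safe to call qman_enqueue())
+ */
+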
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define cpu_to_hw_sg(x) (x)
+#define hw_sg_to_cpu(x) (x)
+#else
+#define cpu_to_hw_sg(x)  __cpu_to_hw_sg(x)
+#define hw_sg_to_cpu(x)  __hw_sg_to_cpu(x)
+
+static inline void __cpu_to_hw_sg(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = cpu_to_be64(sgentry->opaque);
+	sgentry->val = cpu_to_be32(sgentry->val);
+	sgentry->val_off = cpu_to_be16(sgentry->val_off);
+}
+
+static inline void __hw_sg_to_cpu(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = be64_to_cpu(sgentry->opaque);
+	sgentry->val = be32_to_cpu(sgentry->val);
+	sgentry->val_off = be16_to_cpu(sgentry->val_off);
+}
+#endif
 
 #ifdef __cplusplus
 }
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index b0d953f..a4897b0 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -42,6 +42,7 @@
 #define __FSL_USD_H
 
 #include <compat.h>
+#include <fsl_qman.h>
 
 #ifdef __cplusplus
 extern "C" {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 12/40] bus/dpaa: add BMAN driver core
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (10 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 13/40] bus/dpaa: support FMAN frame queue lookup Shreyansh Jain
                             ` (29 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

The Buffer Manager (BMan) is a hardware buffer pool management block that
allows software and accelerators on the datapath to acquire and release
buffers in order to build frames.

This patch adds the core routines.

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman_driver.c | 311 +++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_priv.h   | 125 ++++++++++
 drivers/bus/dpaa/include/fsl_bman.h       | 375 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |   5 +
 5 files changed, 817 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_driver.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman_priv.h
 create mode 100644 drivers/bus/dpaa/include/fsl_bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index 5957c15..e1415e4 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -63,6 +63,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
 	base/qbman/dpaa_alloc.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
new file mode 100644
index 0000000..fb3c50e
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -0,0 +1,311 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_branch_prediction.h>
+
+#include <fsl_usd.h>
+#include <process.h>
+#include "bman_priv.h"
+#include <sys/ioctl.h>
+
+/*
+ * Global variables for the max portal/pool numbers this BMan version supports
+ */
+u16 bman_ip_rev;
+u16 bman_pool_max;
+void *bman_ccsr_map;
+
+/*****************/
+/* Portal driver */
+/*****************/
+
+static __thread int fd = -1;
+static __thread struct bm_portal_config pcfg;
+static __thread struct dpaa_ioctl_portal_map map = {
+	.type = dpaa_portal_bman
+};
+
+static int fsl_bman_portal_init(uint32_t idx, int is_shared)
+{
+	cpu_set_t cpuset;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return ret;
+	}
+	pcfg.cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (pcfg.cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu");
+				return -EINVAL;
+			}
+			pcfg.cpu = loop;
+		}
+	if (pcfg.cpu == -1) {
+		pr_err("Bug in getaffinity handling!");
+		return -EINVAL;
+	}
+	/* Allocate and map a bman portal */
+	map.index = idx;
+	ret = process_portal_map(&map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return ret;
+	}
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+	pcfg.is_shared = is_shared;
+	pcfg.index = map.index;
+	bman_depletion_fill(&pcfg.mask);
+
+	fd = open(BMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (fd == -1) {
+		pr_err("BMan irq init failed");
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+	/* Use the IRQ FD as a unique IRQ number */
+	pcfg.irq = fd;
+
+	/* Set the IRQ number */
+	irq_map.type = dpaa_portal_bman;
+	irq_map.portal_cinh = map.addr.cinh;
+	process_portal_irq_map(fd, &irq_map);
+	return 0;
+}
+
+static int fsl_bman_portal_finish(void)
+{
+	int ret;
+
+	process_portal_irq_unmap(fd);
+
+	ret = process_portal_unmap(&map.addr);
+	if (ret)
+		error(0, ret, "process_portal_unmap()");
+	return ret;
+}
+
+int bman_thread_init(void)
+{
+	/* Convert from contiguous/virtual cpu numbering to real cpu when
+	 * calling into the code that is dependent on the device naming.
+	 */
+	return fsl_bman_portal_init(QBMAN_ANY_PORTAL_IDX, 0);
+}
+
+int bman_thread_finish(void)
+{
+	return fsl_bman_portal_finish();
+}
+
+void bman_thread_irq(void)
+{
+	qbman_invoke_irq(pcfg.irq);
+	/* Now we need to uninhibit interrupts. This is the only code outside
+	 * the regular portal driver that manipulates any portal register, so
+	 * rather than breaking that encapsulation I am simply hard-coding the
+	 * offset to the inhibit register here.
+	 */
+	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+int bman_init_ccsr(const struct device_node *node)
+{
+	static int ccsr_map_fd;
+	uint64_t phys_addr;
+	const uint32_t *bman_addr;
+	uint64_t regs_size;
+
+	bman_addr = of_get_address(node, 0, &regs_size, NULL);
+	if (!bman_addr) {
+		pr_err("of_get_address cannot return BMan address");
+		return -EINVAL;
+	}
+	phys_addr = of_translate_address(node, bman_addr);
+	if (!phys_addr) {
+		pr_err("of_translate_address failed");
+		return -EINVAL;
+	}
+
+	ccsr_map_fd = open(BMAN_CCSR_MAP, O_RDWR);
+	if (unlikely(ccsr_map_fd < 0)) {
+		pr_err("Cannot open /dev/mem for BMan CCSR map");
+		return ccsr_map_fd;
+	}
+
+	bman_ccsr_map = mmap(NULL, regs_size, PROT_READ |
+			     PROT_WRITE, MAP_SHARED, ccsr_map_fd, phys_addr);
+	if (bman_ccsr_map == MAP_FAILED) {
+		pr_err("Cannot map BMan CCSR base; BMan: "
+		       "0x%x Phys: 0x%lx size 0x%lx",
+		       *bman_addr, phys_addr, regs_size);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int bman_global_init(void)
+{
+	const struct device_node *dt_node;
+	static int done;
+
+	if (done)
+		return -EBUSY;
+	/* Use the device-tree to determine IP revision until something better
+	 * is devised.
+	 */
+	dt_node = of_find_compatible_node(NULL, NULL, "fsl,bman-portal");
+	if (!dt_node) {
+		pr_err("No bman portals available for any CPU\n");
+		return -ENODEV;
+	}
+	if (of_device_is_compatible(dt_node, "fsl,bman-portal-1.0") ||
+	    of_device_is_compatible(dt_node, "fsl,bman-portal-1.0.0")) {
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.0.8")) {
+		bman_ip_rev = BMAN_REV20;
+		bman_pool_max = 8;
+	} else if (of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.0") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.1") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.2") ||
+		of_device_is_compatible(dt_node, "fsl,bman-portal-2.1.3")) {
+		bman_ip_rev = BMAN_REV21;
+		bman_pool_max = 64;
+	} else {
+		pr_warn("unknown BMan version in portal node, defaulting "
+			"to rev1.0");
+		bman_ip_rev = BMAN_REV10;
+		bman_pool_max = 64;
+	}
+
+	if (!bman_ip_rev) {
+		pr_err("Unknown bman portal version\n");
+		return -ENODEV;
+	}
+	{
+		const struct device_node *dn = of_find_compatible_node(NULL,
+							NULL, "fsl,bman");
+		if (!dn)
+			pr_err("No bman device node available");
+
+		if (bman_init_ccsr(dn))
+			pr_err("BMan CCSR map failed.");
+	}
+
+	done = 1;
+	return 0;
+}
+
+#define BMAN_POOL_CONTENT(n) (0x0600 + ((n) * 0x04))
+u32 bm_pool_free_buffers(u32 bpid)
+{
+	return in_be32(bman_ccsr_map + BMAN_POOL_CONTENT(bpid));
+}
+
+static u32 __generate_thresh(u32 val, int roundup)
+{
+	u32 e = 0;      /* exponent; 'val' is the coefficient */
+	int oddbit = 0;
+
+	while (val > 0xff) {
+		oddbit = val & 1;
+		val >>= 1;
+		e++;
+		if (roundup && oddbit)
+			val++;
+	}
+	DPAA_ASSERT(e < 0x10);
+	return (val | (e << 8));
+}
+
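+/*
+ * Encoding note (informational): __generate_thresh() packs an 8-bit mantissa
+ * and a 4-bit exponent, approximating the threshold as val << e. For example,
+ * __generate_thresh(0x1234, 0) gives 0x591 (0x91 << 5 = 0x1220, rounded
+ * down), while __generate_thresh(0x1234, 1) gives 0x592 (0x92 << 5 = 0x1240,
+ * rounded up).
+ */
+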
+#define POOL_SWDET(n)       (0x0000 + ((n) * 0x04))
+#define POOL_HWDET(n)       (0x0100 + ((n) * 0x04))
+#define POOL_SWDXT(n)       (0x0200 + ((n) * 0x04))
+#define POOL_HWDXT(n)       (0x0300 + ((n) * 0x04))
+int bm_pool_set(u32 bpid, const u32 *thresholds)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	out_be32(bman_ccsr_map + POOL_SWDET(bpid),
+		 __generate_thresh(thresholds[0], 0));
+	out_be32(bman_ccsr_map + POOL_SWDXT(bpid),
+		 __generate_thresh(thresholds[1], 1));
+	out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+		 __generate_thresh(thresholds[2], 0));
+	out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+		 __generate_thresh(thresholds[3], 1));
+	return 0;
+}
+
+#define BMAN_LOW_DEFAULT_THRESH		0x40
+#define BMAN_HIGH_DEFAULT_THRESH		0x80
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh)
+{
+	if (!bman_ccsr_map)
+		return -ENODEV;
+	if (bpid >= bman_pool_max)
+		return -EINVAL;
+	if (low_thresh && high_thresh) {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(low_thresh, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(high_thresh, 1));
+	} else {
+		out_be32(bman_ccsr_map + POOL_HWDET(bpid),
+			 __generate_thresh(BMAN_LOW_DEFAULT_THRESH, 0));
+		out_be32(bman_ccsr_map + POOL_HWDXT(bpid),
+			 __generate_thresh(BMAN_HIGH_DEFAULT_THRESH, 1));
+	}
+	return 0;
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman_priv.h b/drivers/bus/dpaa/base/qbman/bman_priv.h
new file mode 100644
index 0000000..07d9cec
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman_priv.h
@@ -0,0 +1,125 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_PRIV_H
+#define __BMAN_PRIV_H
+
+#include "dpaa_sys.h"
+#include <fsl_bman.h>
+
+/* Revision info (for errata and feature handling) */
+#define BMAN_REV10 0x0100
+#define BMAN_REV20 0x0200
+#define BMAN_REV21 0x0201
+
+#define BMAN_PORTAL_IRQ_PATH "/dev/fsl-usdpaa-irq"
+#define BMAN_CCSR_MAP "/dev/mem"
+
+/* This mask contains all the "irqsource" bits visible to API users */
+#define BM_PIRQ_VISIBLE	(BM_PIRQ_RCRI | BM_PIRQ_BSCN)
+
+/* These are bm_<reg>_<verb>(). So for example, bm_disable_write() means "write
+ * the disable register" rather than "disable the ability to write".
+ */
+#define bm_isr_status_read(bm)		__bm_isr_read(bm, bm_isr_status)
+#define bm_isr_status_clear(bm, m)	__bm_isr_write(bm, bm_isr_status, m)
+#define bm_isr_enable_read(bm)		__bm_isr_read(bm, bm_isr_enable)
+#define bm_isr_enable_write(bm, v)	__bm_isr_write(bm, bm_isr_enable, v)
+#define bm_isr_disable_read(bm)		__bm_isr_read(bm, bm_isr_disable)
+#define bm_isr_disable_write(bm, v)	__bm_isr_write(bm, bm_isr_disable, v)
+#define bm_isr_inhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 1)
+#define bm_isr_uninhibit(bm)		__bm_isr_write(bm, bm_isr_inhibit, 0)
+
+/*
+ * Global variable for the max pool number this BMan version supports
+ */
+extern u16 bman_pool_max;
+
+/* used by CCSR and portal interrupt code */
+enum bm_isr_reg {
+	bm_isr_status = 0,
+	bm_isr_enable = 1,
+	bm_isr_disable = 2,
+	bm_isr_inhibit = 3
+};
+
+struct bm_portal_config {
+	/*
+	 * Corenet portal addresses;
+	 * [0]==cache-enabled, [1]==cache-inhibited.
+	 */
+	void __iomem *addr_virt[2];
+	/* Allow these to be joined in lists */
+	struct list_head list;
+	/* User-visible portal configuration settings */
+	/* This is used for any "core-affine" portals, ie. default portals
+	 * associated to the corresponding cpu. -1 implies that there is no
+	 * core affinity configured.
+	 */
+	int cpu;
+	/* portal interrupt line */
+	int irq;
+	/* the unique index of this portal */
+	u32 index;
+	/* Is this portal shared? (If so, it has coarser locking and demuxes
+	 * processing on behalf of other CPUs.)
+	 */
+	int is_shared;
+	/* These are the buffer pool IDs that may be used via this portal. */
+	struct bman_depletion mask;
+};
+
+int bman_init_ccsr(const struct device_node *node);
+
+struct bman_portal *bman_create_affine_portal(
+			const struct bm_portal_config *config);
+const struct bm_portal_config *bman_destroy_affine_portal(void);
+
+/* Set depletion thresholds associated with a buffer pool. Requires that the
+ * operating system have access to Bman CCSR (ie. compiled in support and
+ * run-time access courtesy of the device-tree).
+ */
+int bm_pool_set(u32 bpid, const u32 *thresholds);
+
+/* Read the free buffer count for a given buffer pool */
+u32 bm_pool_free_buffers(u32 bpid);
+
+#endif /* __BMAN_PRIV_H */
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
new file mode 100644
index 0000000..383106b
--- /dev/null
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -0,0 +1,375 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FSL_BMAN_H
+#define __FSL_BMAN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* This wrapper represents a bit-array for the depletion state of the 64 Bman
+ * buffer pools.
+ */
+struct bman_depletion {
+	u32 state[2];
+};
+
+static inline void bman_depletion_init(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = 0;
+}
+
+static inline void bman_depletion_fill(struct bman_depletion *c)
+{
+	c->state[0] = c->state[1] = ~0;
+}
+
+/* --- Bman data structures (and associated constants) --- */
+
+/* Represents s/w corenet portal mapped data structures */
+struct bm_rcr_entry;	/* RCR (Release Command Ring) entries */
+struct bm_mc_command;	/* MC (Management Command) command */
+struct bm_mc_result;	/* MC result */
+
+/* Code reduction: define a wrapper for 48-bit buffers. In cases where a buffer
+ * pool id specific to this buffer is needed (BM_RCR_VERB_CMD_BPID_MULTI,
+ * BM_MCC_VERB_ACQUIRE), the 'bpid' field is used.
+ */
+struct bm_buffer {
+	union {
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u8 __reserved1;
+			u8 bpid;
+			u16 hi; /* High 16-bits of 48-bit address */
+			u32 lo; /* Low 32-bits of 48-bit address */
+#else
+			u32 lo;
+			u16 hi;
+			u8 bpid;
+			u8 __reserved;
+#endif
+		};
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			u64 __notaddress:16;
+			u64 addr:48;
+#else
+			u64 addr:48;
+			u64 __notaddress:16;
+#endif
+		};
+		u64 opaque;
+	};
+} __attribute__((aligned(8)));
+static inline u64 bm_buffer_get64(const struct bm_buffer *buf)
+{
+	return buf->addr;
+}
+
+static inline dma_addr_t bm_buf_addr(const struct bm_buffer *buf)
+{
+	return (dma_addr_t)buf->addr;
+}
+
+#define bm_buffer_set64(buf, v) \
+	do { \
+		struct bm_buffer *__buf931 = (buf); \
+		__buf931->hi = upper_32_bits(v); \
+		__buf931->lo = lower_32_bits(v); \
+	} while (0)
+
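+/*
+ * Illustrative sketch: preparing a buffer entry ('dma_addr' is a hypothetical
+ * DMA address that fits in 48 bits):
+ *
+ *	struct bm_buffer buf = { .opaque = 0 };
+ *
+ *	bm_buffer_set64(&buf, dma_addr);
+ *	(buf.addr now holds the low 48 bits of 'dma_addr')
+ */
+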
+/* See 1.5.3.5.4: "Release Command" */
+struct bm_rcr_entry {
+	union {
+		struct {
+			u8 __dont_write_directly__verb;
+			u8 bpid; /* used with BM_RCR_VERB_CMD_BPID_SINGLE */
+			u8 __reserved1[62];
+		};
+		struct bm_buffer bufs[8];
+	};
+} __packed;
+#define BM_RCR_VERB_VBIT		0x80
+#define BM_RCR_VERB_CMD_MASK		0x70	/* one of two values; */
+#define BM_RCR_VERB_CMD_BPID_SINGLE	0x20
+#define BM_RCR_VERB_CMD_BPID_MULTI	0x30
+#define BM_RCR_VERB_BUFCOUNT_MASK	0x0f	/* values 1..8 */
+
+/* See 1.5.3.1: "Acquire Command" */
+/* See 1.5.3.2: "Query Command" */
+struct bm_mcc_acquire {
+	u8 bpid;
+	u8 __reserved1[62];
+} __packed;
+struct bm_mcc_query {
+	u8 __reserved2[63];
+} __packed;
+struct bm_mc_command {
+	u8 __dont_write_directly__verb;
+	union {
+		struct bm_mcc_acquire acquire;
+		struct bm_mcc_query query;
+	};
+} __packed;
+#define BM_MCC_VERB_VBIT		0x80
+#define BM_MCC_VERB_CMD_MASK		0x70	/* where the verb contains; */
+#define BM_MCC_VERB_CMD_ACQUIRE		0x10
+#define BM_MCC_VERB_CMD_QUERY		0x40
+#define BM_MCC_VERB_ACQUIRE_BUFCOUNT	0x0f	/* values 1..8 go here */
+
+/* See 1.5.3.3: "Acquire Response" */
+/* See 1.5.3.4: "Query Response" */
+struct bm_pool_state {
+	u8 __reserved1[32];
+	/* "availability state" and "depletion state" */
+	struct {
+		u8 __reserved1[8];
+		/* Access using bman_depletion_***() */
+		struct bman_depletion state;
+	} as, ds;
+};
+
+struct bm_mc_result {
+	union {
+		struct {
+			u8 verb;
+			u8 __reserved1[63];
+		};
+		union {
+			struct {
+				u8 __reserved1;
+				u8 bpid;
+				u8 __reserved2[62];
+			};
+			struct bm_buffer bufs[8];
+		} acquire;
+		struct bm_pool_state query;
+	};
+} __packed;
+#define BM_MCR_VERB_VBIT		0x80
+#define BM_MCR_VERB_CMD_MASK		BM_MCC_VERB_CMD_MASK
+#define BM_MCR_VERB_CMD_ACQUIRE		BM_MCC_VERB_CMD_ACQUIRE
+#define BM_MCR_VERB_CMD_QUERY		BM_MCC_VERB_CMD_QUERY
+#define BM_MCR_VERB_CMD_ERR_INVALID	0x60
+#define BM_MCR_VERB_CMD_ERR_ECC		0x70
+#define BM_MCR_VERB_ACQUIRE_BUFCOUNT	BM_MCC_VERB_ACQUIRE_BUFCOUNT /* 0..8 */
+
+/* Portal and Buffer Pools */
+/* Represents a managed portal */
+struct bman_portal;
+
+/* This object type represents Bman buffer pools. */
+struct bman_pool;
+
+/* This struct specifies parameters for a bman_pool object. */
+struct bman_pool_params {
+	/* index of the buffer pool to encapsulate (0-63), ignored if
+	 * BMAN_POOL_FLAG_DYNAMIC_BPID is set.
+	 */
+	u32 bpid;
+	/* bit-mask of BMAN_POOL_FLAG_*** options */
+	u32 flags;
+	/* depletion-entry/exit thresholds, if BMAN_POOL_FLAG_THRESH is set. NB:
+	 * this is only allowed if BMAN_POOL_FLAG_DYNAMIC_BPID is used *and*
+	 * when run in the control plane (which controls Bman CCSR). This array
+	 * matches the definition of bm_pool_set().
+	 */
+	u32 thresholds[4];
+};
+
+/* Flags to bman_new_pool() */
+#define BMAN_POOL_FLAG_NO_RELEASE    0x00000001 /* can't release to pool */
+#define BMAN_POOL_FLAG_ONLY_RELEASE  0x00000002 /* can only release to pool */
+#define BMAN_POOL_FLAG_DYNAMIC_BPID  0x00000008 /* (de)allocate bpid */
+#define BMAN_POOL_FLAG_THRESH        0x00000010 /* set depletion thresholds */
+
+/* Flags to bman_release() */
+#define BMAN_RELEASE_FLAG_NOW        0x00000008 /* issue immediate release */
+
+
+/**
+ * bman_get_portal_index - get portal configuration index
+ */
+int bman_get_portal_index(void);
+
+/**
+ * bman_rcr_is_empty - Determine if portal's RCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * releases for the local portal have been processed by Bman but can't use the
+ * BMAN_RELEASE_FLAG_WAIT_SYNC flag to do this from the final bman_release().
+ * The function forces tracking of RCR consumption (which normally doesn't
+ * happen until release processing needs to find space to put new release
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int bman_rcr_is_empty(void);
+
+/**
+ * bman_alloc_bpid_range - Allocate a contiguous range of BPIDs
+ * @result: is set by the API to the base BPID of the allocated range
+ * @count: the number of BPIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count BPIDs
+ *
+ * Returns the number of buffer pools allocated, or a negative error code. If
+ * @partial is non-zero, the allocation request may return a smaller range of
+ * BPs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int bman_alloc_bpid(u32 *result)
+{
+	int ret = bman_alloc_bpid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * bman_release_bpid_range - Release the specified range of buffer pool IDs
+ * @bpid: the base BPID of the range to deallocate
+ * @count: the number of BPIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of BPIDs
+ * that it can subsequently allocate from.
+ */
+void bman_release_bpid_range(u32 bpid, unsigned int count);
+static inline void bman_release_bpid(u32 bpid)
+{
+	bman_release_bpid_range(bpid, 1);
+}
+
+int bman_reserve_bpid_range(u32 bpid, unsigned int count);
+static inline int bman_reserve_bpid(u32 bpid)
+{
+	return bman_reserve_bpid_range(bpid, 1);
+}
+
+void bman_seed_bpid_range(u32 bpid, unsigned int count);
+
+int bman_shutdown_pool(u32 bpid);
+
+/**
+ * bman_new_pool - Allocates a Buffer Pool object
+ * @params: parameters specifying the buffer pool ID and behaviour
+ *
+ * Creates a pool object for the given @params. A portal and the depletion
+ * callback field of @params are only used if the BMAN_POOL_FLAG_DEPLETION flag
+ * is set. NB, the fields from @params are copied into the new pool object, so
+ * the structure provided by the caller can be released or reused after the
+ * function returns.
+ */
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
+
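+/*
+ * Illustrative sketch: creating a pool with a dynamically allocated BPID
+ * (error handling elided):
+ *
+ *	struct bman_pool_params params = {
+ *		.flags = BMAN_POOL_FLAG_DYNAMIC_BPID,
+ *	};
+ *	struct bman_pool *pool = bman_new_pool(&params);
+ */
+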
+/**
+ * bman_free_pool - Deallocates a Buffer Pool object
+ * @pool: the pool object to release
+ */
+void bman_free_pool(struct bman_pool *pool);
+
+/**
+ * bman_get_params - Returns a pool object's parameters.
+ * @pool: the pool object
+ *
+ * The returned pointer refers to state within the pool object so must not be
+ * modified and can no longer be read once the pool object is destroyed.
+ */
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
+
+/**
+ * bman_release - Release buffer(s) to the buffer pool
+ * @pool: the buffer pool object to release to
+ * @bufs: an array of buffers to release
+ * @num: the number of buffers in @bufs (1-8)
+ * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
+ */
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
+/**
+ * bman_acquire - Acquire buffer(s) from a buffer pool
+ * @pool: the buffer pool object to acquire from
+ * @bufs: array for storing the acquired buffers
+ * @num: the number of buffers desired (@bufs is at least this big)
+ *
+ * Issues an "Acquire" command via the portal's management command interface.
+ * The return value will be the number of buffers obtained from the pool, or a
+ * negative error code if a h/w error or pool starvation was encountered.
+ */
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags);
+
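+/*
+ * Illustrative sketch: acquire up to 8 buffers and release them straight back
+ * ('pool' is a previously created bman_pool object):
+ *
+ *	struct bm_buffer bufs[8];
+ *	int n = bman_acquire(pool, bufs, 8, 0);
+ *
+ *	if (n > 0)
+ *		bman_release(pool, bufs, n, BMAN_RELEASE_FLAG_NOW);
+ */
+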
+/**
+ * bman_query_pools - Query all buffer pool states
+ * @state: storage for the queried availability and depletion states
+ */
+int bman_query_pools(struct bm_pool_state *state);
+
+/**
+ * bman_query_free_buffers - Query how many free buffers are in a buffer pool
+ * @pool: the buffer pool object to query
+ *
+ * Returns the number of free buffers
+ */
+u32 bman_query_free_buffers(struct bman_pool *pool);
+
+/**
+ * bman_update_pool_thresholds - Change the buffer pool's depletion thresholds
+ * @pool: the buffer pool object to which the thresholds will be set
+ * @thresholds: the new thresholds
+ */
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds);
+
+/**
+ * bm_pool_set_hw_threshold - Change the buffer pool's thresholds
+ * @bpid: the buffer pool ID
+ * @low_thresh: low threshold
+ * @high_thresh: high threshold
+ */
+int bm_pool_set_hw_threshold(u32 bpid, const u32 low_thresh,
+			     const u32 high_thresh);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __FSL_BMAN_H */
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index a4897b0..a3243af 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -50,7 +50,9 @@ extern "C" {
 
 /* Thread-entry/exit hooks; */
 int qman_thread_init(void);
+int bman_thread_init(void);
 int qman_thread_finish(void);
+int bman_thread_finish(void);
 
 #define QBMAN_ANY_PORTAL_IDX 0xffffffff
 
@@ -92,9 +94,12 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal);
  * into another blocking read/select/poll.
  */
 void qman_thread_irq(void);
+void bman_thread_irq(void);
 
 /* Global setup */
 int qman_global_init(void);
+int bman_global_init(void);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 13/40] bus/dpaa: support FMAN frame queue lookup
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (11 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
                             ` (28 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal
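
The QMan driver stores a frame queue object reference in the 32-bit
contextB field of the FQD so that dequeued frames and messages can be
mapped back to their 'qman_fq' object. On 64-bit platforms a pointer no
longer fits in that field, so, when CONFIG_FSL_QMAN_FQ_LOOKUP is
enabled, an indexed lookup table is used instead: each FQ object is
assigned a table index at creation time, that index is programmed into
contextB, and fast-path processing maps it back to the object.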

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/qbman/qman.c        | 99 ++++++++++++++++++++++++++++++-
 drivers/bus/dpaa/base/qbman/qman_driver.c |  7 ++-
 drivers/bus/dpaa/base/qbman/qman_priv.h   |  7 +++
 drivers/bus/dpaa/include/fsl_qman.h       | 12 ++++
 4 files changed, 122 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 9b1630b..8c8d270 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -176,6 +176,65 @@ static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
 	return fqtree_find(&p->retire_table, fqid);
 }
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+static void **qman_fq_lookup_table;
+static size_t qman_fq_lookup_table_size;
+
+int qman_setup_fq_lookup_table(size_t num_entries)
+{
+	/* Allocate 1 more entry since the first entry is not used */
+	num_entries++;
+	qman_fq_lookup_table = vmalloc((num_entries * sizeof(void *)));
+	if (!qman_fq_lookup_table) {
+		pr_err("QMan: Could not allocate fq lookup table\n");
+		return -ENOMEM;
+	}
+	memset(qman_fq_lookup_table, 0, num_entries * sizeof(void *));
+	qman_fq_lookup_table_size = num_entries;
+	pr_debug("QMan: Allocated lookup table at %p, entry count %lu\n",
+		 qman_fq_lookup_table,
+		 (unsigned long)qman_fq_lookup_table_size);
+	return 0;
+}
+
+/* global structure that maintains fq object mapping */
+static DEFINE_SPINLOCK(fq_hash_table_lock);
+
+static int find_empty_fq_table_entry(u32 *entry, struct qman_fq *fq)
+{
+	u32 i;
+
+	spin_lock(&fq_hash_table_lock);
+	/* Can't use index zero because it has a special meaning
+	 * in the context_b field.
+	 */
+	for (i = 1; i < qman_fq_lookup_table_size; i++) {
+		if (qman_fq_lookup_table[i] == NULL) {
+			*entry = i;
+			qman_fq_lookup_table[i] = fq;
+			spin_unlock(&fq_hash_table_lock);
+			return 0;
+		}
+	}
+	spin_unlock(&fq_hash_table_lock);
+	return -ENOMEM;
+}
+
+static void clear_fq_table_entry(u32 entry)
+{
+	spin_lock(&fq_hash_table_lock);
+	DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
+	qman_fq_lookup_table[entry] = NULL;
+	spin_unlock(&fq_hash_table_lock);
+}
+
+static inline struct qman_fq *get_fq_table_entry(u32 entry)
+{
+	DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
+	return qman_fq_lookup_table[entry];
+}
+#endif
+
 static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
 {
 	/* Byteswap the FQD to HW format */
@@ -766,8 +825,13 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 				break;
 			case QM_MR_VERB_FQPN:
 				/* Parked */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+				fq = get_fq_table_entry(
+					be32_to_cpu(msg->fq.contextB));
+#else
 				fq = (void *)(uintptr_t)
 					be32_to_cpu(msg->fq.contextB);
+#endif
 				fq_state_change(p, fq, msg, verb);
 				if (fq->cb.fqs)
 					fq->cb.fqs(p, fq, &swapped_msg);
@@ -792,7 +856,11 @@ static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 			}
 		} else {
 			/* Its a software ERN */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(be32_to_cpu(msg->ern.tag));
+#else
 			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+#endif
 			fq->cb.ern(p, fq, &swapped_msg);
 		}
 		num++;
@@ -907,7 +975,11 @@ static inline unsigned int __poll_portal_fast(struct qman_portal *p,
 				clear_vdqcr(p, fq);
 		} else {
 			/* SDQCR: context_b points to the FQ */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			fq = get_fq_table_entry(dq->contextB);
+#else
 			fq = (void *)(uintptr_t)dq->contextB;
+#endif
 			/* Now let the callback do its stuff */
 			res = fq->cb.dqrr(p, fq, dq);
 			/*
@@ -1119,7 +1191,12 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
 	fq->flags = flags;
 	fq->state = qman_fq_state_oos;
 	fq->cgr_groupid = 0;
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	if (unlikely(find_empty_fq_table_entry(&fq->key, fq))) {
+		pr_err("QMan: no empty FQ lookup table entry available\n");
+		return -ENOMEM;
+	}
+#endif
 	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
 		return 0;
 	/* Everything else is AS_IS support */
@@ -1193,7 +1270,9 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
 	case qman_fq_state_oos:
 		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
 			qman_release_fqid(fq->fqid);
-
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		clear_fq_table_entry(fq->key);
+#endif
 		return;
 	default:
 		break;
@@ -1258,7 +1337,11 @@ int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
 		dma_addr_t phys_fq;
 
 		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		mcc->initfq.fqd.context_b = fq->key;
+#else
 		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+#endif
 		/*
 		 *  and the physical address - NB, if the user wasn't trying to
 		 * set CONTEXTA, clear the stashing settings.
@@ -1419,7 +1502,11 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
 			msg.verb = QM_MR_VERB_FQRNI;
 			msg.fq.fqs = mcr->alterfq.fqs;
 			msg.fq.fqid = fq->fqid;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+			msg.fq.contextB = fq->key;
+#else
 			msg.fq.contextB = (u32)(uintptr_t)fq;
+#endif
 			fq->cb.fqs(p, fq, &msg);
 		}
 	} else if (res == QM_MCR_RESULT_PENDING) {
@@ -1861,7 +1948,11 @@ static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
 					QM_EQCR_DCA_PARK : 0) |
 			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
 	eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	eq->tag = cpu_to_be32(fq->key);
+#else
 	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 	eq->fd = *fd;
 	cpu_to_hw_fd(&eq->fd);
 	return eq;
@@ -1907,7 +1998,11 @@ int qman_enqueue_multi(struct qman_fq *fq,
 	/* try to send as many frames as possible */
 	while (eqcr->available && frames_to_send--) {
 		eq->fqid = cpu_to_be32(fq->fqid);
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+		eq->tag = cpu_to_be32(fq->key);
+#else
 		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+#endif
 		eq->fd.opaque_addr = fd->opaque_addr;
 		eq->fd.addr = cpu_to_be40(fd->addr);
 		eq->fd.status = cpu_to_be32(fd->status);
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 90fb130..7a68896 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -279,5 +279,10 @@ int qman_global_init(void)
 	else
 		qman_clk = be32_to_cpu(*clk);
 
-	return ret;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	ret = qman_setup_fq_lookup_table(CONFIG_FSL_QMAN_FQ_LOOKUP_MAX);
+	if (ret)
+		return ret;
+#endif
+	return 0;
 }
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index 4a11e40..3e1d7f9 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -197,6 +197,13 @@ void qm_set_liodns(struct qm_portal_config *pcfg);
 int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
 		       struct qm_mcr_cgrtestwrite *result);
 
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+/* If an fq object pointer is wider than the 32-bit context_b field,
+ * then a lookup table is required to map between the two.
+ */
+int qman_setup_fq_lookup_table(size_t num_entries);
+#endif
+
 /*   QMan s/w corenet portal, low-level i/face	 */
 
 /*
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 85ae13b..eedfd7e 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -46,6 +46,15 @@ extern "C" {
 
 #include <dpaa_rbtree.h>
 
+/* FQ lookups (turn this on for 64-bit user-space) */
+#if (__WORDSIZE == 64)
+#define CONFIG_FSL_QMAN_FQ_LOOKUP
+/* if FQ lookups are supported, this controls the number of initialised,
+ * s/w-consumed FQs that can be supported at any one time.
+ */
+#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
+#endif
+
 /* Last updated for v00.800 of the BG */
 
 /* Hardware constants */
@@ -1228,6 +1237,9 @@ struct qman_fq {
 	enum qman_fq_state state;
 	int cgr_groupid;
 	struct rb_node node;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	u32 key;
+#endif
 };
 
 /*
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 14/40] bus/dpaa: add BMan hardware interfaces
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (12 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 13/40] bus/dpaa: support FMAN frame queue lookup Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
                             ` (27 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal
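
This patch adds the BMan portal routines (creation and teardown of
CPU-affine portals) and the buffer pool API: pool create/free, buffer
acquire and release, and pool state queries. As a small illustrative
fragment:

    /* how many free buffers does the hardware report for this pool? */
    u32 avail = bman_query_free_buffers(pool);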

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |   1 +
 drivers/bus/dpaa/base/qbman/bman.c        | 394 +++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman.h        | 550 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/bman_driver.c |  12 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |  16 +
 5 files changed, 973 insertions(+)
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/bman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index e1415e4..61b6432 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -63,6 +63,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/bman.c \
 	base/qbman/bman_driver.c \
 	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
diff --git a/drivers/bus/dpaa/base/qbman/bman.c b/drivers/bus/dpaa/base/qbman/bman.c
new file mode 100644
index 0000000..0480caa
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.c
@@ -0,0 +1,394 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "bman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define RCR_THRESH	2	/* reread h/w CI when running out of space */
+#define IRQNAME		"BMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "BMan portal %d" */
+
+struct bman_portal {
+	struct bm_portal p;
+	/* 2-element array. pools[0] is mask, pools[1] is snapshot. */
+	struct bman_depletion *pools;
+	int thresh_set;
+	unsigned long irq_sources;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct bm_portal_config *config;
+	char irqname[MAX_IRQNAME];
+};
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static RTE_DEFINE_PER_LCORE(struct bman_portal, bman_affine_portal);
+
+static inline struct bman_portal *get_affine_portal(void)
+{
+	return &RTE_PER_LCORE(bman_affine_portal);
+}
+
+/*
+ * This object type refers to a pool, it isn't *the* pool. There may be
+ * more than one such object per BMan buffer pool, e.g. if different users of
+ * the pool are operating via different portals.
+ */
+struct bman_pool {
+	struct bman_pool_params params;
+	/* Used for hash-table admin when using depletion notifications. */
+	struct bman_portal *portal;
+	struct bman_pool *next;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	atomic_t in_use;
+#endif
+};
+
+static inline
+struct bman_portal *bman_create_portal(struct bman_portal *portal,
+				       const struct bm_portal_config *c)
+{
+	struct bm_portal *p;
+	const struct bman_depletion *pools = &c->mask;
+	int ret;
+	u8 bpid = 0;
+
+	p = &portal->p;
+	/*
+	 * Prep the low-level portal struct with the mapped addresses from
+	 * the config; everything that follows depends on it, and "config"
+	 * is kept mostly for (de)reference.
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	if (bm_rcr_init(p, bm_rcr_pvb, bm_rcr_cce)) {
+		pr_err("Bman RCR initialisation failed\n");
+		return NULL;
+	}
+	if (bm_mc_init(p)) {
+		pr_err("Bman MC initialisation failed\n");
+		goto fail_mc;
+	}
+	portal->pools = kmalloc(2 * sizeof(*pools), GFP_KERNEL);
+	if (!portal->pools)
+		goto fail_pools;
+	portal->pools[0] = *pools;
+	bman_depletion_init(portal->pools + 1);
+	while (bpid < bman_pool_max) {
+		/*
+		 * Default to all BPIDs disabled, we enable as required at
+		 * run-time.
+		 */
+		bm_isr_bscn_mask(p, bpid, 0);
+		bpid++;
+	}
+	portal->slowpoll = 0;
+	/* Write-to-clear any stale interrupt status bits */
+	bm_isr_disable_write(p, 0xffffffff);
+	portal->irq_sources = 0;
+	bm_isr_enable_write(p, portal->irq_sources);
+	bm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, NULL, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need RCR to be empty before continuing */
+	ret = bm_rcr_get_fill(p);
+	if (ret) {
+		pr_err("Bman RCR unclean\n");
+		goto fail_rcr_empty;
+	}
+	/* Success */
+	portal->config = c;
+
+	bm_isr_disable_write(p, 0);
+	bm_isr_uninhibit(p);
+	return portal;
+fail_rcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->pools);
+fail_pools:
+	bm_mc_finish(p);
+fail_mc:
+	bm_rcr_finish(p);
+	return NULL;
+}
+
+struct bman_portal *
+bman_create_affine_portal(const struct bm_portal_config *c)
+{
+	struct bman_portal *portal = get_affine_portal();
+
+	/* This function is called from a context that is already affine to
+	 * a CPU, in other words the thread is non-migratable to other CPUs.
+	 */
+	portal = bman_create_portal(portal, c);
+	if (portal) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		spin_unlock(&affine_mask_lock);
+	}
+	return portal;
+}
+
+static inline
+void bman_destroy_portal(struct bman_portal *bm)
+{
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bm_rcr_cce_update(&bm->p);
+	bm_rcr_cce_update(&bm->p);
+
+	free_irq(pcfg->irq, bm);
+
+	kfree(bm->pools);
+	bm_mc_finish(&bm->p);
+	bm_rcr_finish(&bm->p);
+	bm->config = NULL;
+}
+
+const struct
+bm_portal_config *bman_destroy_affine_portal(void)
+{
+	struct bman_portal *bm = get_affine_portal();
+	const struct bm_portal_config *pcfg;
+
+	pcfg = bm->config;
+	bman_destroy_portal(bm);
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(pcfg->cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int
+bman_get_portal_index(void)
+{
+	struct bman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+static const u32 zero_thresholds[4] = {0, 0, 0, 0};
+
+struct bman_pool *bman_new_pool(const struct bman_pool_params *params)
+{
+	struct bman_pool *pool = NULL;
+	u32 bpid;
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID) {
+		int ret = bman_alloc_bpid(&bpid);
+
+		if (ret)
+			return NULL;
+	} else {
+		if (params->bpid >= bman_pool_max)
+			return NULL;
+		bpid = params->bpid;
+	}
+	if (params->flags & BMAN_POOL_FLAG_THRESH) {
+		int ret = bm_pool_set(bpid, params->thresholds);
+
+		if (ret)
+			goto err;
+	}
+
+	pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		goto err;
+	pool->params = *params;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	atomic_set(&pool->in_use, 1);
+#endif
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		pool->params.bpid = bpid;
+
+	return pool;
+err:
+	if (params->flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(bpid, zero_thresholds);
+
+	if (params->flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(bpid);
+	kfree(pool);
+
+	return NULL;
+}
+
+void bman_free_pool(struct bman_pool *pool)
+{
+	if (pool->params.flags & BMAN_POOL_FLAG_THRESH)
+		bm_pool_set(pool->params.bpid, zero_thresholds);
+	if (pool->params.flags & BMAN_POOL_FLAG_DYNAMIC_BPID)
+		bman_release_bpid(pool->params.bpid);
+	kfree(pool);
+}
+
+const struct bman_pool_params *bman_get_params(const struct bman_pool *pool)
+{
+	return &pool->params;
+}
+
+static void update_rcr_ci(struct bman_portal *p, int avail)
+{
+	if (avail)
+		bm_rcr_cce_prefetch(&p->p);
+	else
+		bm_rcr_cce_update(&p->p);
+}
+
+#define BMAN_BUF_MASK 0x0000fffffffffffful
+int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p;
+	struct bm_rcr_entry *r;
+	u32 i = num - 1;
+	u8 avail;
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_NO_RELEASE)
+		return -EINVAL;
+#endif
+
+	p = get_affine_portal();
+	avail = bm_rcr_get_avail(&p->p);
+	if (avail < 2)
+		update_rcr_ci(p, avail);
+	r = bm_rcr_start(&p->p);
+	if (unlikely(!r))
+		return -EBUSY;
+
+	/*
+	 * we can copy all but the first entry, as this can trigger badness
+	 * with the valid-bit
+	 */
+	r->bufs[0].opaque =
+		cpu_to_be64(((u64)pool->params.bpid << 48) |
+			    (bufs[0].opaque & BMAN_BUF_MASK));
+	if (i) {
+		for (i = 1; i < num; i++)
+			r->bufs[i].opaque =
+				cpu_to_be64(bufs[i].opaque & BMAN_BUF_MASK);
+	}
+
+	bm_rcr_pvb_commit(&p->p, BM_RCR_VERB_CMD_BPID_SINGLE |
+			  (num & BM_RCR_VERB_BUFCOUNT_MASK));
+
+	return 0;
+}
+
+int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
+		 u32 flags __maybe_unused)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_command *mcc;
+	struct bm_mc_result *mcr;
+	int ret, i;
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (!num || (num > 8))
+		return -EINVAL;
+	if (pool->params.flags & BMAN_POOL_FLAG_ONLY_RELEASE)
+		return -EINVAL;
+#endif
+
+	mcc = bm_mc_start(&p->p);
+	mcc->acquire.bpid = pool->params.bpid;
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_ACQUIRE |
+			(num & BM_MCC_VERB_ACQUIRE_BUFCOUNT));
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	ret = mcr->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT;
+	if (bufs) {
+		for (i = 0; i < num; i++)
+			bufs[i].opaque =
+				be64_to_cpu(mcr->acquire.bufs[i].opaque);
+	}
+	if (ret != num)
+		ret = -ENOMEM;
+	return ret;
+}
+
+int bman_query_pools(struct bm_pool_state *state)
+{
+	struct bman_portal *p = get_affine_portal();
+	struct bm_mc_result *mcr;
+
+	bm_mc_start(&p->p);
+	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_QUERY);
+	while (!(mcr = bm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & BM_MCR_VERB_CMD_MASK) ==
+		    BM_MCR_VERB_CMD_QUERY);
+	*state = mcr->query;
+	state->as.state.state[0] = be32_to_cpu(state->as.state.state[0]);
+	state->as.state.state[1] = be32_to_cpu(state->as.state.state[1]);
+	state->ds.state.state[0] = be32_to_cpu(state->ds.state.state[0]);
+	state->ds.state.state[1] = be32_to_cpu(state->ds.state.state[1]);
+	return 0;
+}
+
+u32 bman_query_free_buffers(struct bman_pool *pool)
+{
+	return bm_pool_free_buffers(pool->params.bpid);
+}
+
+int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds)
+{
+	u32 bpid;
+
+	bpid = bman_get_params(pool)->bpid;
+
+	return bm_pool_set(bpid, thresholds);
+}
+
+int bman_shutdown_pool(u32 bpid)
+{
+	struct bman_portal *p = get_affine_portal();
+	return bm_shutdown_pool(&p->p, bpid);
+}
diff --git a/drivers/bus/dpaa/base/qbman/bman.h b/drivers/bus/dpaa/base/qbman/bman.h
new file mode 100644
index 0000000..4b088da
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/bman.h
@@ -0,0 +1,550 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2010-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BMAN_H
+#define __BMAN_H
+
+#include "bman_priv.h"
+
+/* Cache-inhibited register offsets */
+#define BM_REG_RCR_PI_CINH	0x3000
+#define BM_REG_RCR_CI_CINH	0x3100
+#define BM_REG_RCR_ITR		0x3200
+#define BM_REG_CFG		0x3300
+#define BM_REG_SCN(n)		(0x3400 + ((n) << 6))
+#define BM_REG_ISR		0x3e00
+#define BM_REG_IIR              0x3ec0
+
+/* Cache-enabled register offsets */
+#define BM_CL_CR		0x0000
+#define BM_CL_RR0		0x0100
+#define BM_CL_RR1		0x0140
+#define BM_CL_RCR		0x1000
+#define BM_CL_RCR_PI_CENA	0x3000
+#define BM_CL_RCR_CI_CENA	0x3100
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrade performance. Hence the use of the __raw_*() interfaces, which
+ * simply ensure that the compiler treats the portal registers as volatile (ie.
+ * non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __bm_in(bm, o)		be32_to_cpu(__raw_readl((bm)->ci + (o)))
+#define __bm_out(bm, o, val)    __raw_writel(cpu_to_be32(val), \
+					     (bm)->ci + (o))
+#define bm_in(reg)		__bm_in(&portal->addr, BM_REG_##reg)
+#define bm_out(reg, val)	__bm_out(&portal->addr, BM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __bm_cl_touch_ro(bm, o) dcbt_ro((bm)->ce + (o))
+#define __bm_cl_touch_rw(bm, o) dcbt_rw((bm)->ce + (o))
+#define __bm_cl_in(bm, o)	be32_to_cpu(__raw_readl((bm)->ce + (o)))
+#define __bm_cl_out(bm, o, val) \
+	do { \
+		u32 *__tmpclout = (bm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __bm_cl_invalidate(bm, o) dccivac((bm)->ce + (o))
+#define bm_cl_touch_ro(reg) __bm_cl_touch_ro(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_touch_rw(reg) __bm_cl_touch_rw(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_in(reg)	    __bm_cl_in(&portal->addr, BM_CL_##reg##_CENA)
+#define bm_cl_out(reg, val) __bm_cl_out(&portal->addr, BM_CL_##reg##_CENA, val)
+#define bm_cl_invalidate(reg)\
+	__bm_cl_invalidate(&portal->addr, BM_CL_##reg##_CENA)
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 bm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
+
+/* Portal modes.
+ *   Enum types;
+ *     pmode == production mode
+ *     cmode == consumption mode,
+ *   Enum values use 3 letter codes. First letter matches the portal mode,
+ *   remaining two letters indicate;
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ */
+enum bm_rcr_pmode {		/* matches BCSP_CFG::RPM */
+	bm_rcr_pci = 0,		/* PI index, cache-inhibited */
+	bm_rcr_pce = 1,		/* PI index, cache-enabled */
+	bm_rcr_pvb = 2		/* valid-bit */
+};
+
+enum bm_rcr_cmode {		/* s/w-only */
+	bm_rcr_cci,		/* CI index, cache-inhibited */
+	bm_rcr_cce		/* CI index, cache-enabled */
+};
+
+/* --- Portal structures --- */
+
+#define BM_RCR_SIZE		8
+
+struct bm_rcr {
+	struct bm_rcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	u32 busy;
+	enum bm_rcr_pmode pmode;
+	enum bm_rcr_cmode cmode;
+#endif
+};
+
+struct bm_mc {
+	struct bm_mc_command *cr;
+	struct bm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	enum {
+		/* Can only be _mc_start()ed */
+		mc_idle,
+		/* Can only be _mc_commit()ed or _mc_abort()ed */
+		mc_user,
+		/* Can only be _mc_retry()ed */
+		mc_hw
+	} state;
+#endif
+};
+
+struct bm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct bm_portal {
+	struct bm_addr addr;
+	struct bm_rcr rcr;
+	struct bm_mc mc;
+	struct bm_portal_config config;
+} ____cacheline_aligned;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define RCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(BM_RCR_SIZE << 6)))
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 RCR_PTR2IDX(struct bm_rcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (BM_RCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void RCR_INC(struct bm_rcr *rcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates
+	 * fast code with essentially no branching overheads. We increment to
+	 * the next RCR pointer and handle overflow and 'vbit'.
+	 */
+	struct bm_rcr_entry *partial = rcr->cursor + 1;
+
+	rcr->cursor = RCR_CARRYCLEAR(partial);
+	if (partial != rcr->cursor)
+		rcr->vbit ^= BM_RCR_VERB_VBIT;
+}
+
+static inline int bm_rcr_init(struct bm_portal *portal, enum bm_rcr_pmode pmode,
+			      __maybe_unused enum bm_rcr_cmode cmode)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct bm_rcr *rcr = &portal->rcr;
+	u32 cfg;
+	u8 pi;
+
+	rcr->ring = portal->addr.ce + BM_CL_RCR;
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	rcr->cursor = rcr->ring + pi;
+	rcr->vbit = (bm_in(RCR_PI_CINH) & BM_RCR_SIZE) ?  BM_RCR_VERB_VBIT : 0;
+	rcr->available = BM_RCR_SIZE - 1
+		- bm_cyc_diff(BM_RCR_SIZE, rcr->ci, pi);
+	rcr->ithresh = bm_in(RCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 0;
+	rcr->pmode = pmode;
+	rcr->cmode = cmode;
+#endif
+	cfg = (bm_in(CFG) & 0xffffffe0) | (pmode & 0x3); /* BCSP_CFG::RPM */
+	bm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void bm_rcr_finish(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 pi = bm_in(RCR_PI_CINH) & (BM_RCR_SIZE - 1);
+	u8 ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+
+	DPAA_ASSERT(!rcr->busy);
+	if (pi != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("losing uncommitted RCR entries\n");
+	if (ci != rcr->ci)
+		pr_crit("missing existing RCR completions\n");
+	if (rcr->ci != RCR_PTR2IDX(rcr->cursor))
+		pr_crit("RCR destroyed unquiesced\n");
+}
+
+static inline struct bm_rcr_entry *bm_rcr_start(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(!rcr->busy);
+	if (!rcr->available)
+		return NULL;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 1;
+#endif
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 0;
+#endif
+}
+
+static inline struct bm_rcr_entry *bm_rcr_pend_and_next(
+					struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode != bm_rcr_pvb);
+	if (rcr->available == 1)
+		return NULL;
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcr->cursor);
+	RCR_INC(rcr);
+	rcr->available--;
+	dcbz_64(rcr->cursor);
+	return rcr->cursor;
+}
+
+static inline void bm_rcr_pci_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pci);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	hwsync();
+	bm_out(RCR_PI_CINH, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	bm_cl_invalidate(RCR_PI);
+	bm_cl_touch_rw(RCR_PI);
+}
+
+static inline void bm_rcr_pce_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pce);
+	rcr->cursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	RCR_INC(rcr);
+	rcr->available--;
+	lwsync();
+	bm_cl_out(RCR_PI, RCR_PTR2IDX(rcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 0;
+#endif
+}
+
+static inline void bm_rcr_pvb_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	struct bm_rcr_entry *rcursor;
+
+	DPAA_ASSERT(rcr->busy);
+	DPAA_ASSERT(rcr->pmode == bm_rcr_pvb);
+	lwsync();
+	rcursor = rcr->cursor;
+	rcursor->__dont_write_directly__verb = myverb | rcr->vbit;
+	dcbf_64(rcursor);
+	RCR_INC(rcr);
+	rcr->available--;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	rcr->busy = 0;
+#endif
+}
+
+static inline u8 bm_rcr_cci_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cci);
+	rcr->ci = bm_in(RCR_CI_CINH) & (BM_RCR_SIZE - 1);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline void bm_rcr_cce_prefetch(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_rcr *rcr = &portal->rcr;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	bm_cl_touch_ro(RCR_CI);
+}
+
+static inline u8 bm_rcr_cce_update(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+	u8 diff, old_ci = rcr->ci;
+
+	DPAA_ASSERT(rcr->cmode == bm_rcr_cce);
+	rcr->ci = bm_cl_in(RCR_CI) & (BM_RCR_SIZE - 1);
+	bm_cl_invalidate(RCR_CI);
+	diff = bm_cyc_diff(BM_RCR_SIZE, old_ci, rcr->ci);
+	rcr->available += diff;
+	return diff;
+}
+
+static inline u8 bm_rcr_get_ithresh(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->ithresh;
+}
+
+static inline void bm_rcr_set_ithresh(struct bm_portal *portal, u8 ithresh)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	rcr->ithresh = ithresh;
+	bm_out(RCR_ITR, ithresh);
+}
+
+static inline u8 bm_rcr_get_avail(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return rcr->available;
+}
+
+static inline u8 bm_rcr_get_fill(struct bm_portal *portal)
+{
+	register struct bm_rcr *rcr = &portal->rcr;
+
+	return BM_RCR_SIZE - 1 - rcr->available;
+}
+
+/* --- Management command API --- */
+
+static inline int bm_mc_init(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + BM_CL_CR;
+	mc->rr = portal->addr.ce + BM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			BM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? BM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = mc_idle;
+#endif
+	return 0;
+}
+
+static inline void bm_mc_finish(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	if (mc->state != mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct bm_mc_command *bm_mc_start(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_idle);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void bm_mc_abort(struct bm_portal *portal)
+{
+	__maybe_unused register struct bm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == mc_user);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = mc_idle;
+#endif
+}
+
+static inline void bm_mc_commit(struct bm_portal *portal, u8 myverb)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = mc_hw;
+#endif
+}
+
+static inline struct bm_mc_result *bm_mc_result(struct bm_portal *portal)
+{
+	register struct bm_mc *mc = &portal->mc;
+	struct bm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= BM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+	mc->state = mc_idle;
+#endif
+	return rr;
+}
+
+#define SCN_REG(bpid) BM_REG_SCN((bpid) / 32)
+#define SCN_BIT(bpid) (0x80000000 >> (bpid & 31))
+static inline void bm_isr_bscn_mask(struct bm_portal *portal, u8 bpid,
+				    int enable)
+{
+	u32 val;
+
+	DPAA_ASSERT(bpid < bman_pool_max);
+	/* REG_SCN for bpid=0..31, REG_SCN+4 for bpid=32..63 */
+	val = __bm_in(&portal->addr, SCN_REG(bpid));
+	if (enable)
+		val |= SCN_BIT(bpid);
+	else
+		val &= ~SCN_BIT(bpid);
+	__bm_out(&portal->addr, SCN_REG(bpid), val);
+}
+
+static inline u32 __bm_isr_read(struct bm_portal *portal, enum bm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 6));
+#else
+	return __bm_in(&portal->addr, BM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __bm_isr_write(struct bm_portal *portal, enum bm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 6), val);
+#else
+	__bm_out(&portal->addr, BM_REG_ISR + (n << 2), val);
+#endif
+}
+
+/* Buffer Pool Cleanup */
+static inline int bm_shutdown_pool(struct bm_portal *p, u32 bpid)
+{
+	struct bm_mc_command *bm_cmd;
+	struct bm_mc_result *bm_res;
+
+	int aq_count = 0;
+	bool stop = false;
+
+	while (!stop) {
+		/* Acquire buffers until empty */
+		bm_cmd = bm_mc_start(p);
+		bm_cmd->acquire.bpid = bpid;
+		bm_mc_commit(p, BM_MCC_VERB_CMD_ACQUIRE |  1);
+		while (!(bm_res = bm_mc_result(p)))
+			cpu_relax();
+		if (!(bm_res->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT)) {
+			/* Pool is empty */
+			stop = true;
+		} else {
+			++aq_count;
+		}
+	}
+	return 0;
+}
+
+#endif /* __BMAN_H */
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
index fb3c50e..5c13a80 100644
--- a/drivers/bus/dpaa/base/qbman/bman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -65,6 +65,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct bman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -111,6 +112,14 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 	/* Use the IRQ FD as a unique IRQ number */
 	pcfg.irq = fd;
 
+	portal = bman_create_affine_portal(&pcfg);
+	if (!portal) {
+		pr_err("Bman portal initialisation failed (%d)",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	/* Set the IRQ number */
 	irq_map.type = dpaa_portal_bman;
 	irq_map.portal_cinh = map.addr.cinh;
@@ -120,10 +129,13 @@ static int fsl_bman_portal_init(uint32_t idx, int is_shared)
 
 static int fsl_bman_portal_finish(void)
 {
+	__maybe_unused const struct bm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = bman_destroy_affine_portal();
+	DPAA_BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
index 690576a..35dba7f 100644
--- a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -41,6 +41,22 @@
 #include "dpaa_sys.h"
 #include <process.h>
 #include <fsl_qman.h>
+#include <fsl_bman.h>
+
+int bman_alloc_bpid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_bpid, result, count, align, partial);
+}
+
+void bman_release_bpid_range(u32 bpid, u32 count)
+{
+	process_release(dpaa_id_bpid, bpid, count);
+}
+
+int bman_reserve_bpid_range(u32 bpid, u32 count)
+{
+	return process_reserve(dpaa_id_bpid, bpid, count);
+}
 
 int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
 {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 15/40] bus/dpaa: add fman flow control threshold setting
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (13 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
                             ` (26 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal
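
The new helpers wire per-port flow control to BMan buffer pool
depletion: fman_if_set_fc_threshold() enables buffer pool depletion
signalling in the port's BMI registers (fmbm_mpd) and programs the
actual watermarks via bm_pool_set_hw_threshold(). A minimal,
illustrative call (the numbers are placeholders, not recommendations):

    /* assert flow control when pool 7 falls below 256 free buffers,
     * de-assert once it refills past 1024
     */
    ret = fman_if_set_fc_threshold(fm_if, 1024, 256, 7);
    if (ret)
            pr_err("setting flow control threshold failed\n");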

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman_hw.c | 28 ++++++++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_fman.h  |  7 +++++++
 2 files changed, 35 insertions(+)

diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index a7ca661..077c17c 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -37,6 +37,7 @@
  */
 #include <fsl_fman.h>
 #include <fsl_fman_crc64.h>
+#include <fsl_bman.h>
 
 /* Instantiate the global variable that the inline CRC64 implementation (in
  * <fsl_fman.h>) depends on.
@@ -393,6 +394,33 @@ fman_if_set_bp(struct fman_if *fm_if, unsigned num __always_unused,
 }
 
 int
+fman_if_get_fc_threshold(struct fman_if *fm_if)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	return in_be32(fmbm_mpd);
+}
+
+int
+fman_if_set_fc_threshold(struct fman_if *fm_if, u32 high_water,
+			 u32 low_water, u32 bpid)
+{
+	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
+	unsigned int *fmbm_mpd;
+
+	assert(fman_ccsr_map_fd != -1);
+
+	fmbm_mpd = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_mpd;
+	out_be32(fmbm_mpd, FMAN_ENABLE_BPOOL_DEPLETION);
+	return bm_pool_set_hw_threshold(bpid, low_water, high_water);
+
+}
+
+int
 fman_if_get_fc_quanta(struct fman_if *fm_if)
 {
 	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index ac38082..95aee67 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -112,6 +112,13 @@ void fman_if_loopback_disable(struct fman_if *p);
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
+/* Get Flow Control threshold parameters on specific interface */
+int fman_if_get_fc_threshold(struct fman_if *fm_if);
+
+/* Enable and Set Flow Control threshold parameters on specific interface */
+int fman_if_set_fc_threshold(struct fman_if *fm_if,
+			u32 high_water, u32 low_water, u32 bpid);
+
 /* Get Flow Control pause quanta on specific interface */
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (14 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
                             ` (25 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Now that the QBMAN (QMan, BMan) and FMan drivers are available, this
patch integrates them with the DPAA bus driver, which uses them to scan
for devices and to invoke the probe callbacks registered by PMDs.
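
Any thread that performs QMan/BMan operations must first attach to a
software portal; rte_dpaa_portal_init() below wraps the per-thread
affinity and portal setup and is safe to call more than once per
thread. An illustrative usage sketch (error handling abbreviated):

    /* attach the calling I/O thread to a DPAA portal before any
     * enqueue/dequeue; repeated calls are no-ops per thread
     */
    ret = rte_dpaa_portal_init(NULL);
    if (ret)
            rte_exit(EXIT_FAILURE, "Cannot init DPAA portal (%d)\n", ret);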

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c               | 248 ++++++++++++++++++++++++++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  47 ++++++
 drivers/bus/dpaa/rte_dpaa_bus.h           |  25 +++
 3 files changed, 320 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index cc343b3..8017df3 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -63,9 +63,21 @@
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 int dpaa_logtype_bus;
 
 struct rte_dpaa_bus rte_dpaa_bus;
+struct netcfg_info *dpaa_netcfg;
+
+/* define a variable to hold the portal_key, once created.*/
+pthread_key_t dpaa_portal_key;
+
+RTE_DEFINE_PER_LCORE(bool, _dpaa_io);
 
 static inline void
 dpaa_add_to_device_list(struct rte_dpaa_device *dev)
@@ -79,11 +91,247 @@ dpaa_remove_from_device_list(struct rte_dpaa_device *dev)
 	TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, dev, next);
 }
 
+static void dpaa_clean_device_list(void);
+
+static int
+dpaa_create_device_list(void)
+{
+	int i;
+	int ret;
+	struct rte_dpaa_device *dev;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+
+	/* Creating Ethernet Devices */
+	for (i = 0; i < dpaa_netcfg->num_ethports; i++) {
+		dev = calloc(1, sizeof(struct rte_dpaa_device));
+		if (!dev) {
+			DPAA_BUS_LOG(ERR, "Failed to allocate ETH devices");
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+
+		cfg = &dpaa_netcfg->port_cfg[i];
+		fman_intf = cfg->fman_if;
+
+		/* Device identifiers */
+		dev->id.fman_id = fman_intf->fman_idx + 1;
+		dev->id.mac_id = fman_intf->mac_idx;
+		dev->device_type = FSL_DPAA_ETH;
+		dev->id.dev_id = i;
+
+		/* Create device name */
+		memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
+		sprintf(dev->name, "fm%d-mac%d", (fman_intf->fman_idx + 1),
+			fman_intf->mac_idx);
+		DPAA_BUS_LOG(DEBUG, "Device added: %s", dev->name);
+		dev->device.name = dev->name;
+
+		dpaa_add_to_device_list(dev);
+	}
+
+	rte_dpaa_bus.device_count = i;
+
+	return 0;
+
+cleanup:
+	dpaa_clean_device_list();
+	return ret;
+}
+
+static void
+dpaa_clean_device_list(void)
+{
+	struct rte_dpaa_device *dev = NULL;
+	struct rte_dpaa_device *tdev = NULL;
+
+	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+		free(dev);
+		dev = NULL;
+	}
+}
+
+/** XXX move this function into a separate file */
+static int
+_dpaa_portal_init(void *arg)
+{
+	cpu_set_t cpuset;
+	pthread_t id;
+	uint32_t cpu = rte_lcore_id();
+	int ret;
+	struct dpaa_portal *dpaa_io_portal;
+
+	BUS_INIT_FUNC_TRACE();
+
+	if ((uint64_t)arg == 1 || cpu == LCORE_ID_ANY)
+		cpu = rte_get_master_lcore();
+	else if (cpu >= RTE_MAX_LCORE)
+		/* the core id is not supported */
+		return -1;
+
+	/* Set CPU affinity for this thread */
+	CPU_ZERO(&cpuset);
+	CPU_SET(cpu, &cpuset);
+	id = pthread_self();
+	ret = pthread_setaffinity_np(id, sizeof(cpu_set_t), &cpuset);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "pthread_setaffinity_np failed on "
+			"core :%d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	/* Initialise bman thread portals */
+	ret = bman_thread_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "bman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "BMAN thread initialized");
+
+	/* Initialise qman thread portals */
+	ret = qman_thread_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "bman_thread_init failed on "
+			"core %d with ret: %d", cpu, ret);
+		bman_thread_finish();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
+
+	dpaa_io_portal = rte_malloc(NULL, sizeof(struct dpaa_portal),
+				    RTE_CACHE_LINE_SIZE);
+	if (!dpaa_io_portal) {
+		DPAA_BUS_LOG(ERR, "Unable to allocate memory");
+		bman_thread_finish();
+		qman_thread_finish();
+		return -ENOMEM;
+	}
+
+	dpaa_io_portal->qman_idx = qman_get_portal_index();
+	dpaa_io_portal->bman_idx = bman_get_portal_index();
+	dpaa_io_portal->tid = syscall(SYS_gettid);
+
+	ret = pthread_setspecific(dpaa_portal_key, (void *)dpaa_io_portal);
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "pthread_setspecific failed on "
+			    "core %d with ret: %d", cpu, ret);
+		dpaa_portal_finish(NULL);
+
+		return ret;
+	}
+
+	RTE_PER_LCORE(_dpaa_io) = true;
+
+	DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
+
+	return 0;
+}
+
+/*
+ * rte_dpaa_portal_init - Wrapper over _dpaa_portal_init with thread level check
+ * XXX Complete this
+ */
+int
+rte_dpaa_portal_init(void *arg)
+{
+	if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
+		return _dpaa_portal_init(arg);
+
+	return 0;
+}
+
+void
+dpaa_portal_finish(void *arg)
+{
+	struct dpaa_portal *dpaa_io_portal = (struct dpaa_portal *)arg;
+
+	if (!dpaa_io_portal) {
+		DPAA_BUS_LOG(DEBUG, "Portal already cleaned");
+		return;
+	}
+
+	bman_thread_finish();
+	qman_thread_finish();
+
+	pthread_setspecific(dpaa_portal_key, NULL);
+
+	rte_free(dpaa_io_portal);
+	dpaa_io_portal = NULL;
+
+	RTE_PER_LCORE(_dpaa_io) = false;
+}
+
+#define DPAA_DEV_PATH1 "/sys/devices/platform/soc/soc:fsl,dpaa"
+#define DPAA_DEV_PATH2 "/sys/devices/platform/fsl,dpaa"
+
 static int
 rte_dpaa_bus_scan(void)
 {
+	int ret;
+
 	BUS_INIT_FUNC_TRACE();
 
+	if ((access(DPAA_DEV_PATH1, F_OK) != 0) &&
+	    (access(DPAA_DEV_PATH2, F_OK) != 0)) {
+		RTE_LOG(DEBUG, EAL, "DPAA Bus not present. Skipping.\n");
+		return 0;
+	}
+
+	/* Load the device-tree driver */
+	ret = of_init();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "of_init failed with ret: %d", ret);
+		return -1;
+	}
+
+	/* Get the interface configurations from device-tree */
+	dpaa_netcfg = netcfg_acquire();
+	if (!dpaa_netcfg) {
+		DPAA_BUS_LOG(ERR, "netcfg_acquire failed");
+		return -EINVAL;
+	}
+
+	RTE_LOG(NOTICE, EAL, "DPAA Bus Detected\n");
+
+	if (!dpaa_netcfg->num_ethports) {
+		DPAA_BUS_LOG(INFO, "no network interfaces available");
+		/* This is not an error */
+		return 0;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "Bus: Address of netcfg=%p, Ethports=%d",
+		     dpaa_netcfg, dpaa_netcfg->num_ethports);
+
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dump_netcfg(dpaa_netcfg);
+#endif
+
+	DPAA_BUS_LOG(DEBUG, "Number of devices = %d\n",
+		     dpaa_netcfg->num_ethports);
+	ret = dpaa_create_device_list();
+	if (ret) {
+		DPAA_BUS_LOG(ERR, "Unable to create device list. (%d)", ret);
+		return ret;
+	}
+
+	/* Create the key, supplying a destructor that is invoked when a
+	 * portal-affined thread exits.
+	 */
+	ret = pthread_key_create(&dpaa_portal_key, dpaa_portal_finish);
+	if (ret) {
+		DPAA_BUS_LOG(DEBUG, "Unable to create pthread key. (%d)", ret);
+		dpaa_clean_device_list();
+		return ret;
+	}
+
+	DPAA_BUS_LOG(DEBUG, "dpaa_portal_key=%u, ret=%d\n",
+		    (unsigned int)dpaa_portal_key, ret);
+
 	return 0;
 }
 
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 9f41c77..853bc47 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,8 +1,55 @@
 DPDK_17.11 {
 	global:
 
+	bman_acquire;
+	bman_free_pool;
+	bman_get_params;
+	bman_global_init;
+	bman_new_pool;
+	bman_query_free_buffers;
+	bman_release;
+	dpaa_netcfg;
+	fman_ccsr_map_fd;
+	fman_dealloc_bufs_mask_hi;
+	fman_dealloc_bufs_mask_lo;
+	fman_if_add_mac_addr;
+	fman_if_clear_mac_addr;
+	fman_if_disable_rx;
+	fman_if_enable_rx;
+	fman_if_discard_rx_errors;
+	fman_if_get_fc_threshold;
+	fman_if_get_fc_quanta;
+	fman_if_get_fdoff;
+	fman_if_loopback_disable;
+	fman_if_loopback_enable;
+	fman_if_promiscuous_disable;
+	fman_if_promiscuous_enable;
+	fman_if_reset_mcast_filter_table;
+	fman_if_set_bp;
+	fman_if_set_fc_threshold;
+	fman_if_set_fc_quanta;
+	fman_if_set_fdoff;
+	fman_if_set_ic_params;
+	fman_if_set_maxfrm;
+	fman_if_set_mcast_filter_table;
+	fman_if_stats_get;
+	fman_if_stats_get_all;
+	fman_if_stats_reset;
+	fman_ip_rev;
+	netcfg_acquire;
+	netcfg_release;
+	qman_create_fq;
+	qman_dequeue;
+	qman_dqrr_consume;
+	qman_enqueue_multi;
+	qman_global_init;
+	qman_init_fq;
+	qman_set_vdq;
+	qman_reserve_fqid_range;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
+	rte_dpaa_mem_ptov;
+	rte_dpaa_portal_init;
 
 	local: *;
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 789882e..eafc944 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -35,6 +35,12 @@
 #include <rte_bus.h>
 #include <rte_mempool.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
 #define FSL_DPAA_BUS_NAME	"FSL_DPAA_BUS"
 
 #define DEV_TO_DPAA_DEVICE(ptr)	\
@@ -47,6 +53,9 @@ struct rte_dpaa_driver;
 TAILQ_HEAD(rte_dpaa_device_list, rte_dpaa_device);
 TAILQ_HEAD(rte_dpaa_driver_list, rte_dpaa_driver);
 
+/* Configuration variables exported from DPAA bus */
+extern struct netcfg_info *dpaa_netcfg;
+
 enum rte_dpaa_type {
 	FSL_DPAA_ETH = 1,
 	FSL_DPAA_CRYPTO,
@@ -131,6 +140,22 @@ void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
  */
 void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
 
+/**
+ * Initialize a DPAA portal
+ *
+ * @param arg
+ *	Per thread ID
+ *
+ * @return
+ *	0 in case of success, error otherwise
+ */
+int rte_dpaa_portal_init(void *arg);
+
+/**
+ * Cleanup a DPAA Portal
+ */
+void dpaa_portal_finish(void *arg);
+
 /** Helper for DPAA device registration from driver (eth, crypto) instance */
 #define RTE_PMD_REGISTER_DPAA(nm, dpaa_drv) \
 RTE_INIT(dpaainitfn_ ##nm); \
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 17/40] doc: add NXP DPAA PMD documentation
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (15 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
                             ` (24 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                       |   2 +
 doc/guides/nics/dpaa.rst          | 377 ++++++++++++++++++++++++++++++++++++++
 doc/guides/nics/features/dpaa.ini |   8 +
 doc/guides/nics/index.rst         |   1 +
 4 files changed, 388 insertions(+)
 create mode 100644 doc/guides/nics/dpaa.rst
 create mode 100644 doc/guides/nics/features/dpaa.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index c566962..dad876f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -412,6 +412,8 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: doc/guides/nics/dpaa.rst
+F: doc/guides/nics/features/dpaa.ini
 
 NXP dpaa2
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
new file mode 100644
index 0000000..7d054d7
--- /dev/null
+++ b/doc/guides/nics/dpaa.rst
@@ -0,0 +1,377 @@
+..  BSD LICENSE
+    Copyright 2017 NXP.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of NXP nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+DPAA Poll Mode Driver
+=====================
+
+The DPAA NIC PMD (**librte_pmd_dpaa**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP DPAA** SoC family.
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+NXP DPAA (Data Path Acceleration Architecture - Gen 1)
+------------------------------------------------------
+
+This section provides an overview of the NXP DPAA architecture
+and how it is integrated into the DPDK.
+
+Contents summary
+
+- DPAA overview
+- DPAA driver architecture overview
+
+.. _dpaa_overview:
+
+DPAA Overview
+~~~~~~~~~~~~~
+
+Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.
+
+The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
+components on specific QorIQ series multicore processors. This architecture
+provides the infrastructure to support simplified sharing of networking
+interfaces and accelerators by multiple CPU cores, and the accelerators
+themselves.
+
+DPAA includes:
+
+- Cores
+- Network and packet I/O
+- Hardware offload accelerators
+- Infrastructure required to facilitate flow of packets between the components above
+
+Infrastructure components are:
+
+- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
+  It allows CPUs and other accelerators connected to the SoC datapath to
+  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
+  data exchange among CPUs and datapath accelerators.
+- The Buffer Manager (BMan) is a hardware buffer pool management block that
+  allows software and accelerators on the datapath to acquire and release
+  buffers in order to build frames; a usage sketch follows this list.
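+
+As an illustration, a minimal BMan buffer pool round trip through the
+internal ``drivers/bus/dpaa`` API added by this series might look as follows
+(a sketch only; the calling thread is assumed to already be attached to a
+portal via ``rte_dpaa_portal_init()``, and error handling is elided):
+
+.. code-block:: c
+
+    struct bman_pool_params params = {
+        .flags = BMAN_POOL_FLAG_DYNAMIC_BPID, /* let BMan pick a BPID */
+    };
+    struct bman_pool *pool = bman_new_pool(&params);
+    struct bm_buffer buf;
+
+    buf.opaque = buf_phys_addr;     /* 48-bit buffer address (assumed var) */
+    bman_release(pool, &buf, 1, 0); /* hand one buffer to hardware */
+    bman_acquire(pool, &buf, 1, 0); /* ...and take it back */
+    bman_free_pool(pool);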
+
+Hardware accelerators are:
+
+- SEC - Cryptographic accelerator
+- PME - Pattern matching engine
+
+The Network and packet I/O component:
+
+- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
+  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
+  distribution and policing. Each frame can be parsed and classified, and the
+  results may be attached to the frame. This metadata can be used to select
+  the particular QMan queue to which the packet is forwarded.
+
+
+DPAA DPDK - Poll Mode Driver Overview
+-------------------------------------
+
+This section provides an overview of the drivers for DPAA:
+
+* Bus driver and associated "DPAA infrastructure" drivers
+* Functional object drivers (such as Ethernet).
+
+A brief description of each driver is provided in the layout below as well
+as in the following sections.
+
+.. code-block:: console
+
+                                       +------------+
+                                       | DPDK DPAA  |
+                                       |    PMD     |
+                                       +-----+------+
+                                             |
+                                       +-----+------+       +---------------+
+                                       :  Ethernet  :.......| DPDK DPAA     |
+                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
+                   .                   +---+---+----+       |  (BMAN)       |
+                  .                        ^   |            +-----+---------+
+                 .                         |   |<enqueue,         .
+                .                          |   | dequeue>         .
+               .                           |   |                  .
+              .                        +---+---V----+             .
+             .      . . . . . . . . . .: Portal drv :             .
+            .      .                   :            :             .
+           .      .                    +-----+------+             .
+          .      .                     :   QMAN     :             .
+         .      .                      :  Driver    :             .
+    +----+------+-------+              +-----+------+             .
+    |   DPDK DPAA Bus   |                    |                    .
+    |   driver          |....................|.....................
+    |   /bus/dpaa       |                    |
+    +-------------------+                    |
+                                             |
+    ========================== HARDWARE =====|========================
+                                            PHY
+    =========================================|========================
+
+In the above representation, solid lines represent components which interface
+with the DPDK RTE Framework and dotted lines represent DPAA internal
+components.
+
+DPAA Bus driver
+~~~~~~~~~~~~~~~
+
+The DPAA bus driver is a ``rte_bus`` driver which scans the platform-like bus.
+Key functions include:
+
+- Scanning and parsing the various objects and adding them to their respective
+  device list.
+- Performing a probe of the available drivers against each scanned device
+- Creating the necessary Ethernet instance before passing control to the PMD
+  (see the registration sketch below)
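+
+A minimal registration sketch is shown below. The driver and callback names
+are hypothetical; the structure, type and macro come from the DPAA bus
+interface used by the PMD later in this series:
+
+.. code-block:: c
+
+   #include <rte_common.h>
+   #include <rte_dpaa_bus.h>
+
+   /* Hypothetical PMD hooking into the DPAA bus scan/probe flow; probe and
+    * remove are invoked by the bus for each matching scanned device.
+    */
+   static int
+   example_probe(struct rte_dpaa_driver *drv __rte_unused,
+                 struct rte_dpaa_device *dev __rte_unused)
+   {
+           /* Allocate and initialize an ethdev for 'dev' here */
+           return 0;
+   }
+
+   static int
+   example_remove(struct rte_dpaa_device *dev __rte_unused)
+   {
+           return 0;
+   }
+
+   static struct rte_dpaa_driver example_dpaa_pmd = {
+           .drv_type = FSL_DPAA_ETH,
+           .probe = example_probe,
+           .remove = example_remove,
+   };
+
+   RTE_PMD_REGISTER_DPAA(net_example, example_dpaa_pmd);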
+
+DPAA NIC Driver (PMD)
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
+between the RTE framework and the DPAA internal components/drivers.
+
+- Once devices have been identified by the DPAA Bus, each device is associated
+  with the PMD
+- The PMD is responsible for implementing the necessary glue layer between the
+  RTE APIs and the lower-level QMan and FMan blocks.
+  The Ethernet driver is bound to a FMAN port and implements the interfaces
+  needed to connect the DPAA network interface to the network stack.
+  Each FMAN Port corresponds to a DPDK network interface.
+
+
+Features
+^^^^^^^^
+
+Features of the DPAA PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+
+DPAA Mempool Driver
+~~~~~~~~~~~~~~~~~~~
+
+DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
+Manager.
+
+- Using the standard mempool operations of the RTE API, the mempool driver
+  interfaces with RTE to service mempool creation, deletion, buffer allocation
+  and deallocation requests (see the sketch below).
+- Each FMAN instance has a BMan pool attached to it during initialization.
+  Each Tx frame can be automatically released by the hardware if it was
+  allocated from this pool.
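+
+A minimal sketch, assuming a standard DPDK application, of explicitly binding
+a packet mbuf pool to these hardware-backed operations (the pool name and
+sizes are example values, not mandated by the driver):
+
+.. code-block:: c
+
+   #include <rte_lcore.h>
+   #include <rte_mbuf.h>
+   #include <rte_mempool.h>
+
+   struct rte_mempool *mp;
+
+   /* Create an empty pool and bind it to the "dpaa" (BMan) ops;
+    * error checks are omitted for brevity.
+    */
+   mp = rte_mempool_create_empty("pkt_pool", 8192,
+                                 RTE_MBUF_DEFAULT_BUF_SIZE, 256,
+                                 sizeof(struct rte_pktmbuf_pool_private),
+                                 rte_socket_id(), 0);
+   rte_mempool_set_ops_byname(mp, "dpaa", NULL);
+   rte_pktmbuf_pool_init(mp, NULL);
+   rte_mempool_populate_default(mp);
+   rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+When ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` is set to ``dpaa``, the usual
+``rte_pktmbuf_pool_create()`` call achieves the same binding implicitly.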
+
+
+Supported DPAA SoCs
+-------------------
+
+- LS1043A/LS1023A
+- LS1046A/LS1026A
+
+Prerequisites
+-------------
+
+There are four main prerequisites for executing the DPAA PMD on a DPAA
+compatible board:
+
+1. **ARM 64 Tool Chain**
+
+   For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
+
+2. **Linux Kernel**
+
+   It can be obtained from `NXP's Github hosting <https://github.com/qoriq-open-source/linux>`_.
+
+3. **Root file system**
+
+   Any *aarch64* supporting filesystem can be used. For example,
+   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
+   from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+4. **FMC Tool**
+
+   Before any DPDK application can be executed, the Frame Manager Configuration
+   Tool (FMC) needs to be executed to configure the queues. This
+   includes the queue state, RSS and other policies.
+   This tool can be obtained from `NXP (Freescale) Public Git Repository <http://git.freescale.com/git/cgit.cgi/ppc/sdk/fmc.git>`_.
+   This tool needs configuration files which are available in the
+   :ref:`DPDK Extra Scripts <extra_scripts>`, described below.
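+
+   A typical invocation looks like the following (the configuration and
+   policy file names are placeholders for files shipped with the extra
+   scripts):
+
+   .. code-block:: console
+
+      fmc -c <config.xml> -p <policy.xml> -a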
+
+Alternatively, the DPAA PMD can also be executed using images provided
+as part of the NXP SDK. The SDK includes all of the above prerequisites
+necessary to bring up a DPAA board.
+
+The following dependencies are not part of DPDK and must be installed
+separately:
+
+- **NXP Linux SDK**
+
+  The NXP Linux software development kit (SDK) includes support for the
+  family of QorIQ® ARM-architecture-based system-on-chip (SoC) processors
+  and corresponding boards.
+
+  It includes the Linux board support packages (BSPs) for NXP SoCs,
+  a fully operational tool chain, and kernel and board-specific modules.
+
+  The SDK and related information can be obtained from: `NXP QorIQ SDK <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+
+.. _extra_scripts:
+
+- **DPDK Extra Scripts**
+
+  DPAA based resources can be configured easily with the help of ready-made
+  scripts provided in the DPDK Extras repository.
+
+  `DPDK Extras Scripts <https://github.com/qoriq-open-source/dpdk-extras>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+**.
+- Supported architectures:  **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
+  to set up the basic DPDK environment.
+
+.. note::
+
+   Some parts of the dpaa bus code (the qbman and fman library routines) are
+   dual licensed (BSD & GPLv2).
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``n``)
+
+  Toggle compilation of the ``librte_bus_dpaa`` driver. By default it is
+  enabled only in the defconfig_arm64-dpaa-* config.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``n``)
+
+  Toggle compilation of the ``librte_pmd_dpaa`` driver. By default it is
+  enabled only in the defconfig_arm64-dpaa-* config.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)
+
+  Toggles display of bus configurations and enables a debugging queue
+  to fetch error (Rx/Tx) packets to the driver. By default, packets with
+  errors (like a wrong checksum) are dropped by the hardware.
+
+- ``CONFIG_RTE_LIBRTE_DPAA_HWDEBUG`` (default ``n``)
+
+  Enables debugging of the Queue and Buffer Manager layer which interacts
+  with the DPAA hardware.
+
+- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (default ``dpaa``)
+
+  This is not a DPAA-specific configuration - it is a generic RTE config.
+  For optimal performance and hardware utilization, it is expected that the
+  DPAA Mempool driver is used for mempools. For that, this configuration
+  needs to be set to ``dpaa``.
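+
+  For example, in the build configuration:
+
+  .. code-block:: console
+
+     CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"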
+
+Environment Variables
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA drivers use the following environment variables to configure their
+state during application initialization:
+
+- ``DPAA_NUM_RX_QUEUES`` (default 1)
+
+  This defines the number of Rx queues configured for an application, per
+  port. The hardware distributes received packets across this many queues.
+  If the application is configured to use fewer queues than set here, it
+  might result in packet loss (because of the distribution).
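+
+  For example, to distribute received traffic across four queues per port
+  before launching an application:
+
+  .. code-block:: console
+
+     export DPAA_NUM_RX_QUEUES=4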
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+#. Running testpmd:
+
+   Follow instructions available in the document
+   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+   to run testpmd.
+
+   Example output:
+
+   .. code-block:: console
+
+      ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
+        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
+
+      .....
+      EAL: Registered [pci] bus.
+      EAL: Registered [dpaa] bus.
+      EAL: Detected 4 lcore(s)
+      .....
+      EAL: dpaa: Bus scan completed
+      .....
+      Configuring Port 0 (socket 0)
+      Port 0: 00:00:00:00:00:01
+      Configuring Port 1 (socket 0)
+      Port 1: 00:00:00:00:00:02
+      .....
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Port 1 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+Limitations
+-----------
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+The DPAA drivers for DPDK can only work on the NXP SoCs listed in
+``Supported DPAA SoCs``.
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The DPAA SoC family supports a maximum packet length of 10240 bytes (jumbo
+frames). The value is fixed and cannot be changed. So, even when the
+``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
+lower than 10240, frames up to 10240 bytes can still reach the host interface.
+
+Multiprocess Support
+~~~~~~~~~~~~~~~~~~~~
+
+The current version of the DPAA driver doesn't support multi-process
+applications where I/O is performed using secondary processes. This feature
+will be implemented in subsequent versions.
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
new file mode 100644
index 0000000..9e8befc
--- /dev/null
+++ b/doc/guides/nics/features/dpaa.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'dpaa' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+ARMv8                = Y
+Usage doc            = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 36f4f3f..4115141 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -43,6 +43,7 @@ Network Interface Controller Drivers
     bnx2x
     bnxt
     cxgbe
+    dpaa
     dpaa2
     e1000em
     ena
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 18/40] bus/dpaa: add DPAA mempool logging macros
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (16 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 19/40] mempool/dpaa: support NXP DPAA Mempool Shreyansh Jain
                             ` (23 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c               |  5 +++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  1 +
 drivers/bus/dpaa/rte_dpaa_logs.h          | 20 ++++++++++++++++++++
 3 files changed, 26 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 8017df3..dc2b3ad 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -70,6 +70,7 @@
 #include <netcfg.h>
 
 int dpaa_logtype_bus;
+int dpaa_logtype_mempool;
 
 struct rte_dpaa_bus rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
@@ -452,4 +453,8 @@ dpaa_init_log(void)
 	dpaa_logtype_bus = rte_log_register("bus.dpaa");
 	if (dpaa_logtype_bus >= 0)
 		rte_log_set_level(dpaa_logtype_bus, RTE_LOG_NOTICE);
+
+	dpaa_logtype_mempool = rte_log_register("mempool.dpaa");
+	if (dpaa_logtype_mempool >= 0)
+		rte_log_set_level(dpaa_logtype_mempool, RTE_LOG_NOTICE);
 }
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 853bc47..a2394b8 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -8,6 +8,7 @@ DPDK_17.11 {
 	bman_new_pool;
 	bman_query_free_buffers;
 	bman_release;
+	dpaa_logtype_mempool;
 	dpaa_netcfg;
 	fman_ccsr_map_fd;
 	fman_dealloc_bufs_mask_hi;
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index cc10937..5335fd8 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -36,6 +36,7 @@
 #include <rte_log.h>
 
 extern int dpaa_logtype_bus;
+extern int dpaa_logtype_mempool;
 
 #define DPAA_BUS_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -62,4 +63,23 @@ extern int dpaa_logtype_bus;
 #define DPAA_BUS_WARN(fmt, args...) \
 	DPAA_BUS_LOG(WARNING, fmt, ## args)
 
+/* Mempool related logs */
+
+#define DPAA_MEMPOOL_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_mempool, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define MEMPOOL_INIT_FUNC_TRACE() DPAA_MEMPOOL_LOG(DEBUG, " >>")
+
+#define DPAA_MEMPOOL_DPDEBUG(fmt, args...) \
+	RTE_LOG_DP(DEBUG, PMD, fmt, ## args)
+#define DPAA_MEMPOOL_DEBUG(fmt, args...) \
+	DPAA_MEMPOOL_LOG(DEBUG, fmt, ## args)
+#define DPAA_MEMPOOL_ERR(fmt, args...) \
+	DPAA_MEMPOOL_LOG(ERR, fmt, ## args)
+#define DPAA_MEMPOOL_INFO(fmt, args...) \
+	DPAA_MEMPOOL_LOG(INFO, fmt, ## args)
+#define DPAA_MEMPOOL_WARN(fmt, args...) \
+	DPAA_MEMPOOL_LOG(WARNING, fmt, ## args)
+
 #endif /* _DPAA_LOGS_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 19/40] mempool/dpaa: support NXP DPAA Mempool
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (17 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 20/40] config: enable compilation of DPAA Mempool driver Shreyansh Jain
                             ` (22 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This Mempool driver works with the DPAA BMan hardware block. This block
manages data buffers in memory and provides an efficient interface to
other hardware and software components for buffer requests.

This patch adds support for BMan. Compilation will be enabled in
subsequent patches.
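
A note on the buffer address handling (an editorial sketch of what the
code below does, not additional code in this patch):

    /* enqueue (free): BMan is handed the physical address just past
     * the mbuf metadata:
     *     addr = rte_mempool_virt2phy(pool, obj) + meta_data_size;
     *
     * dequeue (alloc): the mbuf pointer is recovered by mapping the
     * address back to virtual and reversing that offset:
     *     mbuf = (struct rte_mbuf *)
     *            ((char *)rte_dpaa_mem_ptov(addr) - meta_data_size);
     */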

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                                       |   1 +
 drivers/mempool/Makefile                          |   2 +
 drivers/mempool/dpaa/Makefile                     |  58 +++++
 drivers/mempool/dpaa/dpaa_mempool.c               | 286 ++++++++++++++++++++++
 drivers/mempool/dpaa/dpaa_mempool.h               |  77 ++++++
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map |   8 +
 6 files changed, 432 insertions(+)
 create mode 100644 drivers/mempool/dpaa/Makefile
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.c
 create mode 100644 drivers/mempool/dpaa/dpaa_mempool.h
 create mode 100644 drivers/mempool/dpaa/rte_mempool_dpaa_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index dad876f..022715f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -412,6 +412,7 @@ NXP dpaa
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
+F: drivers/mempool/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index efd55f2..bfc5f00 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -32,6 +32,8 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mempool librte_ring
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
diff --git a/drivers/mempool/dpaa/Makefile b/drivers/mempool/dpaa/Makefile
new file mode 100644
index 0000000..25312a0
--- /dev/null
+++ b/drivers/mempool/dpaa/Makefile
@@ -0,0 +1,58 @@
+#   BSD LICENSE
+#
+#   Copyright 2016 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_dpaa.a
+
+CFLAGS := -I$(SRCDIR) $(CFLAGS)
+CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -D _GNU_SOURCE
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
+CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
+
+# versioning export map
+EXPORT_MAP := rte_mempool_dpaa_version.map
+
+# Library version
+LIBABIVER := 1
+
+# all sources are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa_mempool.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
new file mode 100644
index 0000000..921c36b
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -0,0 +1,286 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <dpaa_mempool.h>
+
+struct dpaa_bp_info rte_dpaa_bpid_info[DPAA_MAX_BPOOLS];
+
+static int
+dpaa_mbuf_create_pool(struct rte_mempool *mp)
+{
+	struct bman_pool *bp;
+	struct bm_buffer bufs[8];
+	struct dpaa_bp_info *bp_info;
+	uint8_t bpid;
+	int num_bufs = 0, ret = 0;
+	struct bman_pool_params params = {
+		.flags = BMAN_POOL_FLAG_DYNAMIC_BPID
+	};
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	bp = bman_new_pool(&params);
+	if (!bp) {
+		DPAA_MEMPOOL_ERR("bman_new_pool() failed");
+		return -ENODEV;
+	}
+	bpid = bman_get_params(bp)->bpid;
+
+	/* Drain the pool of anything already in it. */
+	do {
+		/* Acquire is all-or-nothing, so we drain in 8s,
+		 * then in 1s for the remainder.
+		 */
+		if (ret != 1)
+			ret = bman_acquire(bp, bufs, 8, 0);
+		if (ret < 8)
+			ret = bman_acquire(bp, bufs, 1, 0);
+		if (ret > 0)
+			num_bufs += ret;
+	} while (ret > 0);
+	if (num_bufs)
+		DPAA_MEMPOOL_WARN("drained %u bufs from BPID %d",
+				  num_bufs, bpid);
+
+	rte_dpaa_bpid_info[bpid].mp = mp;
+	rte_dpaa_bpid_info[bpid].bpid = bpid;
+	rte_dpaa_bpid_info[bpid].size = mp->elt_size;
+	rte_dpaa_bpid_info[bpid].bp = bp;
+	rte_dpaa_bpid_info[bpid].meta_data_size =
+		sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp);
+	rte_dpaa_bpid_info[bpid].dpaa_ops_index = mp->ops_index;
+
+	bp_info = rte_malloc(NULL,
+			     sizeof(struct dpaa_bp_info),
+			     RTE_CACHE_LINE_SIZE);
+	if (!bp_info) {
+		DPAA_MEMPOOL_WARN("Memory allocation failed for bp_info");
+		bman_free_pool(bp);
+		return -ENOMEM;
+	}
+
+	rte_memcpy(bp_info, (void *)&rte_dpaa_bpid_info[bpid],
+		   sizeof(struct dpaa_bp_info));
+	mp->pool_data = (void *)bp_info;
+
+	DPAA_MEMPOOL_INFO("BMAN pool created for bpid =%d", bpid);
+	return 0;
+}
+
+static void
+dpaa_mbuf_free_pool(struct rte_mempool *mp)
+{
+	struct dpaa_bp_info *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	if (bp_info) {
+		bman_free_pool(bp_info->bp);
+		DPAA_MEMPOOL_INFO("BMAN pool freed for bpid =%d",
+				  bp_info->bpid);
+		rte_free(mp->pool_data);
+		mp->pool_data = NULL;
+	}
+}
+
+static void
+dpaa_buf_free(struct dpaa_bp_info *bp_info, uint64_t addr)
+{
+	struct bm_buffer buf;
+	int ret;
+
+	DPAA_MEMPOOL_DEBUG("Free 0x%lx to bpid: %d", addr, bp_info->bpid);
+
+	bm_buffer_set64(&buf, addr);
+retry:
+	ret = bman_release(bp_info->bp, &buf, 1, 0);
+	if (ret) {
+		DPAA_MEMPOOL_DEBUG("BMAN busy. Retrying...");
+		cpu_spin(CPU_SPIN_BACKOFF_CYCLES);
+		goto retry;
+	}
+}
+
+static int
+dpaa_mbuf_free_bulk(struct rte_mempool *pool,
+		    void *const *obj_table,
+		    unsigned int n)
+{
+	struct dpaa_bp_info *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+	int ret;
+	unsigned int i = 0;
+
+	DPAA_MEMPOOL_DPDEBUG("Request to free %d buffers in bpid = %d",
+			     n, bp_info->bpid);
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
+				 ret);
+		return 0;
+	}
+
+	while (i < n) {
+		dpaa_buf_free(bp_info,
+			      (uint64_t)rte_mempool_virt2phy(pool,
+			      obj_table[i]) + bp_info->meta_data_size);
+		i = i + 1;
+	}
+
+	DPAA_MEMPOOL_DPDEBUG("freed %d buffers in bpid =%d",
+			     n, bp_info->bpid);
+
+	return 0;
+}
+
+static int
+dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
+		     void **obj_table,
+		     unsigned int count)
+{
+	struct rte_mbuf **m = (struct rte_mbuf **)obj_table;
+	struct bm_buffer bufs[DPAA_MBUF_MAX_ACQ_REL];
+	struct dpaa_bp_info *bp_info;
+	void *bufaddr;
+	int i, ret;
+	unsigned int n = 0;
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
+
+	DPAA_MEMPOOL_DPDEBUG("Request to alloc %d buffers in bpid = %d",
+			     count, bp_info->bpid);
+
+	if (unlikely(count >= (RTE_MEMPOOL_CACHE_MAX_SIZE * 2))) {
+		DPAA_MEMPOOL_ERR("Unable to allocate requested (%u) buffers",
+				 count);
+		return -1;
+	}
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
+				 ret);
+		return -1;
+	}
+
+	while (n < count) {
+		/* Acquire is all-or-nothing, so we acquire in batches of
+		 * DPAA_MBUF_MAX_ACQ_REL (8), then the remainder.
+		 */
+		if ((count - n) > DPAA_MBUF_MAX_ACQ_REL) {
+			ret = bman_acquire(bp_info->bp, bufs,
+					   DPAA_MBUF_MAX_ACQ_REL, 0);
+		} else {
+			ret = bman_acquire(bp_info->bp, bufs, count - n, 0);
+		}
+		/* In case fewer buffers than requested are available in
+		 * the pool, bman_acquire returns 0 (or a negative error).
+		 */
+		if (ret <= 0) {
+			DPAA_MEMPOOL_DPDEBUG("Buffer acquire failed (%d)",
+					     ret);
+			/* The API expects the exact number of requested
+			 * buffers. Release all buffers allocated so far.
+			 */
+			dpaa_mbuf_free_bulk(pool, obj_table, n);
+			return -ENOBUFS;
+		}
+		/* assigning mbuf from the acquired objects */
+		for (i = 0; (i < ret) && bufs[i].addr; i++) {
+			/* TODO-errata - observed that bufs may be null,
+			 * i.e. the first buffer is valid while the
+			 * remaining buffers may be null.
+			 */
+			bufaddr = (void *)rte_dpaa_mem_ptov(bufs[i].addr);
+			m[n] = (struct rte_mbuf *)((char *)bufaddr
+						- bp_info->meta_data_size);
+			DPAA_MEMPOOL_DPDEBUG("Vaddr (%p), mbuf (%p) from BMAN",
+					     (void *)bufaddr, (void *)m[n]);
+			n++;
+		}
+	}
+
+	DPAA_MEMPOOL_DPDEBUG("Allocated %d buffers from bpid=%d",
+			     n, bp_info->bpid);
+	return 0;
+}
+
+static unsigned int
+dpaa_mbuf_get_count(const struct rte_mempool *mp)
+{
+	struct dpaa_bp_info *bp_info;
+
+	MEMPOOL_INIT_FUNC_TRACE();
+
+	if (!mp || !mp->pool_data) {
+		DPAA_MEMPOOL_ERR("Invalid mempool provided");
+		return 0;
+	}
+
+	bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+	return bman_query_free_buffers(bp_info->bp);
+}
+
+struct rte_mempool_ops dpaa_mpool_ops = {
+	.name = "dpaa",
+	.alloc = dpaa_mbuf_create_pool,
+	.free = dpaa_mbuf_free_pool,
+	.enqueue = dpaa_mbuf_free_bulk,
+	.dequeue = dpaa_mbuf_alloc_bulk,
+	.get_count = dpaa_mbuf_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
diff --git a/drivers/mempool/dpaa/dpaa_mempool.h b/drivers/mempool/dpaa/dpaa_mempool.h
new file mode 100644
index 0000000..de33c0c
--- /dev/null
+++ b/drivers/mempool/dpaa/dpaa_mempool.h
@@ -0,0 +1,77 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of NXP nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_MEMPOOL_H__
+#define __DPAA_MEMPOOL_H__
+
+/* System headers */
+#include <stdio.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <unistd.h>
+
+#include <rte_mempool.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <fsl_usd.h>
+#include <fsl_bman.h>
+
+#define CPU_SPIN_BACKOFF_CYCLES               512
+
+/* total number of bpools on SoC */
+#define DPAA_MAX_BPOOLS	256
+
+/* Maximum release/acquire from BMAN */
+#define DPAA_MBUF_MAX_ACQ_REL  8
+
+struct dpaa_bp_info {
+	struct rte_mempool *mp;
+	struct bman_pool *bp;
+	uint32_t bpid;
+	uint32_t size;
+	uint32_t meta_data_size;
+	int32_t dpaa_ops_index;
+};
+
+#define DPAA_MEMPOOL_TO_POOL_INFO(__mp) \
+	((struct dpaa_bp_info *)__mp->pool_data)
+
+#define DPAA_MEMPOOL_TO_BPID(__mp) \
+	(((struct dpaa_bp_info *)__mp->pool_data)->bpid)
+
+extern struct dpaa_bp_info rte_dpaa_bpid_info[DPAA_MAX_BPOOLS];
+
+#define DPAA_BPID_TO_POOL_INFO(__bpid) (&rte_dpaa_bpid_info[__bpid])
+
+#endif
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
new file mode 100644
index 0000000..cc635c7
--- /dev/null
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -0,0 +1,8 @@
+DPDK_17.11 {
+	global:
+
+	rte_dpaa_bpid_info;
+	rte_dpaa_pool_table;
+
+	local: *;
+};
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 20/40] config: enable compilation of DPAA Mempool driver
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (18 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 19/40] mempool/dpaa: support NXP DPAA Mempool Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 21/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
                             ` (21 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

This patch adds the configuration necessary for compilation of the DPAA
Mempool driver into the DPAA-specific config file.
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=dpaa is also configured to allow
applications to use the DPAA mempool as the default.
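
For example, a typical native build sequence that picks up these options
(illustrative, not part of this patch):

    make config T=arm64-dpaa-linuxapp-gcc
    make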

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       | 1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/config/common_base b/config/common_base
index fc1cdca..fe287b0 100644
--- a/config/common_base
+++ b/config/common_base
@@ -303,6 +303,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 4d6b046..3e11718 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -50,3 +50,7 @@ CONFIG_RTE_PKTMBUF_HEADROOM=128
 CONFIG_RTE_LIBRTE_DPAA_BUS=y
 CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_DPAA_HWDEBUG=n
+
+# NXP DPAA Mempool
+CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 21/40] bus/dpaa: add DPAA PMD logging macros
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (19 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 20/40] config: enable compilation of DPAA Mempool driver Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 22/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
                             ` (20 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/dpaa_bus.c               |  5 +++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  1 +
 drivers/bus/dpaa/rte_dpaa_logs.h          | 22 ++++++++++++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index dc2b3ad..7ae5bfa 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -71,6 +71,7 @@
 
 int dpaa_logtype_bus;
 int dpaa_logtype_mempool;
+int dpaa_logtype_pmd;
 
 struct rte_dpaa_bus rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
@@ -457,4 +458,8 @@ dpaa_init_log(void)
 	dpaa_logtype_mempool = rte_log_register("mempool.dpaa");
 	if (dpaa_logtype_mempool >= 0)
 		rte_log_set_level(dpaa_logtype_mempool, RTE_LOG_NOTICE);
+
+	dpaa_logtype_pmd = rte_log_register("pmd.dpaa");
+	if (dpaa_logtype_pmd >= 0)
+		rte_log_set_level(dpaa_logtype_pmd, RTE_LOG_NOTICE);
 }
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index a2394b8..64a05a9 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -9,6 +9,7 @@ DPDK_17.11 {
 	bman_query_free_buffers;
 	bman_release;
 	dpaa_logtype_mempool;
+	dpaa_logtype_pmd;
 	dpaa_netcfg;
 	fman_ccsr_map_fd;
 	fman_dealloc_bufs_mask_hi;
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index 5335fd8..037c96b 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -37,6 +37,7 @@
 
 extern int dpaa_logtype_bus;
 extern int dpaa_logtype_mempool;
+extern int dpaa_logtype_pmd;
 
 #define DPAA_BUS_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -82,4 +83,25 @@ extern int dpaa_logtype_mempool;
 #define DPAA_MEMPOOL_WARN(fmt, args...) \
 	DPAA_MEMPOOL_LOG(WARNING, fmt, ## args)
 
+/* PMD related logs */
+
+#define DPAA_PMD_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, dpaa_logtype_pmd, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() DPAA_PMD_LOG(DEBUG, " >>")
+
+#define DPAA_PMD_DEBUG(fmt, args...) \
+	DPAA_PMD_LOG(DEBUG, fmt, ## args)
+#define DPAA_PMD_ERR(fmt, args...) \
+	DPAA_PMD_LOG(ERR, fmt, ## args)
+#define DPAA_PMD_INFO(fmt, args...) \
+	DPAA_PMD_LOG(INFO, fmt, ## args)
+#define DPAA_PMD_WARN(fmt, args...) \
+	DPAA_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP logs, toggled out at compile time if level is lower than current level */
+#define DPAA_DP_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+
 #endif /* _DPAA_LOGS_H_ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 22/40] net/dpaa: add NXP DPAA PMD driver skeleton
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (20 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 21/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 23/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
                             ` (19 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

A skeleton which is invoked after the bus device scan. It currently
fails to identify the device.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 MAINTAINERS                               |   1 +
 drivers/net/dpaa/Makefile                 |  57 +++++++
 drivers/net/dpaa/dpaa_ethdev.c            | 256 ++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h            | 127 +++++++++++++++
 drivers/net/dpaa/rte_pmd_dpaa_version.map |   4 +
 5 files changed, 445 insertions(+)
 create mode 100644 drivers/net/dpaa/Makefile
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.c
 create mode 100644 drivers/net/dpaa/dpaa_ethdev.h
 create mode 100644 drivers/net/dpaa/rte_pmd_dpaa_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 022715f..9eec984 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -413,6 +413,7 @@ M: Hemant Agrawal <hemant.agrawal@nxp.com>
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 F: drivers/bus/dpaa/
 F: drivers/mempool/dpaa/
+F: drivers/net/dpaa/
 F: doc/guides/nics/dpaa.rst
 F: doc/guides/nics/features/dpaa.ini
 
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
new file mode 100644
index 0000000..bb305ca
--- /dev/null
+++ b/drivers/net/dpaa/Makefile
@@ -0,0 +1,57 @@
+#   BSD LICENSE
+#
+#   Copyright 2017 NXP.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_SDK_DPAA=$(RTE_SDK)/drivers/net/dpaa
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa.a
+
+CFLAGS := -I$(SRCDIR) $(CFLAGS)
+CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK_DPAA)/
+CFLAGS += -I$(RTE_SDK_DPAA)/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
+
+EXPORT_MAP := rte_pmd_dpaa_version.map
+
+LIBABIVER := 1
+
+# Interfaces with DPDK
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+
+LDLIBS += -lrte_bus_dpaa
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
new file mode 100644
index 0000000..4543dfc
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -0,0 +1,256 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+
+#include <dpaa_ethdev.h>
+
+/* Keep track of whether QMAN and BMAN have been globally initialized */
+static int is_global_init;
+
+static int
+dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	/* Change tx callback to the real one */
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = NULL;
+}
+
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static struct eth_dev_ops dpaa_devops = {
+	.dev_configure		  = dpaa_eth_dev_configure,
+	.dev_start		  = dpaa_eth_dev_start,
+	.dev_stop		  = dpaa_eth_dev_stop,
+	.dev_close		  = dpaa_eth_dev_close,
+};
+
+/* Initialise a network interface */
+static int
+dpaa_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int dev_id;
+	struct rte_dpaa_device *dpaa_device;
+	struct dpaa_if *dpaa_intf;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
+	dev_id = dpaa_device->id.dev_id;
+	dpaa_intf = eth_dev->data->dev_private;
+
+	dpaa_intf->name = dpaa_device->name;
+
+	dpaa_intf->ifid = dev_id;
+
+	eth_dev->dev_ops = &dpaa_devops;
+
+	return 0;
+}
+
+static int
+dpaa_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (!dpaa_intf) {
+		DPAA_PMD_WARN("Already closed or not started");
+		return -1;
+	}
+
+	dpaa_eth_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	return 0;
+}
+
+static int
+rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv,
+	       struct rte_dpaa_device *dpaa_dev)
+{
+	int diag;
+	int ret;
+	struct rte_eth_dev *eth_dev;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* In case of secondary process, the device is already configured
+	 * and no further action is required, except portal initialization
+	 * and verifying secondary attachment to port name.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		eth_dev = rte_eth_dev_attach_secondary(dpaa_dev->name);
+		if (!eth_dev)
+			return -ENOMEM;
+		return 0;
+	}
+
+	if (!is_global_init) {
+		/* One time load of Qman/Bman drivers */
+		ret = qman_global_init();
+		if (ret) {
+			DPAA_PMD_ERR("QMAN initialization failed: %d",
+				     ret);
+			return ret;
+		}
+		ret = bman_global_init();
+		if (ret) {
+			DPAA_PMD_ERR("BMAN initialization failed: %d",
+				     ret);
+			return ret;
+		}
+
+		is_global_init = 1;
+	}
+
+	ret = rte_dpaa_portal_init((void *)1);
+	if (ret) {
+		DPAA_PMD_ERR("Unable to initialize portal");
+		return ret;
+	}
+
+	eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
+	if (eth_dev == NULL)
+		return -ENOMEM;
+
+	eth_dev->data->dev_private = rte_zmalloc(
+					"ethdev private structure",
+					sizeof(struct dpaa_if),
+					RTE_CACHE_LINE_SIZE);
+	if (!eth_dev->data->dev_private) {
+		DPAA_PMD_ERR("Cannot allocate memory for port data");
+		rte_eth_dev_release_port(eth_dev);
+		return -ENOMEM;
+	}
+
+	eth_dev->device = &dpaa_dev->device;
+	eth_dev->device->driver = &dpaa_drv->driver;
+	dpaa_dev->eth_dev = eth_dev;
+
+	/* Invoke PMD device initialization function */
+	diag = dpaa_dev_init(eth_dev);
+	if (diag == 0)
+		return 0;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+	return diag;
+}
+
+static int
+rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
+{
+	struct rte_eth_dev *eth_dev;
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev = dpaa_dev->eth_dev;
+	dpaa_dev_uninit(eth_dev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eth_dev->data->dev_private);
+
+	rte_eth_dev_release_port(eth_dev);
+
+	return 0;
+}
+
+static struct rte_dpaa_driver rte_dpaa_pmd = {
+	.drv_type = FSL_DPAA_ETH,
+	.probe = rte_dpaa_probe,
+	.remove = rte_dpaa_remove,
+};
+
+RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
new file mode 100644
index 0000000..2f25acb
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -0,0 +1,127 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2014-2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __DPAA_ETHDEV_H__
+#define __DPAA_ETHDEV_H__
+
+/* System headers */
+#include <stdbool.h>
+#include <rte_ethdev.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
+#define DPAA_MBUF_HW_ANNOTATION		64
+#define DPAA_FD_PTA_SIZE		64
+
+#if (DPAA_MBUF_HW_ANNOTATION + DPAA_FD_PTA_SIZE) > RTE_PKTMBUF_HEADROOM
+#error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM"
+#endif
+
+/* we will re-use the HEADROOM for annotation in RX */
+#define DPAA_HW_BUF_RESERVE	0
+#define DPAA_PACKET_LAYOUT_ALIGN	64
+
+/* Alignment to use for cpu-local structs to avoid coherency problems. */
+#define MAX_CACHELINE			64
+
+#define DPAA_MIN_RX_BUF_SIZE 512
+#define DPAA_MAX_RX_PKT_LEN  10240
+
+/* RX queue tail drop threshold
+ * currently considering 32 KB packets.
+ */
+#define CONG_THRESHOLD_RX_Q  (32 * 1024)
+
+/*max mac filter for memac(8) including primary mac addr*/
+#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
+
+/*Maximum number of slots available in TX ring*/
+#define MAX_TX_RING_SLOTS	8
+
+/* PCD frame queues */
+#define DPAA_PCD_FQID_START		0x400
+#define DPAA_PCD_FQID_MULTIPLIER	0x100
+#define DPAA_DEFAULT_NUM_PCD_QUEUES	1
+
+#define DPAA_IF_TX_PRIORITY		3
+#define DPAA_IF_RX_PRIORITY		4
+#define DPAA_IF_DEBUG_PRIORITY		7
+
+#define DPAA_IF_RX_ANNOTATION_STASH	1
+#define DPAA_IF_RX_DATA_STASH		1
+#define DPAA_IF_RX_CONTEXT_STASH		0
+
+/* Each "debug" FQ is represented by one of these */
+#define DPAA_DEBUG_FQ_RX_ERROR   0
+#define DPAA_DEBUG_FQ_TX_ERROR   1
+
+#define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
+		PKT_TX_IP_CKSUM |                \
+		PKT_TX_TCP_CKSUM |               \
+		PKT_TX_UDP_CKSUM)
+
+/* DPAA Frame descriptor macros */
+
+#define DPAA_FD_CMD_FCO			0x80000000
+/**< Frame queue Context Override */
+#define DPAA_FD_CMD_RPD			0x40000000
+/**< Read Prepended Data */
+#define DPAA_FD_CMD_UPD			0x20000000
+/**< Update Prepended Data */
+#define DPAA_FD_CMD_DTC			0x10000000
+/**< Do IP/TCP/UDP Checksum */
+#define DPAA_FD_CMD_DCL4C		0x10000000
+/**< Didn't calculate L4 Checksum */
+#define DPAA_FD_CMD_CFQ			0x00ffffff
+/**< Confirmation Frame Queue */
+
+/* Each network interface is represented by one of these */
+struct dpaa_if {
+	int valid;
+	char *name;
+	const struct fm_eth_port_cfg *cfg;
+	struct qman_fq *rx_queues;
+	struct qman_fq *tx_queues;
+	struct qman_fq debug_queues[2];
+	uint16_t nb_rx_queues;
+	uint16_t nb_tx_queues;
+	uint32_t ifid;
+	struct fman_if *fif;
+	struct dpaa_bp_info *bp_info;
+	struct rte_eth_fc_conf *fc_conf;
+};
+
+#endif
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
new file mode 100644
index 0000000..a70bd19
--- /dev/null
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -0,0 +1,4 @@
+DPDK_17.11 {
+
+	local: *;
+};
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 23/40] config: enable NXP DPAA PMD compilation
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (21 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 22/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 24/40] net/dpaa: support Tx and Rx queue setup Shreyansh Jain
                             ` (18 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 config/common_base                       | 1 +
 config/defconfig_arm64-dpaa-linuxapp-gcc | 3 +++
 drivers/net/Makefile                     | 2 ++
 mk/rte.app.mk                            | 4 ++++
 4 files changed, 10 insertions(+)

diff --git a/config/common_base b/config/common_base
index fe287b0..ca47615 100644
--- a/config/common_base
+++ b/config/common_base
@@ -304,6 +304,7 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
+CONFIG_RTE_LIBRTE_DPAA_PMD=n
 
 #
 # Compile NXP DPAA2 FSL-MC Bus
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index 3e11718..f59834c 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -54,3 +54,6 @@ CONFIG_RTE_LIBRTE_DPAA_HWDEBUG=n
 # NXP DPAA Mempool
 CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
 CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
+
+# Compile software NXP DPAA PMD
+CONFIG_RTE_LIBRTE_DPAA_PMD=y
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index d33c959..2bd42f8 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -51,6 +51,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DEPDIRS-bonding = $(core-libs) librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
 DEPDIRS-cxgbe = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa
+DEPDIRS-dpaa = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD) += dpaa2
 DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += e1000
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index c25fdd9..9e268ff 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -116,6 +116,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD)      += -lrte_pmd_cxgbe
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL)   += -lrte_mempool_dpaa
+endif
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_pmd_dpaa2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_E1000_PMD)      += -lrte_pmd_e1000
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ENA_PMD)        += -lrte_pmd_ena
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 24/40] net/dpaa: support Tx and Rx queue setup
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (22 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 23/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 25/40] net/dpaa: support MTU update Shreyansh Jain
                             ` (17 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/Makefile      |   4 +
 drivers/net/dpaa/dpaa_ethdev.c | 294 +++++++++++++++++++++++++++++++-
 drivers/net/dpaa/dpaa_rxtx.c   | 370 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h   |  61 +++++++
 4 files changed, 726 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h

diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index bb305ca..c77384c 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -38,10 +38,12 @@ LIB = librte_pmd_dpaa.a
 
 CFLAGS := -I$(SRCDIR) $(CFLAGS)
 CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -Wno-pointer-arith
 CFLAGS += -I$(RTE_SDK_DPAA)/
 CFLAGS += -I$(RTE_SDK_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
 
@@ -51,7 +53,9 @@ LIBABIVER := 1
 
 # Interfaces with DPDK
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
 
 LDLIBS += -lrte_bus_dpaa
+LDLIBS += -lrte_mempool_dpaa
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4543dfc..4996daa 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -62,8 +62,15 @@
 
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
+#include <dpaa_mempool.h>
 
 #include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <fsl_fman.h>
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -78,20 +85,104 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
 	PMD_INIT_FUNC_TRACE();
 
 	/* Change tx callback to the real one */
-	dev->tx_pkt_burst = NULL;
+	dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	fman_if_enable_rx(dpaa_intf->fif);
 
 	return 0;
 }
 
 static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = NULL;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_disable_rx(dpaa_intf->fif);
+	dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
 }
 
-static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+}
+
+static
+int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+			    unsigned int socket_id __rte_unused,
+			    const struct rte_eth_rxconf *rx_conf __rte_unused,
+			    struct rte_mempool *mp)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	DPAA_PMD_INFO("Rx queue setup for queue index: %d", queue_idx);
+
+	if (!dpaa_intf->bp_info || dpaa_intf->bp_info->mp != mp) {
+		struct fman_if_ic_params icp;
+		uint32_t fd_offset;
+		uint32_t bp_size;
+
+		if (!mp->pool_data) {
+			DPAA_PMD_ERR("Not an offloaded buffer pool!");
+			return -1;
+		}
+		dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+		memset(&icp, 0, sizeof(icp));
+		/* Set ICIOF, ICEOF and ICSZ to their default values */
+		icp.iciof = DEFAULT_ICIOF;
+		icp.iceof = DEFAULT_RX_ICEOF;
+		icp.icsz = DEFAULT_ICSZ;
+		fman_if_set_ic_params(dpaa_intf->fif, &icp);
+
+		fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
+		fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+
+		/* Buffer pool size should be equal to the dataroom size */
+		bp_size = rte_pktmbuf_data_room_size(mp);
+		fman_if_set_bp(dpaa_intf->fif, mp->size,
+			       dpaa_intf->bp_info->bpid, bp_size);
+		dpaa_intf->valid = 1;
+		DPAA_PMD_INFO("if: %s, fd_offset: %d, fdoff: %d",
+			    dpaa_intf->name, fd_offset,
+			fman_if_get_fdoff(dpaa_intf->fif));
+	}
+	dev->data->rx_queues[queue_idx] = &dpaa_intf->rx_queues[queue_idx];
+
+	return 0;
+}
+
+static
+void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static
+int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+		unsigned int socket_id __rte_unused,
+		const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	DPAA_PMD_INFO("Tx queue setup for queue index: %d", queue_idx);
+	dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+	return 0;
+}
+
+static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 }
@@ -101,15 +192,102 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+
+	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
+	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
+	.rx_queue_release	  = dpaa_eth_rx_queue_release,
+	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 };
 
+/* Initialise an Rx FQ */
+static int dpaa_rx_queue_init(struct qman_fq *fq,
+			      uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		DPAA_PMD_ERR("reserve rx fqid %d failed with ret: %d",
+			     fqid, ret);
+		return -EINVAL;
+	}
+
+	DPAA_PMD_DEBUG("creating rx fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create rx fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTA;
+
+	opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
+			   QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_a.stashing.exclusive = 0;
+	opts.fqd.context_a.stashing.annotation_cl = DPAA_IF_RX_ANNOTATION_STASH;
+	opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+	opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+
+	/* Enable tail drop */
+	opts.we_mask = opts.we_mask | QM_INITFQ_WE_TDTHRESH;
+	opts.fqd.fq_ctrl = opts.fqd.fq_ctrl | QM_FQCTRL_TDE;
+	qm_fqd_taildrop_set(&opts.fqd.td, CONG_THRESHOLD_RX_Q, 1);
+
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init rx fqid %d failed with ret: %d", fqid, ret);
+	return ret;
+}
+
+/* Initialise a Tx FQ */
+static int dpaa_tx_queue_init(struct qman_fq *fq,
+			      struct fman_if *fman_intf)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
+			     QMAN_FQ_FLAG_TO_DCPORTAL, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create tx fq failed with ret: %d", ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTB | QM_INITFQ_WE_CONTEXTA;
+	opts.fqd.dest.channel = fman_intf->tx_channel_id;
+	opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_b = 0;
+	/* no tx-confirmation */
+	opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+	opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+	DPAA_PMD_DEBUG("init tx fq %p, fqid %d", fq, fq->fqid);
+	ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init tx fqid %d failed %d", fq->fqid, ret);
+	return ret;
+}
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
 {
+	int num_cores, num_rx_fqs, fqid;
+	int loop, ret = 0;
 	int dev_id;
 	struct rte_dpaa_device *dpaa_device;
 	struct dpaa_if *dpaa_intf;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+	struct fman_if_bpool *bp, *tmp_bp;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -120,12 +298,108 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
 	dev_id = dpaa_device->id.dev_id;
 	dpaa_intf = eth_dev->data->dev_private;
+	cfg = &dpaa_netcfg->port_cfg[dev_id];
+	fman_intf = cfg->fman_if;
 
 	dpaa_intf->name = dpaa_device->name;
 
+	/* save fman_if & cfg in the interface structure */
+	dpaa_intf->fif = fman_intf;
 	dpaa_intf->ifid = dev_id;
+	dpaa_intf->cfg = cfg;
+
+	/* Initialize Rx FQs */
+	if (getenv("DPAA_NUM_RX_QUEUES"))
+		num_rx_fqs = atoi(getenv("DPAA_NUM_RX_QUEUES"));
+	else
+		num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
 
+	/* Each device cannot have more than DPAA_PCD_FQID_MULTIPLIER RX
+	 * queues.
+	 */
+	if (num_rx_fqs <= 0 || num_rx_fqs > DPAA_PCD_FQID_MULTIPLIER) {
+		DPAA_PMD_ERR("Invalid number of RX queues");
+		return -EINVAL;
+	}
+
+	dpaa_intf->rx_queues = rte_zmalloc(NULL,
+		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+	for (loop = 0; loop < num_rx_fqs; loop++) {
+		fqid = DPAA_PCD_FQID_START + dpaa_intf->ifid *
+			DPAA_PCD_FQID_MULTIPLIER + loop;
+		ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop], fqid);
+		if (ret)
+			return ret;
+		dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_rx_queues = num_rx_fqs;
+
+	/* Initialise Tx FQs; have as many Tx FQs as there are cores */
+	num_cores = rte_lcore_count();
+	dpaa_intf->tx_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+		num_cores, MAX_CACHELINE);
+	if (!dpaa_intf->tx_queues)
+		return -ENOMEM;
+
+	for (loop = 0; loop < num_cores; loop++) {
+		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
+					 fman_intf);
+		if (ret)
+			return ret;
+		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_tx_queues = num_cores;
+
+	DPAA_PMD_DEBUG("All frame queues created");
+
+	/* reset bpool list, initialize bpool dynamically */
+	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+
+	/* Populate ethdev structure */
 	eth_dev->dev_ops = &dpaa_devops;
+	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+	eth_dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
+		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
+						"store MAC addresses",
+				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
+		rte_free(dpaa_intf->rx_queues);
+		rte_free(dpaa_intf->tx_queues);
+		dpaa_intf->rx_queues = NULL;
+		dpaa_intf->tx_queues = NULL;
+		dpaa_intf->nb_rx_queues = 0;
+		dpaa_intf->nb_tx_queues = 0;
+		return -ENOMEM;
+	}
+
+	/* copy the primary mac address */
+	ether_addr_copy(&fman_intf->mac_addr, &eth_dev->data->mac_addrs[0]);
+
+	RTE_LOG(INFO, PMD, "net: dpaa: %s: %02x:%02x:%02x:%02x:%02x:%02x\n",
+		dpaa_device->name,
+		fman_intf->mac_addr.addr_bytes[0],
+		fman_intf->mac_addr.addr_bytes[1],
+		fman_intf->mac_addr.addr_bytes[2],
+		fman_intf->mac_addr.addr_bytes[3],
+		fman_intf->mac_addr.addr_bytes[4],
+		fman_intf->mac_addr.addr_bytes[5]);
+
+	/* Disable RX mode */
+	fman_if_discard_rx_errors(fman_intf);
+	fman_if_disable_rx(fman_intf);
+	/* Disable promiscuous mode */
+	fman_if_promiscuous_disable(fman_intf);
+	/* Disable multicast */
+	fman_if_reset_mcast_filter_table(fman_intf);
+	/* Reset interface statistics */
+	fman_if_stats_reset(fman_intf);
 
 	return 0;
 }
@@ -147,6 +421,20 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
 
 	dpaa_eth_dev_close(dev);
 
+	/* release configuration memory */
+	if (dpaa_intf->fc_conf)
+		rte_free(dpaa_intf->fc_conf);
+
+	rte_free(dpaa_intf->rx_queues);
+	dpaa_intf->rx_queues = NULL;
+
+	rte_free(dpaa_intf->tx_queues);
+	dpaa_intf->tx_queues = NULL;
+
+	/* free memory for storing MAC addresses */
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
 	dev->dev_ops = NULL;
 	dev->rx_pkt_burst = NULL;
 	dev->tx_pkt_burst = NULL;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
new file mode 100644
index 0000000..c4e67f5
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -0,0 +1,370 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <limits.h>
+#include <sched.h>
+#include <pthread.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+
+#include "dpaa_ethdev.h"
+#include "dpaa_rxtx.h"
+#include <rte_dpaa_bus.h>
+#include <dpaa_mempool.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
+#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
+	do { \
+		(_fd)->cmd = 0; \
+		(_fd)->opaque_addr = 0; \
+		(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
+		(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
+		(_fd)->opaque |= (_mbuf)->pkt_len; \
+		(_fd)->addr = (_mbuf)->buf_physaddr; \
+		(_fd)->bpid = _bpid; \
+	} while (0)
+
+static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
+							uint32_t ifid)
+{
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *mbuf;
+	void *ptr;
+	uint16_t offset =
+		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
+	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
+
+	DPAA_DP_LOG(DEBUG, " FD--->MBUF");
+
+	/* Ignoring case when format != qm_fd_contig */
+	ptr = rte_dpaa_mem_ptov(fd->addr);
+	/* Ignoring the case when ptr would be NULL; that is only possible
+	 * in case of a corrupted packet
+	 */
+
+	mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+	/* Prefetch the Parse results and packet data to L1 */
+	rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
+	rte_prefetch0((void *)((uint8_t *)ptr + offset));
+
+	mbuf->data_off = offset;
+	mbuf->data_len = length;
+	mbuf->pkt_len = length;
+
+	mbuf->port = ifid;
+	mbuf->nb_segs = 1;
+	mbuf->ol_flags = 0;
+	mbuf->next = NULL;
+	rte_mbuf_refcnt_set(mbuf, 1);
+
+	return mbuf;
+}
+
+uint16_t dpaa_eth_queue_rx(void *q,
+			   struct rte_mbuf **bufs,
+			   uint16_t nb_bufs)
+{
+	struct qman_fq *fq = q;
+	struct qm_dqrr_entry *dq;
+	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_PMD_ERR("Failure in affining portal");
+		return 0;
+	}
+
+	ret = qman_set_vdq(fq, (nb_bufs > DPAA_MAX_DEQUEUE_NUM_FRAMES) ?
+				DPAA_MAX_DEQUEUE_NUM_FRAMES : nb_bufs);
+	if (ret)
+		return 0;
+
+	do {
+		dq = qman_dequeue(fq);
+		if (!dq)
+			continue;
+		bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
+		qman_dqrr_consume(fq, dq);
+	} while (fq->flags & QMAN_FQ_STATE_VDQCR);
+
+	return num_rx;
+}
+
+static void *dpaa_get_pktbuf(struct dpaa_bp_info *bp_info)
+{
+	int ret;
+	uint64_t buf = 0;
+	struct bm_buffer bufs;
+
+	ret = bman_acquire(bp_info->bp, &bufs, 1, 0);
+	if (ret <= 0) {
+		DPAA_PMD_WARN("Failed to allocate buffers %d", ret);
+		return (void *)buf;
+	}
+
+	DPAA_DP_LOG(DEBUG, "got buffer 0x%lx from pool %d",
+		    (uint64_t)bufs.addr, bufs.bpid);
+
+	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
+	if (!buf)
+		goto out;
+
+out:
+	return (void *)buf;
+}
+
+static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
+					     struct dpaa_if *dpaa_intf)
+{
+	struct rte_mbuf *dpaa_mbuf;
+
+	/* allocate pktbuffer on bpid for dpaa port */
+	dpaa_mbuf = dpaa_get_pktbuf(dpaa_intf->bp_info);
+	if (!dpaa_mbuf)
+		return NULL;
+
+	memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + mbuf->data_off, (void *)
+		((uint8_t *)(mbuf->buf_addr) + mbuf->data_off), mbuf->pkt_len);
+
+	/* Copy only the required fields */
+	dpaa_mbuf->data_off = mbuf->data_off;
+	dpaa_mbuf->pkt_len = mbuf->pkt_len;
+	dpaa_mbuf->ol_flags = mbuf->ol_flags;
+	dpaa_mbuf->packet_type = mbuf->packet_type;
+	dpaa_mbuf->tx_offload = mbuf->tx_offload;
+	rte_pktmbuf_free(mbuf);
+	return dpaa_mbuf;
+}
+
+/* Handle mbufs which are not segmented (non SG) */
+static inline void
+tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
+			    struct dpaa_bp_info *bp_info,
+			    struct qm_fd *fd_arr)
+{
+	struct rte_mbuf *mi = NULL;
+
+	if (RTE_MBUF_DIRECT(mbuf)) {
+		if (rte_mbuf_refcnt_read(mbuf) > 1) {
+			/* In case of direct mbuf and mbuf being cloned,
+			 * BMAN should _not_ release buffer.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, 0xff);
+			/* Buffer should be released by EAL */
+			rte_mbuf_refcnt_update(mbuf, -1);
+		} else {
+			/* In case of direct mbuf and no cloning, mbuf can be
+			 * released by BMAN.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+		}
+	} else {
+		/* This is data-containing core mbuf: 'mi' */
+		mi = rte_mbuf_from_indirect(mbuf);
+		if (rte_mbuf_refcnt_read(mi) > 1) {
+			/* In case of indirect mbuf, and mbuf being cloned,
+			 * BMAN should _not_ release it and let EAL release
+			 * it through pktmbuf_free below.
+			 */
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, 0xff);
+		} else {
+			/* In case of indirect mbuf, and no cloning, core mbuf
+			 * should be released by BMAN.
+			 * Increase refcnt of the core mbuf so that when
+			 * pktmbuf_free is called and mbuf is released, EAL
+			 * doesn't try to release core mbuf which would have
+			 * been released by BMAN.
+			 */
+			rte_mbuf_refcnt_update(mi, 1);
+			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+		}
+		rte_pktmbuf_free(mbuf);
+	}
+}
+
+/* Handle all mbufs on dpaa BMAN managed pool */
+static inline uint16_t
+tx_on_dpaa_pool(struct rte_mbuf *mbuf,
+		struct dpaa_bp_info *bp_info,
+		struct qm_fd *fd_arr)
+{
+	DPAA_DP_LOG(DEBUG, "BMAN offloaded buffer, mbuf: %p", mbuf);
+
+	if (mbuf->nb_segs == 1) {
+		/* Case for non-segmented buffers */
+		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr);
+	} else {
+		DPAA_PMD_DEBUG("Multi-segment frames not supported");
+		return 1;
+	}
+
+	return 0;
+}
+
+/* Handle all mbufs on an external pool (non-dpaa) */
+static inline uint16_t
+tx_on_external_pool(struct qman_fq *txq, struct rte_mbuf *mbuf,
+		    struct qm_fd *fd_arr)
+{
+	struct dpaa_if *dpaa_intf = txq->dpaa_intf;
+	struct rte_mbuf *dmable_mbuf;
+
+	DPAA_DP_LOG(DEBUG, "Non-BMAN offloaded buffer. "
+		    "Allocating an offloaded buffer");
+	dmable_mbuf = dpaa_get_dmable_mbuf(mbuf, dpaa_intf);
+	if (!dmable_mbuf) {
+		DPAA_DP_LOG(DEBUG, "no dpaa buffers.");
+		return 1;
+	}
+
+	DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, dpaa_intf->bp_info->bpid);
+
+	return 0;
+}
+
+uint16_t
+dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	struct rte_mbuf *mbuf, *mi = NULL;
+	struct rte_mempool *mp;
+	struct dpaa_bp_info *bp_info;
+	struct qm_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send, loop, i = 0;
+	uint16_t state;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		DPAA_PMD_ERR("Failure in affining portal");
+		return 0;
+	}
+
+	DPAA_DP_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+
+	while (nb_bufs) {
+		frames_to_send = (nb_bufs >> 3) ? MAX_TX_RING_SLOTS : nb_bufs;
+		for (loop = 0; loop < frames_to_send; loop++, i++) {
+			mbuf = bufs[i];
+			if (RTE_MBUF_DIRECT(mbuf)) {
+				mp = mbuf->pool;
+			} else {
+				mi = rte_mbuf_from_indirect(mbuf);
+				mp = mi->pool;
+			}
+
+			bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+			if (likely(mp->ops_index == bp_info->dpaa_ops_index)) {
+				state = tx_on_dpaa_pool(mbuf, bp_info,
+							&fd_arr[loop]);
+				if (unlikely(state)) {
+					/* Set frames_to_send & nb_bufs so
+					 * that packets are transmitted till
+					 * previous frame.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			} else {
+				state = tx_on_external_pool(q, mbuf,
+							    &fd_arr[loop]);
+				if (unlikely(state)) {
+					/* Set frames_to_send & nb_bufs so
+					 * that packets are transmitted till
+					 * previous frame.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			}
+		}
+
+send_pkts:
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qman_enqueue_multi(q, &fd_arr[loop],
+					frames_to_send - loop);
+		}
+		nb_bufs -= frames_to_send;
+	}
+
+	DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", i, q);
+
+	return i;
+}
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+		uint16_t nb_bufs __rte_unused)
+{
+	DPAA_DP_LOG(DEBUG, "Drop all packets");
+
+	/* Drop all packets handed to Tx; no need to free them here
+	 * because the rte_eth framework frees them through the tx_buffer
+	 * callback in case this function returns a count less than nb_bufs
+	 */
+	return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
new file mode 100644
index 0000000..45bfae8
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -0,0 +1,61 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPDK_RXTX_H__
+#define __DPDK_RXTX_H__
+
+/* Internal offset from where IC is copied to the packet buffer */
+#define DEFAULT_ICIOF          32
+/* IC transfer size */
+#define DEFAULT_ICSZ	48
+
+/* IC offsets from buffer header address */
+#define DEFAULT_RX_ICEOF	16
+
+#define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
+	/**< Maximum number of frames to be dequeued in a single Rx call */
+/* FD structure masks and offset */
+#define DPAA_FD_FORMAT_MASK 0xE0000000
+#define DPAA_FD_OFFSET_MASK 0x1FF00000
+#define DPAA_FD_LENGTH_MASK 0xFFFFF
+#define DPAA_FD_FORMAT_SHIFT 29
+#define DPAA_FD_OFFSET_SHIFT 20
+
+uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+			      uint16_t nb_bufs __rte_unused);
+#endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
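
The queue-setup hooks above are reached through the standard ethdev calls.
Note that the set of Rx frame queues is fixed at probe time (via the
DPAA_NUM_RX_QUEUES environment variable, or the PCD default), and one Tx FQ
is created per lcore. A minimal, hypothetical application-side sketch — the
port id, descriptor counts and helper name are illustrative assumptions,
not taken from this series:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mempool.h>

    /* Hypothetical helper: one Rx and one Tx queue on a DPAA port.
     * The mempool must be an offloaded BMan pool (created with the
     * "dpaa" mempool ops), or rx_queue_setup() rejects it. nb_desc
     * is accepted per the ethdev contract but unused by this PMD,
     * since the frame queues already exist in hardware.
     */
    static int setup_one_queue_pair(uint16_t port, struct rte_mempool *mp)
    {
        int ret;

        ret = rte_eth_rx_queue_setup(port, 0, 128, rte_socket_id(),
                                     NULL, mp);
        if (ret < 0)
            return ret;
        return rte_eth_tx_queue_setup(port, 0, 512, rte_socket_id(), NULL);
    }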

* [PATCH v6 25/40] net/dpaa: support MTU update
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (23 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 24/40] net/dpaa: support Tx and Rx queue setup Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 26/40] net/dpaa: support jumbo frames Shreyansh Jain
                             ` (16 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 9e8befc..59ef23d 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,5 +4,6 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4996daa..4e07661 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -76,6 +76,26 @@
 static int is_global_init;
 
 static int
+dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (mtu < ETHER_MIN_MTU)
+		return -EINVAL;
+	if (mtu > ETHER_MAX_LEN)
+		return -1;
+
+	dev->data->dev_conf.rxmode.jumbo_frame = 0;
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
+
+	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
+
+	return 0;
+}
+
+static int
 dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
@@ -197,6 +217,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+	.mtu_set		  = dpaa_mtu_set,
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
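
The new mtu_set hook is exercised through rte_eth_dev_set_mtu(). A hedged
sketch (the port id and MTU value are assumptions, error handling omitted):

    #include <rte_ethdev.h>

    /* Program a standard 1500-byte MTU; dpaa_mtu_set() rejects values
     * below ETHER_MIN_MTU and, at this point in the series, above
     * ETHER_MAX_LEN. On success it updates the FMan maximum frame
     * length via fman_if_set_maxfrm().
     */
    static int set_default_mtu(uint16_t port)
    {
        return rte_eth_dev_set_mtu(port, 1500);
    }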

* [PATCH v6 26/40] net/dpaa: support jumbo frames
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (24 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 25/40] net/dpaa: support MTU update Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 27/40] net/dpaa: support link status update Shreyansh Jain
                             ` (15 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 13 +++++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 59ef23d..e62812c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4e07661..1f4f372 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -85,9 +85,10 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	if (mtu < ETHER_MIN_MTU)
 		return -EINVAL;
 	if (mtu > ETHER_MAX_LEN)
-		return -1;
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
 
-	dev->data->dev_conf.rxmode.jumbo_frame = 0;
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
 
 	fman_if_set_maxfrm(dpaa_intf->fif, mtu);
@@ -100,6 +101,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 
+	if (dev->data->dev_conf.rxmode.jumbo_frame == 1) {
+		if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
+		    DPAA_MAX_RX_PKT_LEN)
+			return dpaa_mtu_set(dev,
+				dev->data->dev_conf.rxmode.max_rx_pkt_len);
+		else
+			return -1;
+	}
 	return 0;
 }
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
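
With this patch, jumbo operation is requested through the rxmode
configuration rather than through mtu_set alone. An illustrative sketch
(the frame size and queue counts below are assumptions):

    #include <rte_ethdev.h>

    /* Request jumbo Rx; dpaa_eth_dev_configure() forwards
     * max_rx_pkt_len to dpaa_mtu_set() and fails if it exceeds
     * DPAA_MAX_RX_PKT_LEN.
     */
    static int configure_jumbo(uint16_t port)
    {
        struct rte_eth_conf conf = {
            .rxmode = {
                .jumbo_frame = 1,
                .max_rx_pkt_len = 9000, /* assumed frame size */
            },
        };

        return rte_eth_dev_configure(port, 1, 1, &conf);
    }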

* [PATCH v6 27/40] net/dpaa: support link status update
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (25 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 26/40] net/dpaa: support jumbo frames Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 28/40] net/dpaa: support device info and speed capability Shreyansh Jain
                             ` (14 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 42 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index e62812c..132f94b 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1f4f372..aae229b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -142,6 +142,28 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int dpaa_eth_link_update(struct rte_eth_dev *dev,
+				int wait_to_complete __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_link *link = &dev->data->dev_link;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dpaa_intf->fif->mac_type == fman_mac_1g)
+		link->link_speed = 1000;
+	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
+		link->link_speed = 10000;
+	else
+		DPAA_PMD_ERR("invalid link_speed: %s, %d",
+			     dpaa_intf->name, dpaa_intf->fif->mac_type);
+
+	link->link_status = dpaa_intf->valid;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = ETH_LINK_AUTONEG;
+	return 0;
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -216,6 +238,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 	PMD_INIT_FUNC_TRACE();
 }
 
+static int dpaa_link_down(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+	return 0;
+}
+
+static int dpaa_link_up(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_start(dev);
+	return 0;
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -226,7 +264,11 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
+
+	.link_update		  = dpaa_eth_link_update,
 	.mtu_set		  = dpaa_mtu_set,
+	.dev_set_link_down	  = dpaa_link_down,
+	.dev_set_link_up	  = dpaa_link_up,
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
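
A small sketch of reading the state this hook reports (the port id is an
assumption; dpaa_eth_link_update() derives the speed from the FMan MAC
type):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void show_link(uint16_t port)
    {
        struct rte_eth_link link;

        /* Non-blocking query; wait_to_complete is unused by this PMD */
        rte_eth_link_get_nowait(port, &link);
        printf("port %u: link %s, %u Mbps\n", port,
               link.link_status ? "up" : "down", link.link_speed);
    }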

* [PATCH v6 28/40] net/dpaa: support device info and speed capability
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (26 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 27/40] net/dpaa: support link status update Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 29/40] net/dpaa: support promiscuous toggle Shreyansh Jain
                             ` (13 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 132f94b..19beada 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index aae229b..69361eb 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -142,6 +142,25 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
+			      struct rte_eth_dev_info *dev_info)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->max_rx_queues = dpaa_intf->nb_rx_queues;
+	dev_info->max_tx_queues = dpaa_intf->nb_tx_queues;
+	dev_info->min_rx_bufsize = DPAA_MIN_RX_BUF_SIZE;
+	dev_info->max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
+	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
+	dev_info->max_hash_mac_addrs = 0;
+	dev_info->max_vfs = 0;
+	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
+				ETH_LINK_SPEED_10G);
+}
+
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 				int wait_to_complete __rte_unused)
 {
@@ -259,6 +278,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+	.dev_infos_get		  = dpaa_eth_dev_info,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
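
The limits advertised here (queue counts, DPAA_MAX_RX_PKT_LEN, MAC filter
slots, 1G/10G speed capability) are what applications see through
rte_eth_dev_info_get(). A minimal sketch, with the port id assumed:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void show_caps(uint16_t port)
    {
        struct rte_eth_dev_info info;

        rte_eth_dev_info_get(port, &info);
        printf("rxq: %u, txq: %u, max_pktlen: %u, speed_capa: 0x%x\n",
               info.max_rx_queues, info.max_tx_queues,
               info.max_rx_pktlen, info.speed_capa);
    }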

* [PATCH v6 29/40] net/dpaa: support promiscuous toggle
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (27 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 28/40] net/dpaa: support device info and speed capability Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 30/40] net/dpaa: support multicast toggle Shreyansh Jain
                             ` (12 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 19beada..b2dfd81 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -8,5 +8,6 @@ Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
+Promiscuous mode     = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 69361eb..9d2e003 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -183,6 +183,25 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+
+static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_enable(dpaa_intf->fif);
+}
+
+static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_promiscuous_disable(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -286,6 +305,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
+	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
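
These hooks map one-to-one onto the FMan helpers. Application-side, the
usual ethdev toggles apply (the port id is an assumption):

    #include <rte_ethdev.h>

    static void set_promisc(uint16_t port, int on)
    {
        /* Lands in fman_if_promiscuous_enable()/_disable() */
        if (on)
            rte_eth_promiscuous_enable(port);
        else
            rte_eth_promiscuous_disable(port);
    }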

* [PATCH v6 30/40] net/dpaa: support multicast toggle
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (28 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 29/40] net/dpaa: support promiscuous toggle Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 31/40] net/dpaa: support MAC address update Shreyansh Jain
                             ` (11 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index b2dfd81..f21a85f 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -9,5 +9,6 @@ Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
+Allmulticast mode    = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 9d2e003..f45ed5e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -202,6 +202,24 @@ static void dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
 	fman_if_promiscuous_disable(dpaa_intf->fif);
 }
 
+static void dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_set_mcast_filter_table(dpaa_intf->fif);
+}
+
+static void dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_reset_mcast_filter_table(dpaa_intf->fif);
+}
+
 static
 int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -307,6 +325,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.link_update		  = dpaa_eth_link_update,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
+	.allmulticast_enable	  = dpaa_eth_multicast_enable,
+	.allmulticast_disable	  = dpaa_eth_multicast_disable,
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
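
On this PMD, "allmulticast" enable/disable sets or resets the FMan
multicast filter table rather than flipping a dedicated allmulti bit. A
sketch of the application-side calls (the port id is an assumption):

    #include <rte_ethdev.h>

    static void set_allmulti(uint16_t port, int on)
    {
        if (on)
            rte_eth_allmulticast_enable(port);
        else
            rte_eth_allmulticast_disable(port);
    }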

* [PATCH v6 31/40] net/dpaa: support MAC address update
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (29 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 30/40] net/dpaa: support multicast toggle Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 32/40] net/dpaa: support basic stats Shreyansh Jain
                             ` (10 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 48 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index f21a85f..cdf5e46 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -10,5 +10,6 @@ Jumbo frame          = Y
 MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
+Unicast MAC filter   = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f45ed5e..893e7f5 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -310,6 +310,50 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *addr,
+			     uint32_t index,
+			     __rte_unused uint32_t pool)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, index);
+
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Adding the MAC ADDR failed:"
+			" err = %d", ret);
+	return 0;
+}
+
+static void
+dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
+			  uint32_t index)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_clear_mac_addr(dpaa_intf->fif, index);
+}
+
+static void
+dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
+		       struct ether_addr *addr)
+{
+	int ret;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, 0);
+	if (ret)
+		RTE_LOG(ERR, PMD, "error: Setting the MAC ADDR failed %d", ret);
+}
+
 static struct eth_dev_ops dpaa_devops = {
 	.dev_configure		  = dpaa_eth_dev_configure,
 	.dev_start		  = dpaa_eth_dev_start,
@@ -330,6 +374,10 @@ static struct eth_dev_ops dpaa_devops = {
 	.mtu_set		  = dpaa_mtu_set,
 	.dev_set_link_down	  = dpaa_link_down,
 	.dev_set_link_up	  = dpaa_link_up,
+	.mac_addr_add		  = dpaa_dev_add_mac_addr,
+	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
+	.mac_addr_set		  = dpaa_dev_set_mac_addr,
+
 };
 
 /* Initialise an Rx FQ */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
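
A hedged sketch of exercising the new MAC hooks; the address below is a
made-up locally administered one, and error handling is trimmed:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    static int add_secondary_mac(uint16_t port)
    {
        struct ether_addr addr = {
            .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
        };

        /* Adds a filter entry via fman_if_add_mac_addr(); slot 0 stays
         * the primary MAC, which rte_eth_dev_default_mac_addr_set()
         * would reprogram through the same FMan call.
         */
        return rte_eth_dev_mac_addr_add(port, &addr, 0);
    }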

* [PATCH v6 32/40] net/dpaa: support basic stats
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (30 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 31/40] net/dpaa: support MAC address update Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 33/40] net/dpaa: support flow control Shreyansh Jain
                             ` (9 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index cdf5e46..c09efd8 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,5 +11,6 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 893e7f5..bcd6013 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -183,6 +183,24 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void dpaa_eth_stats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_stats *stats)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_get(dpaa_intf->fif, stats);
+}
+
+static void dpaa_eth_stats_reset(struct rte_eth_dev *dev)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_stats_reset(dpaa_intf->fif);
+}
 
 static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
@@ -367,6 +385,8 @@ static struct eth_dev_ops dpaa_devops = {
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
 	.link_update		  = dpaa_eth_link_update,
+	.stats_get		  = dpaa_eth_stats_get,
+	.stats_reset		  = dpaa_eth_stats_reset,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
 	.allmulticast_enable	  = dpaa_eth_multicast_enable,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
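
The counters come straight from the FMan hardware via fman_if_stats_get().
A minimal reader, with the port id assumed:

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void dump_stats(uint16_t port)
    {
        struct rte_eth_stats stats;

        rte_eth_stats_get(port, &stats);
        printf("rx: %" PRIu64 ", tx: %" PRIu64 ", rx_err: %" PRIu64 "\n",
               stats.ipackets, stats.opackets, stats.ierrors);
        rte_eth_stats_reset(port);  /* clears the hardware counters */
    }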

* [PATCH v6 33/40] net/dpaa: support flow control
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (31 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 32/40] net/dpaa: support basic stats Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 34/40] net/dpaa: support hashed RSS Shreyansh Jain
                             ` (8 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 112 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index c09efd8..1ba6b11 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+Flow control         = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bcd6013..054d4bb 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -329,6 +329,85 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 }
 
 static int
+dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			DPAA_PMD_ERR("unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	net_fc = dpaa_intf->fc_conf;
+
+	if (fc_conf->high_water < fc_conf->low_water) {
+		DPAA_PMD_ERR("Incorrect Flow Control Configuration");
+		return -EINVAL;
+	}
+
+	if (fc_conf->mode == RTE_FC_NONE) {
+		return 0;
+	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_FC_FULL) {
+		fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water,
+					 fc_conf->low_water,
+				dpaa_intf->bp_info->bpid);
+		if (fc_conf->pause_time)
+			fman_if_set_fc_quanta(dpaa_intf->fif,
+					      fc_conf->pause_time);
+	}
+
+	/* Save the information in dpaa device */
+	net_fc->pause_time = fc_conf->pause_time;
+	net_fc->high_water = fc_conf->high_water;
+	net_fc->low_water = fc_conf->low_water;
+	net_fc->send_xon = fc_conf->send_xon;
+	net_fc->mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
+	net_fc->mode = fc_conf->mode;
+	net_fc->autoneg = fc_conf->autoneg;
+
+	return 0;
+}
+
+static int
+dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
+		   struct rte_eth_fc_conf *fc_conf)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct rte_eth_fc_conf *net_fc = dpaa_intf->fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (net_fc) {
+		fc_conf->pause_time = net_fc->pause_time;
+		fc_conf->high_water = net_fc->high_water;
+		fc_conf->low_water = net_fc->low_water;
+		fc_conf->send_xon = net_fc->send_xon;
+		fc_conf->mac_ctrl_frame_fwd = net_fc->mac_ctrl_frame_fwd;
+		fc_conf->mode = net_fc->mode;
+		fc_conf->autoneg = net_fc->autoneg;
+		return 0;
+	}
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
+static int
 dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *addr,
 			     uint32_t index,
@@ -384,6 +463,9 @@ static struct eth_dev_ops dpaa_devops = {
 	.rx_queue_release	  = dpaa_eth_rx_queue_release,
 	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 
+	.flow_ctrl_get		  = dpaa_flow_ctrl_get,
+	.flow_ctrl_set		  = dpaa_flow_ctrl_set,
+
 	.link_update		  = dpaa_eth_link_update,
 	.stats_get		  = dpaa_eth_stats_get,
 	.stats_reset		  = dpaa_eth_stats_reset,
@@ -400,6 +482,33 @@ static struct eth_dev_ops dpaa_devops = {
 
 };
 
+static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
+{
+	struct rte_eth_fc_conf *fc_conf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!(dpaa_intf->fc_conf)) {
+		dpaa_intf->fc_conf = rte_zmalloc(NULL,
+			sizeof(struct rte_eth_fc_conf), MAX_CACHELINE);
+		if (!dpaa_intf->fc_conf) {
+			DPAA_PMD_ERR("unable to save flow control info");
+			return -ENOMEM;
+		}
+	}
+	fc_conf = dpaa_intf->fc_conf;
+	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	if (ret) {
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+	} else {
+		fc_conf->mode = RTE_FC_NONE;
+	}
+
+	return 0;
+}
+
 /* Initialise an Rx FQ */
 static int dpaa_rx_queue_init(struct qman_fq *fq,
 			      uint32_t fqid)
@@ -553,6 +662,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 
 	DPAA_PMD_DEBUG("All frame queues created");
 
+	/* Get the initial configuration for flow control */
+	dpaa_fc_set_default(dpaa_intf);
+
 	/* reset bpool list, initialize bpool dynamically */
 	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
 		list_del(&bp->node);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
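
A sketch of driving the new hooks; the watermark and pause-time values are
illustrative assumptions (the thresholds are applied to the interface's
BMan buffer pool through fman_if_set_fc_threshold()):

    #include <rte_ethdev.h>

    static int enable_tx_pause(uint16_t port)
    {
        struct rte_eth_fc_conf fc = {
            .mode = RTE_FC_TX_PAUSE,
            .high_water = 1024,  /* must be >= low_water */
            .low_water = 512,
            .pause_time = 100,
        };

        return rte_eth_dev_flow_ctrl_set(port, &fc);
    }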

* [PATCH v6 34/40] net/dpaa: support hashed RSS
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (32 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 33/40] net/dpaa: support flow control Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 35/40] net/dpaa: support packet type parsing Shreyansh Jain
                             ` (7 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c |  1 +
 drivers/net/dpaa/dpaa_ethdev.h | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 054d4bb..a11edeb 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -157,6 +157,7 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
 				ETH_LINK_SPEED_10G);
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 2f25acb..e1e062e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -88,6 +88,16 @@
 #define DPAA_DEBUG_FQ_RX_ERROR   0
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
+#define DPAA_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP)
+
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
 		PKT_TX_TCP_CKSUM |               \
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread
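
This patch only advertises the supported hash types; the actual spreading
across the per-port PCD frame queues is expected to come from the FMan PCD
configuration outside the PMD. An application would still request RSS in
the usual way (queue counts and the chosen rss_hf subset below are
assumptions):

    #include <rte_ethdev.h>

    static int configure_rss(uint16_t port, uint16_t nb_rxq)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
            /* any subset of DPAA_RSS_OFFLOAD_ALL */
            .rx_adv_conf.rss_conf = {
                .rss_hf = ETH_RSS_NONFRAG_IPV4_TCP |
                          ETH_RSS_NONFRAG_IPV4_UDP,
            },
        };

        return rte_eth_dev_configure(port, nb_rxq, 1, &conf);
    }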

* [PATCH v6 35/40] net/dpaa: support packet type parsing
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (33 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 34/40] net/dpaa: support hashed RSS Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 36/40] net/dpaa: support checksum offload Shreyansh Jain
                             ` (6 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Add support for packet type parsing and for advertising L2/L3 checksum
offload capability information.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  27 +++++
 drivers/net/dpaa/dpaa_rxtx.c      | 116 +++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 206 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 351 insertions(+)

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 1ba6b11..2ef1b56 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -11,7 +11,9 @@ MTU update           = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
+RSS hash             = Y
 Flow control         = Y
+Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index a11edeb..8ee00ed 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -112,6 +112,28 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	return 0;
 }
 
+static const uint32_t *
+dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		/* TODO: add more types */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_SCTP
+	};
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->rx_pkt_burst == dpaa_eth_queue_rx)
+		return ptypes;
+	return NULL;
+}
+
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -160,6 +182,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
 				ETH_LINK_SPEED_10G);
+	dev_info->rx_offload_capa =
+		(DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM   |
+		DEV_RX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
@@ -458,6 +484,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
 	.dev_infos_get		  = dpaa_eth_dev_info,
+	.dev_supported_ptypes_get = dpaa_supported_ptypes_get,
 
 	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index c4e67f5..f8ac711 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,121 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
+				     uint64_t prs __rte_unused)
+{
+	DPAA_DP_LOG(DEBUG, "Slow parsing");
+	/*TBD:XXX: to be implemented*/
+}
+
+static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
+					uint64_t fd_virt_addr)
+{
+	struct annotations_t *annot = GET_ANNOTATIONS(fd_virt_addr);
+	uint64_t prs = *((uint64_t *)(&annot->parse)) & DPAA_PARSE_MASK;
+
+	DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
+
+	switch (prs) {
+	case DPAA_PKT_TYPE_NONE:
+		m->packet_type = 0;
+		break;
+	case DPAA_PKT_TYPE_ETHER:
+		m->packet_type = RTE_PTYPE_L2_ETHER;
+		break;
+	case DPAA_PKT_TYPE_IPV4:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4;
+		break;
+	case DPAA_PKT_TYPE_IPV6:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6;
+		break;
+	case DPAA_PKT_TYPE_IPV4_FRAG:
+	case DPAA_PKT_TYPE_IPV4_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV4_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV6_FRAG:
+	case DPAA_PKT_TYPE_IPV6_FRAG_UDP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_TCP:
+	case DPAA_PKT_TYPE_IPV6_FRAG_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT;
+		break;
+	case DPAA_PKT_TYPE_IPV4_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_UDP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_EXT_TCP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP;
+		break;
+	case DPAA_PKT_TYPE_IPV4_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP;
+		break;
+	case DPAA_PKT_TYPE_IPV6_SCTP:
+		m->packet_type = RTE_PTYPE_L2_ETHER |
+			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP;
+		break;
+	/* More switch cases can be added */
+	default:
+		dpaa_slow_parsing(m, prs);
+	}
+
+	m->tx_offload = annot->parse.ip_off[0];
+	m->tx_offload |= (annot->parse.l4_off - annot->parse.ip_off[0])
+					<< DPAA_PKT_L3_LEN_SHIFT;
+
+	/* Set the hash values */
+	m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
+	m->ol_flags = PKT_RX_RSS_HASH;
+	/* All packets with Bad checksum are dropped by interface (and
+	 * corresponding notification issued to RX error queues).
+	 */
+	m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	/* Check if Vlan is present */
+	if (prs & DPAA_PARSE_VLAN_MASK)
+		m->ol_flags |= PKT_RX_VLAN_PKT;
+	/* Packet received without stripping the vlan */
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -117,6 +232,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 	mbuf->ol_flags = 0;
 	mbuf->next = NULL;
 	rte_mbuf_refcnt_set(mbuf, 1);
+	dpaa_eth_packet_info(mbuf, (uint64_t)mbuf->buf_addr);
 
 	return mbuf;
 }
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 45bfae8..68d2c41 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -44,6 +44,7 @@
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/**< Maximum number of frames to be dequeued in a single Rx call */
+
 /* FD structure masks and offset */
 #define DPAA_FD_FORMAT_MASK 0xE0000000
 #define DPAA_FD_OFFSET_MASK 0x1FF00000
@@ -51,6 +52,211 @@
 #define DPAA_FD_FORMAT_SHIFT 29
 #define DPAA_FD_OFFSET_SHIFT 20
 
+/* Parsing mask (Little Endian) - 0x00E044ED00800000
+ *	Classification Plan ID 0x00
+ *	L4R 0xE0 -
+ *		0x20 - TCP
+ *		0x40 - UDP
+ *		0x80 - SCTP
+ *	L3R 0xEDC4 (in Big Endian) -
+ *		0x8000 - IPv4
+ *		0x4000 - IPv6
+ *		0x8140 - IPv4 Ext + Frag
+ *		0x8040 - IPv4 Frag
+ *		0x8100 - IPv4 Ext
+ *		0x4140 - IPv6 Ext + Frag
+ *		0x4040 - IPv6 Frag
+ *		0x4100 - IPv6 Ext
+ *	L2R 0x8000 (in Big Endian) -
+ *		0x8000 - Ethernet type
+ *	ShimR & Logical Port ID 0x0000
+ */
+#define DPAA_PARSE_MASK			0x00E044ED00800000
+#define DPAA_PARSE_VLAN_MASK		0x0000000000700000
+
+/* Parsed values (Little Endian) */
+#define DPAA_PKT_TYPE_NONE		0x0000000000000000
+#define DPAA_PKT_TYPE_ETHER		0x0000000000800000
+#define DPAA_PKT_TYPE_IPV4 \
+			(0x0000008000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV6 \
+			(0x0000004000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_GRE \
+			(0x0000002000000000 | DPAA_PKT_TYPE_ETHER)
+#define DPAA_PKT_TYPE_IPV4_FRAG	\
+			(0x0000400000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_FRAG	\
+			(0x0000400000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_EXT \
+			(0x0000000100000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_EXT \
+			(0x0000000100000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_SCTP	\
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_IPV6_SCTP	\
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_IPV4_FRAG_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_FRAG_SCTP \
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV4_FRAG)
+#define DPAA_PKT_TYPE_IPV6_FRAG_SCTP \
+			(0x0080000000000000 | DPAA_PKT_TYPE_IPV6_FRAG)
+#define DPAA_PKT_TYPE_IPV4_EXT_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_IPV4_EXT_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV4_EXT)
+#define DPAA_PKT_TYPE_IPV6_EXT_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_IPV6_EXT)
+#define DPAA_PKT_TYPE_TUNNEL_4_4 \
+			(0x0000000800000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6 \
+			(0x0000000400000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6 \
+			(0x0000000400000000 | DPAA_PKT_TYPE_IPV4)
+#define DPAA_PKT_TYPE_TUNNEL_6_4 \
+			(0x0000000800000000 | DPAA_PKT_TYPE_IPV6)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_UDP \
+			(0x0040000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_TYPE_TUNNEL_4_4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_4)
+#define DPAA_PKT_TYPE_TUNNEL_6_6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_6)
+#define DPAA_PKT_TYPE_TUNNEL_4_6_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_4_6)
+#define DPAA_PKT_TYPE_TUNNEL_6_4_TCP \
+			(0x0020000000000000 | DPAA_PKT_TYPE_TUNNEL_6_4)
+#define DPAA_PKT_L3_LEN_SHIFT	7
+
+/**
+ * FMan parse result array
+ */
+struct dpaa_eth_parse_results_t {
+	 uint8_t     lpid;		 /**< Logical port id */
+	 uint8_t     shimr;		 /**< Shim header result  */
+	 union {
+		uint16_t              l2r;	/**< Layer 2 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      ethernet:1;
+			uint16_t      vlan:1;
+			uint16_t      llc_snap:1;
+			uint16_t      mpls:1;
+			uint16_t      ppoe_ppp:1;
+			uint16_t      unused_1:3;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      eth_frame_type:2;
+			uint16_t      l2r_err:5;
+			/*00-unicast, 01-multicast, 11-broadcast*/
+#else
+			uint16_t      l2r_err:5;
+			uint16_t      eth_frame_type:2;
+			uint16_t      unknown_eth_proto:1;
+			uint16_t      unused_1:3;
+			uint16_t      ppoe_ppp:1;
+			uint16_t      mpls:1;
+			uint16_t      llc_snap:1;
+			uint16_t      vlan:1;
+			uint16_t      ethernet:1;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint16_t              l3r;	/**< Layer 3 result */
+		struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint16_t      first_ipv4:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      gre:1;
+			uint16_t      min_enc:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      last_ipv6:1;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      last_ip_err_code:3;
+#else
+			uint16_t      last_ip_err_code:3;
+			uint16_t      last_info_err:1;	/*0 info, 1 error*/
+			uint16_t      first_ip_err_code:5;
+			uint16_t      first_info_err:1;/*0 info, 1 error*/
+			uint16_t      last_ipv6:1;
+			uint16_t      last_ipv4:1;
+			uint16_t      min_enc:1;
+			uint16_t      gre:1;
+			uint16_t      first_ipv6:1;
+			uint16_t      first_ipv4:1;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 union {
+		uint8_t               l4r;	/**< Layer 4 result */
+		struct{
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+			uint8_t	       l4_type:3;
+			uint8_t	       l4_info_err:1;
+			uint8_t	       l4_result:4;
+					/* if type IPSec: 1 ESP, 2 AH */
+#else
+			uint8_t        l4_result:4;
+					/* if type IPSec: 1 ESP, 2 AH */
+			uint8_t        l4_info_err:1;
+			uint8_t        l4_type:3;
+#endif
+		} __attribute__((__packed__));
+	 } __attribute__((__packed__));
+	 uint8_t     cplan;		 /**< Classification plan id */
+	 uint16_t    nxthdr;		 /**< Next Header  */
+	 uint16_t    cksum;		 /**< Checksum */
+	 uint32_t    lcv;		 /**< LCV */
+	 uint8_t     shim_off[3];	 /**< Shim offset */
+	 uint8_t     eth_off;		 /**< ETH offset */
+	 uint8_t     llc_snap_off;	 /**< LLC_SNAP offset */
+	 uint8_t     vlan_off[2];	 /**< VLAN offset */
+	 uint8_t     etype_off;		 /**< ETYPE offset */
+	 uint8_t     pppoe_off;		 /**< PPP offset */
+	 uint8_t     mpls_off[2];	 /**< MPLS offset */
+	 uint8_t     ip_off[2];		 /**< IP offset */
+	 uint8_t     gre_off;		 /**< GRE offset */
+	 uint8_t     l4_off;		 /**< Layer 4 offset */
+	 uint8_t     nxthdr_off;	 /**< Parser end point */
+} __attribute__ ((__packed__));
+
+/* The structure is the Prepended Data to the Frame which is used by FMAN */
+struct annotations_t {
+	uint8_t reserved[DEFAULT_RX_ICEOF];
+	struct dpaa_eth_parse_results_t parse;	/**< Pointer to Parsed result*/
+	uint64_t reserved1;
+	uint64_t hash;			/**< Hash Result */
+};
+
+#define GET_ANNOTATIONS(_buf) \
+	(struct annotations_t *)(_buf)
+
+#define GET_RX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
+
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 36/40] net/dpaa: support checksum offload
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (34 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 35/40] net/dpaa: support packet type parsing Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 37/40] net/dpaa: support Scattered Rx Shreyansh Jain
                             ` (5 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  2 +
 drivers/net/dpaa/dpaa_ethdev.c    |  4 ++
 drivers/net/dpaa/dpaa_rxtx.c      | 89 +++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      | 23 +++++++++-
 4 files changed, 117 insertions(+), 1 deletion(-)

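The Tx offloads advertised here are requested per packet through the
standard mbuf flags; a minimal sketch for a contiguous IPv4/UDP frame
(header lengths are illustrative):

    mbuf->l2_len = sizeof(struct ether_hdr);
    mbuf->l3_len = sizeof(struct ipv4_hdr);
    /* These flags fall under DPAA_TX_CKSUM_OFFLOAD_MASK in the Tx path */
    mbuf->ol_flags |= PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM;
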
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 2ef1b56..23626c0 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -13,6 +13,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 RSS hash             = Y
 Flow control         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
 ARMv8                = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 8ee00ed..12dcc68 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -186,6 +186,10 @@ static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 		(DEV_RX_OFFLOAD_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_UDP_CKSUM   |
 		DEV_RX_OFFLOAD_TCP_CKSUM);
+	dev_info->tx_offload_capa =
+		(DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM);
 }
 
 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index f8ac711..976268b 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -200,6 +200,82 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
 	/* Packet received without stripping the vlan */
 }
 
+static inline void dpaa_checksum(struct rte_mbuf *mbuf)
+{
+	struct ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf, struct ether_hdr *);
+	char *l3_hdr = (char *)eth_hdr + mbuf->l2_len;
+	struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+	struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	DPAA_DP_LOG(DEBUG, "Calculating checksum for mbuf: %p", mbuf);
+
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	    ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	    RTE_PTYPE_L3_IPV4_EXT)) {
+		ipv4_hdr = (struct ipv4_hdr *)l3_hdr;
+		ipv4_hdr->hdr_checksum = 0;
+		ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
+	} else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6_EXT))
+		ipv6_hdr = (struct ipv6_hdr *)l3_hdr;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP) {
+		struct tcp_hdr *tcp_hdr = (struct tcp_hdr *)(l3_hdr +
+					  mbuf->l3_len);
+		tcp_hdr->cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			tcp_hdr->cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+							       tcp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			tcp_hdr->cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+							       tcp_hdr);
+	} else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) ==
+		   RTE_PTYPE_L4_UDP) {
+		struct udp_hdr *udp_hdr = (struct udp_hdr *)(l3_hdr +
+							     mbuf->l3_len);
+		udp_hdr->dgram_cksum = 0;
+		if (eth_hdr->ether_type == htons(ETHER_TYPE_IPv4))
+			udp_hdr->dgram_cksum = rte_ipv4_udptcp_cksum(ipv4_hdr,
+								     udp_hdr);
+		else /* assume ethertype == ETHER_TYPE_IPv6 */
+			udp_hdr->dgram_cksum = rte_ipv6_udptcp_cksum(ipv6_hdr,
+								     udp_hdr);
+	}
+}
+
+static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
+					 struct qm_fd *fd, char *prs_buf)
+{
+	struct dpaa_eth_parse_results_t *prs;
+
+	DPAA_DP_LOG(DEBUG, " Offloading checksum for mbuf: %p", mbuf);
+
+	prs = GET_TX_PRS(prs_buf);
+	prs->l3r = 0;
+	prs->l4r = 0;
+	if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) ||
+	   ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+	   RTE_PTYPE_L3_IPV4_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV4;
+	else if (((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		   RTE_PTYPE_L3_IPV6) ||
+		 ((mbuf->packet_type & RTE_PTYPE_L3_MASK) ==
+		RTE_PTYPE_L3_IPV6_EXT))
+		prs->l3r = DPAA_L3_PARSE_RESULT_IPV6;
+
+	if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_TCP;
+	else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
+		prs->l4r = DPAA_L4_PARSE_RESULT_UDP;
+
+	prs->ip_off[0] = mbuf->l2_len;
+	prs->l4_off = mbuf->l3_len + mbuf->l2_len;
+	/* Enable L3 (and L4, if TCP or UDP) HW checksum*/
+	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
@@ -358,6 +434,19 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 		}
 		rte_pktmbuf_free(mbuf);
 	}
+
+	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+		if (mbuf->data_off < (DEFAULT_TX_ICEOF +
+		    sizeof(struct dpaa_eth_parse_results_t))) {
+			DPAA_DP_LOG(DEBUG, "Checksum offload Err: "
+				"Not enough headroom "
+				"space for checksum offload. "
+				"Calculating checksum in software.");
+			dpaa_checksum(mbuf);
+		} else {
+			dpaa_checksum_offload(mbuf, fd_arr, mbuf->buf_addr);
+		}
+	}
 }
 
 /* Handle all mbufs on dpaa BMAN managed pool */
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 68d2c41..d10298e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -41,6 +41,22 @@
 
 /* IC offsets from buffer header address */
 #define DEFAULT_RX_ICEOF	16
+#define DEFAULT_TX_ICEOF	16
+
+/*
+ * Values for the L3R field of the FM Parse Results
+ */
+/* L3 Type field: First IP Present IPv4 */
+#define DPAA_L3_PARSE_RESULT_IPV4 0x80
+/* L3 Type field: First IP Present IPv6 */
+#define DPAA_L3_PARSE_RESULT_IPV6	0x40
+/* Values for the L4R field of the FM Parse Results
+ * See §8.8.4.7.20 - L4 HXS - L4 Results in the DPAA-Rev2 Reference Manual.
+ */
+/* L4 Type field: UDP */
+#define DPAA_L4_PARSE_RESULT_UDP	0x40
+/* L4 Type field: TCP */
+#define DPAA_L4_PARSE_RESULT_TCP	0x20
 
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/**< Maximum number of frames to be dequeued in a single Rx call */
@@ -255,7 +271,12 @@ struct annotations_t {
 	(struct annotations_t *)(_buf)
 
 #define GET_RX_PRS(_buf) \
-	(struct dpaa_eth_parse_results_t *)((uint8_t *)_buf + DEFAULT_RX_ICEOF)
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)(_buf) + \
+	DEFAULT_RX_ICEOF)
+
+#define GET_TX_PRS(_buf) \
+	(struct dpaa_eth_parse_results_t *)((uint8_t *)(_buf) + \
+	DEFAULT_TX_ICEOF)
 
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 37/40] net/dpaa: support Scattered Rx
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (35 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 36/40] net/dpaa: support checksum offload Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 38/40] net/dpaa: add packet dump for debugging Shreyansh Jain
                             ` (4 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_rxtx.c      | 159 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      |   9 +++
 3 files changed, 169 insertions(+)

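Scattered Rx is requested through the standard Rx mode configuration; a
minimal sketch, assuming the 17.11-era bitfield name and an
illustrative port_id:

    struct rte_eth_conf port_conf = { 0 };

    /* Allow received frames to span multiple mbufs from the pool */
    port_conf.rxmode.enable_scatter = 1;
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
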
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 23626c0..0e7956c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -8,6 +8,7 @@ Speed capabilities   = P
 Link status          = Y
 Jumbo frame          = Y
 MTU update           = Y
+Scattered Rx         = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 976268b..9c25d8c 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -276,18 +276,82 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
 	fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
 }
 
+struct rte_mbuf *
+dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
+{
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+	struct qm_sg_entry *sgt, *sg_temp;
+	void *vaddr, *sg_vaddr;
+	int i = 0;
+	uint8_t fd_offset = fd->offset;
+
+	DPAA_DP_LOG(DEBUG, "Received an SG frame");
+
+	vaddr = rte_dpaa_mem_ptov(qm_fd_addr(fd));
+	if (!vaddr) {
+		DPAA_PMD_ERR("unable to convert physical address");
+		return NULL;
+	}
+	sgt = vaddr + fd_offset;
+	sg_temp = &sgt[i++];
+	hw_sg_to_cpu(sg_temp);
+	temp = (struct rte_mbuf *)((char *)vaddr - bp_info->meta_data_size);
+	sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+
+	first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						bp_info->meta_data_size);
+	first_seg->data_off = sg_temp->offset;
+	first_seg->data_len = sg_temp->length;
+	first_seg->pkt_len = sg_temp->length;
+	rte_mbuf_refcnt_set(first_seg, 1);
+
+	first_seg->port = ifid;
+	first_seg->nb_segs = 1;
+	first_seg->ol_flags = 0;
+	prev_seg = first_seg;
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		hw_sg_to_cpu(sg_temp);
+		sg_vaddr = rte_dpaa_mem_ptov(qm_sg_entry_get64(sg_temp));
+		cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						      bp_info->meta_data_size);
+		cur_seg->data_off = sg_temp->offset;
+		cur_seg->data_len = sg_temp->length;
+		first_seg->pkt_len += sg_temp->length;
+		first_seg->nb_segs += 1;
+		rte_mbuf_refcnt_set(cur_seg, 1);
+		prev_seg->next = cur_seg;
+		if (sg_temp->final) {
+			cur_seg->next = NULL;
+			break;
+		}
+		prev_seg = cur_seg;
+	}
+
+	dpaa_eth_packet_info(first_seg, (uint64_t)vaddr);
+	rte_pktmbuf_free_seg(temp);
+
+	return first_seg;
+}
+
 static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 							uint32_t ifid)
 {
 	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
 	struct rte_mbuf *mbuf;
 	void *ptr;
+	uint8_t format =
+		(fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
 	uint16_t offset =
 		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
 	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
 
 	DPAA_DP_LOG(DEBUG, " FD--->MBUF");
 
+	if (unlikely(format == qm_fd_sg))
+		return dpaa_eth_sg_to_mbuf(fd, ifid);
+
 	/* Ignoring case when format != qm_fd_contig */
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible in case
@@ -390,6 +454,95 @@ static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
 	return dpaa_mbuf;
 }
 
+int
+dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+		struct qm_fd *fd,
+		uint32_t bpid)
+{
+	struct rte_mbuf *cur_seg = mbuf, *prev_seg = NULL;
+	struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(bpid);
+	struct rte_mbuf *temp, *mi;
+	struct qm_sg_entry *sg_temp, *sgt;
+	int i = 0;
+
+	DPAA_DP_LOG(DEBUG, "Creating SG FD to transmit");
+
+	temp = rte_pktmbuf_alloc(bp_info->mp);
+	if (!temp) {
+		DPAA_PMD_ERR("Failure in allocation of mbuf");
+		return -1;
+	}
+	if (temp->buf_len < ((mbuf->nb_segs * sizeof(struct qm_sg_entry))
+				+ temp->data_off)) {
+		DPAA_PMD_ERR("Insufficient space in mbuf for SG entries");
+		rte_pktmbuf_free(temp);
+		return -1;
+	}
+
+	fd->cmd = 0;
+	fd->opaque_addr = 0;
+
+	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+		if (temp->data_off < DEFAULT_TX_ICEOF
+			+ sizeof(struct dpaa_eth_parse_results_t))
+			temp->data_off = DEFAULT_TX_ICEOF
+				+ sizeof(struct dpaa_eth_parse_results_t);
+		dcbz_64(temp->buf_addr);
+		dpaa_checksum_offload(mbuf, fd, temp->buf_addr);
+	}
+
+	sgt = temp->buf_addr + temp->data_off;
+	fd->format = QM_FD_SG;
+	fd->addr = temp->buf_physaddr;
+	fd->offset = temp->data_off;
+	fd->bpid = bpid;
+	fd->length20 = mbuf->pkt_len;
+
+	while (i < DPAA_SGT_MAX_ENTRIES) {
+		sg_temp = &sgt[i++];
+		sg_temp->opaque = 0;
+		sg_temp->val = 0;
+		sg_temp->addr = cur_seg->buf_physaddr;
+		sg_temp->offset = cur_seg->data_off;
+		sg_temp->length = cur_seg->data_len;
+		if (RTE_MBUF_DIRECT(cur_seg)) {
+			if (rte_mbuf_refcnt_read(cur_seg) > 1) {
+				/*If refcnt > 1, invalid bpid is set to ensure
+				 * buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+				rte_mbuf_refcnt_update(cur_seg, -1);
+			} else {
+				sg_temp->bpid =
+					DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
+			}
+			cur_seg = cur_seg->next;
+		} else {
+			/* Get owner MBUF from indirect buffer */
+			mi = rte_mbuf_from_indirect(cur_seg);
+			if (rte_mbuf_refcnt_read(mi) > 1) {
+				/*If refcnt > 1, invalid bpid is set to ensure
+				 * owner buffer is not freed by HW.
+				 */
+				sg_temp->bpid = 0xff;
+			} else {
+				sg_temp->bpid = DPAA_MEMPOOL_TO_BPID(mi->pool);
+				rte_mbuf_refcnt_update(mi, 1);
+			}
+			prev_seg = cur_seg;
+			cur_seg = cur_seg->next;
+			prev_seg->next = NULL;
+			rte_pktmbuf_free(prev_seg);
+		}
+		if (cur_seg == NULL) {
+			sg_temp->final = 1;
+			cpu_to_hw_sg(sg_temp);
+			break;
+		}
+		cpu_to_hw_sg(sg_temp);
+	}
+	return 0;
+}
+
 /* Handle mbufs which are not segmented (non SG) */
 static inline void
 tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
@@ -460,6 +613,12 @@ tx_on_dpaa_pool(struct rte_mbuf *mbuf,
 	if (mbuf->nb_segs == 1) {
 		/* Case for non-segmented buffers */
 		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr);
+	} else if (mbuf->nb_segs > 1 &&
+		   mbuf->nb_segs <= DPAA_SGT_MAX_ENTRIES) {
+		if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr, bp_info->bpid)) {
+			DPAA_PMD_DEBUG("Unable to create Scatter Gather FD");
+			return 1;
+		}
 	} else {
 		DPAA_PMD_DEBUG("Number of Segments not supported");
 		return 1;
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index d10298e..2ffc4ff 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -58,6 +58,8 @@
 /* L4 Type field: TCP */
 #define DPAA_L4_PARSE_RESULT_TCP	0x20
 
+#define DPAA_SGT_MAX_ENTRIES 16 /* maximum number of entries in SG Table */
+
 #define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
 	/**< Maximum number of frames to be dequeued in a single Rx call */
 
@@ -285,4 +287,11 @@ uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
 			      struct rte_mbuf **bufs __rte_unused,
 			      uint16_t nb_bufs __rte_unused);
+
+struct rte_mbuf *dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid);
+
+int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
+			   struct qm_fd *fd,
+			   uint32_t bpid);
+
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 38/40] net/dpaa: add packet dump for debugging
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (36 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 37/40] net/dpaa: support Scattered Rx Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:29           ` [PATCH v6 39/40] net/dpaa: support firmware version get API Shreyansh Jain
                             ` (3 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.c   | 26 ++++++++++++++++++++++++++
 2 files changed, 68 insertions(+)

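The dump is compiled in only when driver debugging is enabled at build
time; a sketch of the switch, assuming the config option mirrors the
RTE_LIBRTE_DPAA_DEBUG_DRIVER macro used below:

    # in the target config, e.g. defconfig_arm64-dpaa-linuxapp-gcc
    CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=y
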
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 12dcc68..3d6ddae 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -618,6 +618,39 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
 	return ret;
 }
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
+static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		DPAA_PMD_ERR("Reserve debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return -EINVAL;
+	}
+	/* "map" this Rx FQ to one of the interface's Tx FQIDs */
+	DPAA_PMD_DEBUG("Creating debug fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		DPAA_PMD_ERR("create debug fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+	opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		DPAA_PMD_ERR("init debug fqid %d failed with ret: %d",
+			    fqid, ret);
+	return ret;
+}
+#endif
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -692,6 +725,15 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	}
 	dpaa_intf->nb_tx_queues = num_cores;
 
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
+	dpaa_debug_queue_init(&dpaa_intf->debug_queues[
+		DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
+	dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+#endif
+
 	DPAA_PMD_DEBUG("All frame queues created");
 
 	/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 9c25d8c..d73f9cb 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -85,6 +85,31 @@
 		(_fd)->bpid = _bpid; \
 	} while (0)
 
+#if (defined RTE_LIBRTE_DPAA_DEBUG_DRIVER)
+void dpaa_display_frame(const struct qm_fd *fd)
+{
+	int ii;
+	char *ptr;
+
+	printf("%s::bpid %x addr %08x%08x, format %d off %d, len %d stat %x\n",
+	       __func__, fd->bpid, fd->addr_hi, fd->addr_lo, fd->format,
+		fd->offset, fd->length20, fd->status);
+
+	ptr = (char *)rte_dpaa_mem_ptov(fd->addr);
+	ptr += fd->offset;
+	printf("%02x ", *ptr);
+	for (ii = 1; ii < fd->length20; ii++) {
+		ptr++;
+		printf("%02x ", *ptr);
+		if ((ii % 16) == 0)
+			printf("\n");
+	}
+	printf("\n");
+}
+#else
+#define dpaa_display_frame(a)
+#endif
+
 static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
 				     uint64_t prs __rte_unused)
 {
@@ -353,6 +378,7 @@ static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
 		return dpaa_eth_sg_to_mbuf(fd, ifid);
 
 	/* Ignoring case when format != qm_fd_contig */
+	dpaa_display_frame(fd);
 	ptr = rte_dpaa_mem_ptov(fd->addr);
 	/* Ignoring case when ptr would be NULL. That is only possible in case
 	 * of a corrupted packet
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 39/40] net/dpaa: support firmware version get API
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (37 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 38/40] net/dpaa: add packet dump for debugging Shreyansh Jain
@ 2017-09-28 12:29           ` Shreyansh Jain
  2017-09-28 12:30           ` [PATCH v6 40/40] net/dpaa: support extended statistics Shreyansh Jain
                             ` (2 subsequent siblings)
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |  1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 36 ++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h    |  5 +++++
 3 files changed, 42 insertions(+)

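The new callback is reached through the generic ethdev call; a minimal
sketch (port_id and the buffer size are illustrative):

    char fw_version[128];

    if (rte_eth_dev_fw_version_get(port_id, fw_version,
                                   sizeof(fw_version)) == 0)
        printf("firmware: %s\n", fw_version);
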
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 0e7956c..09b9bd9 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -18,5 +18,6 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+FW version           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3d6ddae..8e51fe6 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -164,6 +164,41 @@ static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 	dpaa_eth_dev_stop(dev);
 }
 
+static int
+dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
+		     char *fw_version,
+		     size_t fw_size)
+{
+	int ret;
+	FILE *svr_file = NULL;
+	unsigned int svr_ver = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	svr_file = fopen(DPAA_SOC_ID_FILE, "r");
+	if (!svr_file) {
+		DPAA_PMD_ERR("Unable to open SoC device");
+		return -ENOTSUP; /* Not supported on this infra */
+	}
+
+	ret = fscanf(svr_file, "svr:%x", &svr_ver);
+	fclose(svr_file);
+	if (ret <= 0) {
+		DPAA_PMD_ERR("Unable to read SoC device");
+		return -ENOTSUP; /* Not supported on this infra */
+	}
+
+	ret = snprintf(fw_version, fw_size,
+		       "svr:%x-fman-v%x",
+		       svr_ver,
+		       fman_ip_rev);
+
+	ret += 1; /* add the size of '\0' */
+	if (fw_size < (uint32_t)ret)
+		return ret;
+	else
+		return 0;
+}
+
 static void dpaa_eth_dev_info(struct rte_eth_dev *dev,
 			      struct rte_eth_dev_info *dev_info)
 {
@@ -512,6 +547,7 @@ static struct eth_dev_ops dpaa_devops = {
 	.mac_addr_remove	  = dpaa_dev_remove_mac_addr,
 	.mac_addr_set		  = dpaa_dev_set_mac_addr,
 
+	.fw_version_get		  = dpaa_fw_version_get,
 };
 
 static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index e1e062e..a980262 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -43,6 +43,11 @@
 #include <of.h>
 #include <netcfg.h>
 
+/* DPAA SoC identifier; if this file is not available, the board can be
+ * assumed to be non-DPAA. A single slot is currently supported.
+ */
+#define DPAA_SOC_ID_FILE		"/sys/devices/soc0/soc_id"
+
 #define DPAA_MBUF_HW_ANNOTATION		64
 #define DPAA_FD_PTA_SIZE		64
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH v6 40/40] net/dpaa: support extended statistics
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (38 preceding siblings ...)
  2017-09-28 12:29           ` [PATCH v6 39/40] net/dpaa: support firmware version get API Shreyansh Jain
@ 2017-09-28 12:30           ` Shreyansh Jain
  2017-09-28 14:10           ` [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation Shreyansh Jain
  2017-10-02 23:05           ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Ferruh Yigit
  41 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 12:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, hemant.agrawal

From: Hemant Agrawal <hemant.agrawal@nxp.com>

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/dpaa_ethdev.c    | 143 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_ethdev.h    |  40 +++++++++++
 3 files changed, 184 insertions(+)

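The counters are consumed through the standard xstats API; a minimal
sketch (error handling elided, port_id illustrative):

    int i, cnt = rte_eth_xstats_get_names(port_id, NULL, 0);
    struct rte_eth_xstat_name names[cnt];
    struct rte_eth_xstat vals[cnt];

    rte_eth_xstats_get_names(port_id, names, cnt);
    rte_eth_xstats_get(port_id, vals, cnt);
    for (i = 0; i < cnt; i++)
        printf("%s: %"PRIu64"\n", names[i].name, vals[i].value);
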
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 09b9bd9..24cfd85 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -18,6 +18,7 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+Extended stats       = Y
 FW version           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 8e51fe6..8dad97e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -75,6 +75,40 @@
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
 
+struct rte_dpaa_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	uint32_t offset;
+};
+
+static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
+	{"rx_align_err",
+		offsetof(struct dpaa_if_stats, raln)},
+	{"rx_valid_pause",
+		offsetof(struct dpaa_if_stats, rxpf)},
+	{"rx_fcs_err",
+		offsetof(struct dpaa_if_stats, rfcs)},
+	{"rx_vlan_frame",
+		offsetof(struct dpaa_if_stats, rvlan)},
+	{"rx_frame_err",
+		offsetof(struct dpaa_if_stats, rerr)},
+	{"rx_drop_err",
+		offsetof(struct dpaa_if_stats, rdrp)},
+	{"rx_undersized",
+		offsetof(struct dpaa_if_stats, rund)},
+	{"rx_oversize_err",
+		offsetof(struct dpaa_if_stats, rovr)},
+	{"rx_fragment_pkt",
+		offsetof(struct dpaa_if_stats, rfrg)},
+	{"tx_valid_pause",
+		offsetof(struct dpaa_if_stats, txpf)},
+	{"tx_fcs_err",
+		offsetof(struct dpaa_if_stats, terr)},
+	{"tx_vlan_frame",
+		offsetof(struct dpaa_if_stats, tvlan)},
+	{"tx_undersized",
+		offsetof(struct dpaa_if_stats, tund)},
+};
+
 static int
 dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
@@ -268,6 +302,110 @@ static void dpaa_eth_stats_reset(struct rte_eth_dev *dev)
 	fman_if_stats_reset(dpaa_intf->fif);
 }
 
+static int
+dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+		    unsigned int n)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
+	uint64_t values[sizeof(struct dpaa_if_stats) / 8];
+
+	if (xstats == NULL)
+		return 0;
+
+	if (n < num)
+		return num;
+
+	fman_if_stats_get_all(dpaa_intf->fif, values,
+			      sizeof(struct dpaa_if_stats) / 8);
+
+	for (i = 0; i < num; i++) {
+		xstats[i].id = i;
+		xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
+	}
+	return i;
+}
+
+static int
+dpaa_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+		      struct rte_eth_xstat_name *xstats_names,
+		      __rte_unused unsigned int limit)
+{
+	unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+
+	if (xstats_names != NULL)
+		for (i = 0; i < stat_cnt; i++)
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s",
+				 dpaa_xstats_strings[i].name);
+
+	return stat_cnt;
+}
+
+static int
+dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
+		      uint64_t *values, unsigned int n)
+{
+	unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+	uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
+
+	if (!ids) {
+		struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+		if (n < stat_cnt)
+			return stat_cnt;
+
+		if (!values)
+			return 0;
+
+		fman_if_stats_get_all(dpaa_intf->fif, values_copy,
+				      sizeof(struct dpaa_if_stats) / 8);
+
+		for (i = 0; i < stat_cnt; i++)
+			values[i] =
+				values_copy[dpaa_xstats_strings[i].offset / 8];
+
+		return stat_cnt;
+	}
+
+	dpaa_xstats_get_by_id(dev, NULL, values_copy, stat_cnt);
+
+	for (i = 0; i < n; i++) {
+		if (ids[i] >= stat_cnt) {
+			DPAA_PMD_ERR("id value isn't valid");
+			return -1;
+		}
+		values[i] = values_copy[ids[i]];
+	}
+	return n;
+}
+
+static int
+dpaa_xstats_get_names_by_id(
+	struct rte_eth_dev *dev,
+	struct rte_eth_xstat_name *xstats_names,
+	const uint64_t *ids,
+	unsigned int limit)
+{
+	unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+	struct rte_eth_xstat_name xstats_names_copy[stat_cnt];
+
+	if (!ids)
+		return dpaa_xstats_get_names(dev, xstats_names, limit);
+
+	dpaa_xstats_get_names(dev, xstats_names_copy, limit);
+
+	for (i = 0; i < limit; i++) {
+		if (ids[i] >= stat_cnt) {
+			DPAA_PMD_ERR("id value isn't valid");
+			return -1;
+		}
+		strcpy(xstats_names[i].name, xstats_names_copy[ids[i]].name);
+	}
+	return limit;
+}
+
 static void dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -535,6 +673,11 @@ static struct eth_dev_ops dpaa_devops = {
 
 	.link_update		  = dpaa_eth_link_update,
 	.stats_get		  = dpaa_eth_stats_get,
+	.xstats_get		  = dpaa_dev_xstats_get,
+	.xstats_get_by_id	  = dpaa_xstats_get_by_id,
+	.xstats_get_names_by_id	  = dpaa_xstats_get_names_by_id,
+	.xstats_get_names	  = dpaa_xstats_get_names,
+	.xstats_reset		  = dpaa_eth_stats_reset,
 	.stats_reset		  = dpaa_eth_stats_reset,
 	.promiscuous_enable	  = dpaa_eth_promiscuous_enable,
 	.promiscuous_disable	  = dpaa_eth_promiscuous_disable,
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index a980262..5457d61 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -139,4 +139,44 @@ struct dpaa_if {
 	struct rte_eth_fc_conf *fc_conf;
 };
 
+struct dpaa_if_stats {
+	/* Rx Statistics Counter */
+	uint64_t reoct;		/**<Rx Eth Octets Counter */
+	uint64_t roct;		/**<Rx Octet Counters */
+	uint64_t raln;		/**<Rx Alignment Error Counter */
+	uint64_t rxpf;		/**<Rx valid Pause Frame */
+	uint64_t rfrm;		/**<Rx Frame counter */
+	uint64_t rfcs;		/**<Rx frame check seq error */
+	uint64_t rvlan;		/**<Rx Vlan Frame Counter */
+	uint64_t rerr;		/**<Rx Frame error */
+	uint64_t ruca;		/**<Rx Unicast */
+	uint64_t rmca;		/**<Rx Multicast */
+	uint64_t rbca;		/**<Rx Broadcast */
+	uint64_t rdrp;		/**<Rx Dropped Packet */
+	uint64_t rpkt;		/**<Rx packet */
+	uint64_t rund;		/**<Rx undersized packets */
+	uint32_t res_x[14];
+	uint64_t rovr;		/**<Rx oversized but good */
+	uint64_t rjbr;		/**<Rx oversized with bad csum */
+	uint64_t rfrg;		/**<Rx fragment Packet */
+	uint64_t rcnp;		/**<Rx control packets (0x8808) */
+	uint64_t rdrntp;	/**<Rx dropped due to FIFO overflow */
+	uint32_t res01d0[12];
+	/* Tx Statistics Counter */
+	uint64_t teoct;		/**<Tx eth octets */
+	uint64_t toct;		/**<Tx Octets */
+	uint32_t res0210[2];
+	uint64_t txpf;		/**<Tx valid pause frame */
+	uint64_t tfrm;		/**<Tx frame counter */
+	uint64_t tfcs;		/**<Tx FCS error */
+	uint64_t tvlan;		/**<Tx Vlan Frame */
+	uint64_t terr;		/**<Tx frame error */
+	uint64_t tuca;		/**<Tx Unicast */
+	uint64_t tmca;		/**<Tx Multicast */
+	uint64_t tbca;		/**<Tx Broadcast */
+	uint32_t res0258[2];
+	uint64_t tpkt;		/**<Tx Packet */
+	uint64_t tund;		/**<Tx Undersized */
+};
+
 #endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (39 preceding siblings ...)
  2017-09-28 12:30           ` [PATCH v6 40/40] net/dpaa: support extended statistics Shreyansh Jain
@ 2017-09-28 14:10           ` Shreyansh Jain
  2017-09-28 14:10             ` [PATCH 2/2] config: fix DPAA PMD linking Shreyansh Jain
                               ` (2 more replies)
  2017-10-02 23:05           ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Ferruh Yigit
  41 siblings, 3 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 14:10 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Shreyansh Jain

Fixes: 5ad2d123be48 ("bus/dpaa: introducing FMan configurations")

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 2c6029e..d0a8ee4 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -242,11 +242,11 @@ fman_if_init(const struct device_node *dpa_node)
 	if (!phys_addr) {
 		FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
 			 mname, regs_addr);
-			 __if->ccsr_map = mmap(NULL, __if->regs_size,
-			 PROT_READ | PROT_WRITE, MAP_SHARED,
-			 fman_ccsr_map_fd, phys_addr);
 		goto err;
 	}
+	__if->ccsr_map = mmap(NULL, __if->regs_size,
+			      PROT_READ | PROT_WRITE, MAP_SHARED,
+			      fman_ccsr_map_fd, phys_addr);
 	if (__if->ccsr_map == MAP_FAILED) {
 		FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
 		goto err;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* [PATCH 2/2] config: fix DPAA PMD linking
  2017-09-28 14:10           ` [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation Shreyansh Jain
@ 2017-09-28 14:10             ` Shreyansh Jain
  2017-09-28 14:12             ` [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation Shreyansh Jain
  2017-10-02 23:05             ` Ferruh Yigit
  2 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 14:10 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Shreyansh Jain

Fixes: 41c52ee26c29 ("config: enable NXP DPAA PMD compilation")

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 mk/rte.app.mk | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9e268ff..715c9e2 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -117,8 +117,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD)      += -lrte_pmd_cxgbe
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
-_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_BUS)       += -lrte_bus_dpaa
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL)   += -lrte_mempool_dpaa
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_pmd_dpaa
 endif
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD)      += -lrte_pmd_dpaa2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_E1000_PMD)      += -lrte_pmd_e1000
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 367+ messages in thread

* Re: [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation
  2017-09-28 14:10           ` [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation Shreyansh Jain
  2017-09-28 14:10             ` [PATCH 2/2] config: fix DPAA PMD linking Shreyansh Jain
@ 2017-09-28 14:12             ` Shreyansh Jain
  2017-10-02 23:05             ` Ferruh Yigit
  2 siblings, 0 replies; 367+ messages in thread
From: Shreyansh Jain @ 2017-09-28 14:12 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Hello Ferruh,

On Thursday 28 September 2017 07:40 PM, Shreyansh Jain wrote:
> Fixes: 5ad2d123be48 ("bus/dpaa: introducing FMan configurations")
> 
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> ---
>   drivers/bus/dpaa/base/fman/fman.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
> index 2c6029e..d0a8ee4 100644
> --- a/drivers/bus/dpaa/base/fman/fman.c
> +++ b/drivers/bus/dpaa/base/fman/fman.c
> @@ -242,11 +242,11 @@ fman_if_init(const struct device_node *dpa_node)
>   	if (!phys_addr) {
>   		FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
>   			 mname, regs_addr);
> -			 __if->ccsr_map = mmap(NULL, __if->regs_size,
> -			 PROT_READ | PROT_WRITE, MAP_SHARED,
> -			 fman_ccsr_map_fd, phys_addr);
>   		goto err;
>   	}
> +	__if->ccsr_map = mmap(NULL, __if->regs_size,
> +			      PROT_READ | PROT_WRITE, MAP_SHARED,
> +			      fman_ccsr_map_fd, phys_addr);
>   	if (__if->ccsr_map == MAP_FAILED) {
>   		FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
>   		goto err;
> 

While working on v6, I missed two small changes. I have created
patches with the fixes. Could you please apply them?
(I didn't want to send a complete v7 just for these.)

-
Shreyansh

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH 01/38] eal: add support for 24 40 and 48 bit operations
  2017-06-16  5:40 ` [PATCH 01/38] eal: add support for 24 40 and 48 bit operations Shreyansh Jain
  2017-06-16  8:57   ` Bruce Richardson
@ 2017-10-02 10:16   ` Avi Kivity
  1 sibling, 0 replies; 367+ messages in thread
From: Avi Kivity @ 2017-10-02 10:16 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: ferruh.yigit, hemant.agrawal



On 06/16/2017 08:40 AM, Shreyansh Jain wrote:
> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>
> Bit Swap and LE<=>BE conversions for 23, 40 and 48 bit width
>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>   .../common/include/generic/rte_byteorder.h         | 78 ++++++++++++++++++++++
>   1 file changed, 78 insertions(+)
>
> diff --git a/lib/librte_eal/common/include/generic/rte_byteorder.h b/lib/librte_eal/common/include/generic/rte_byteorder.h
> index e00bccb..8903ff6 100644
> --- a/lib/librte_eal/common/include/generic/rte_byteorder.h
> +++ b/lib/librte_eal/common/include/generic/rte_byteorder.h
> @@ -122,6 +122,84 @@ rte_constant_bswap64(uint64_t x)
>   		((x & 0xff00000000000000ULL) >> 56);
>   }
>   
> +/*
> + * An internal function to swap bytes of a 48-bit value.
> + */
> +static inline uint64_t
> +rte_constant_bswap48(uint64_t x)
> +{
> +	return  ((x & 0x0000000000ffULL) << 40) |
> +		((x & 0x00000000ff00ULL) << 24) |
> +		((x & 0x000000ff0000ULL) <<  8) |
> +		((x & 0x0000ff000000ULL) >>  8) |
> +		((x & 0x00ff00000000ULL) >> 24) |
> +		((x & 0xff0000000000ULL) >> 40);
> +}
> +

Won't something like bswap64(x << 16) be much more efficient? Two 
instructions for the non-constant case, compared to 15-20 here.
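
For reference, a minimal sketch of the suggested form (illustrative,
not part of the patch); shifting the 48-bit value to the top of a
64-bit word lets the existing 64-bit swap do the work:

    #include <rte_byteorder.h>

    static inline uint64_t
    bswap48_alt(uint64_t x)
    {
        /* The two zero bytes shifted in at the bottom end up at the
         * top after the swap, leaving the byte-reversed 48-bit value
         * in the low 48 bits of the result.
         */
        return rte_bswap64(x << 16);
    }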

^ permalink raw reply	[flat|nested] 367+ messages in thread

* Re: [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD
  2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
                             ` (40 preceding siblings ...)
  2017-09-28 14:10           ` [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation Shreyansh Jain
@ 2017-10-02 23:05           ` Ferruh Yigit
  41 siblings, 0 replies; 367+ messages in thread
From: Ferruh Yigit @ 2017-10-02 23:05 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: hemant.agrawal

On 9/28/2017 1:29 PM, Shreyansh Jain wrote:
> Change Log:
> ============
> 
> v6:
>  - rebased over net-next/master (9d660ac) 
>  - fixed mk/rte.app.mk (Thomas's comment). It had incorrect
>    style of adding library linking
>  - changed from manual memcpy of etheraddr to ether_addr_copy
>    as suggested by Ferruh
>  (these were minor changes missed in v5)
> 
>  v5:
>  - rebased over net-next/master (9d660ac)	
>  - restructuring debugging macros. Removed a few and combined
>    others. DPAA now reflects the dynamic logging with segregated
>    DP logging
>  - updated documentation for missing configuration option
>  - fixed map file; shared build was broken earlier
>  - other minor fixes from review comments
> 
> v4:
>  - Some checkpatch fixes which were reported by checkpatch@dpdk
>  - adding extra stats feature patch (patch 41)
> 
> v3:
>  - Rebasing over 17.11-rc0 (85238f50)
>  - Checkpatch fixes
>    (There are still 2 errors which I think are false positives)
>  - Implement rte_bus.find_device() interface
>  - Various other minor updates/cleanups
> 
> v2:
>  - Fixing various comments from Ferruh, but broadly:
>   -) Logging is been changed to reflect rte_log_register
>   -) Logs across Bus, Mempool and PMD updated
>   -) fixed incorrect feature claimed in dpaa.ini
>  - Removed 24/40/48 bit swapping macro from EAL.
>    These are defined in dpaa/bus now (compat.h)
>  - Added missing memory cleanup operation
>  - Updated documentation with some missing information
> 
> Introduction
> ============
> 
> RFC was posted here -> [R3]
> V5 was posted here  -> [R8]
> 
> This patch series adds NXP's QorIQ-Layerscape DPAA Architecture based
> bus driver, mempool driver and PMD. This version of driver supports NXP
> LS1043A/LS1023A, LS1046A/LS1026A family of network SoCs. [R1]
> 
> DPAA, or Datapath Acceleration Architecture [R2], is a set of hardware
> components designed for high-speed network packet processing. This
> architecture provides the infrastructure to support simplified sharing of
> networking interfaces and accelerators by multiple CPU cores, and the
> accelerators themselves.
> 
> This patchset introduces the following:
> 1. DPAA Bus (drivers/bus/dpaa)
>  The core of DPAA bus is implemented using 3 main hardware blocks: QMan,
>  or Queue Manager; BMan, or Buffer Manager and FMan, or Frame Manager.
>  The patches introduce necessary layers to expose the DPAA hardware
>  blocks for interfacing with RTE framework.
> 
> 2. DPAA Mempool (drivers/mempool/dpaa)
>  BMan, or Buffer Manager, block of DPAA features a hardware offloaded
>  mempool. These patches add support for a driver to manage the BMan
>  block. This driver allows for mempool creation, deletion, buffer
>  acquire and release, as per the RTE APIs.
> 
> 3. DPAA PMD (drivers/net/dpaa)
>  The Poll Mode Driver for DPAA NIC Interfaces.
> 
> Patch Layout
> ============
> 
> 01: Add DPAA SoC build configuration
> 02~16: Add DPAA Bus support and features, incrementally
> 17: Add Documentation
> 18~21: Add DPAA Mempool support
> 22~40: Add PMD and its various features, incrementally
> 
> References
> ==========
> 
> [R1] http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-layerscape-arm-processors:QORIQ-ARM
> [R2] http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf
> [R3] RFC: http://dpdk.org/ml/archives/dev/2017-May/066675.html
> [R4] v1: http://dpdk.org/ml/archives/dev/2017-June/068020.html
> [R5] v2: http://dpdk.org/ml/archives/dev/2017-July/070113.html
> [R6] v3: http://dpdk.org/ml/archives/dev/2017-August/073269.html
> [R7] v4: http://dpdk.org/ml/archives/dev/2017-September/074936.html
> [R8] v5: http://dpdk.org/dev/patchwork/patch/29245/
> 
> Hemant Agrawal (3):
>   bus/dpaa: add compatibility and helper macros
>   net/dpaa: support firmware version get API
>   net/dpaa: support extended statistics
> 
> Shreyansh Jain (37):
>   config: add NXP DPAA SoC build configuration
>   bus/dpaa: introduce NXP DPAA Bus driver skeleton
>   bus/dpaa: add OF parser for device scanning
>   bus/dpaa: introducing FMan configurations
>   bus/dpaa: add FMan hardware operations
>   bus/dpaa: enable DPAA IOCTL portal driver
>   bus/dpaa: add layer for interrupt emulation using pthread
>   bus/dpaa: add routines for managing a RB tree
>   bus/dpaa: add QMAN interface driver
>   bus/dpaa: add QMan driver core routines
>   bus/dpaa: add BMAN driver core
>   bus/dpaa: support FMAN frame queue lookup
>   bus/dpaa: add BMan hardware interfaces
>   bus/dpaa: add fman flow control threshold setting
>   bus/dpaa: integrate DPAA Bus with hardware blocks
>   doc: add NXP DPAA PMD documentation
>   bus/dpaa: add DPAA mempool logging macros
>   mempool/dpaa: support NXP DPAA Mempool
>   config: enable compilation of DPAA Mempool driver
>   bus/dpaa: add DPAA PMD logging macros
>   net/dpaa: add NXP DPAA PMD driver skeleton
>   config: enable NXP DPAA PMD compilation
>   net/dpaa: support Tx and Rx queue setup
>   net/dpaa: support MTU update
>   net/dpaa: support jumbo frames
>   net/dpaa: support link status update
>   net/dpaa: support device info and speed capability
>   net/dpaa: support promiscuous toggle
>   net/dpaa: support multicast toggle
>   net/dpaa: support MAC address update
>   net/dpaa: support basic stats
>   net/dpaa: support flow control
>   net/dpaa: support hashed RSS
>   net/dpaa: support packet type parsing
>   net/dpaa: support checksum offload
>   net/dpaa: support Scattered Rx
>   net/dpaa: add packet dump for debugging
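
The PMD feature patches listed above each fill in additional
callbacks of the driver's eth_dev_ops table. As a rough sketch of
that shape, using 17.x-era callback signatures and hypothetical
example_* names rather than the driver's actual code:

    #include <rte_ethdev.h>

    static int
    example_dev_configure(struct rte_eth_dev *dev)
    {
            return 0; /* apply rx/tx mode from dev->data->dev_conf */
    }

    static int
    example_link_update(struct rte_eth_dev *dev, int wait_to_complete)
    {
            return 0; /* refresh dev->data->dev_link from hardware */
    }

    static int
    example_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
    {
            return 0; /* program the new maximum frame length */
    }

    /* Each feature patch (stats, RSS, flow control, checksum
     * offload, ...) contributes one more entry to this table. */
    static const struct eth_dev_ops example_ops = {
            .dev_configure = example_dev_configure,
            .link_update   = example_link_update,
            .mtu_set       = example_mtu_set,
    };

The Rx/Tx burst paths are installed directly on the device at probe
time, e.g. dev->dev_ops = &example_ops along with assignments to
dev->rx_pkt_burst and dev->tx_pkt_burst.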

Series applied to dpdk-next-net/master, thanks.


* Re: [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation
  2017-09-28 14:10           ` [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation Shreyansh Jain
  2017-09-28 14:10             ` [PATCH 2/2] config: fix DPAA PMD linking Shreyansh Jain
  2017-09-28 14:12             ` [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation Shreyansh Jain
@ 2017-10-02 23:05             ` Ferruh Yigit
  2 siblings, 0 replies; 367+ messages in thread
From: Ferruh Yigit @ 2017-10-02 23:05 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: dev

On 9/28/2017 3:10 PM, Shreyansh Jain wrote:
> Fixes: 5ad2d123be48 ("bus/dpaa: introducing FMan configurations")
> 
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

Series squashed into relevant commit in next-net, thanks.


Thread overview: 367+ messages
2017-06-16  5:40 [PATCH 00/38] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
2017-06-16  5:40 ` [PATCH 01/38] eal: add support for 24 40 and 48 bit operations Shreyansh Jain
2017-06-16  8:57   ` Bruce Richardson
2017-06-16  9:21     ` Shreyansh Jain
2017-06-16  9:42       ` Thomas Monjalon
2017-06-16 10:34       ` Adrien Mazarguil
2017-06-19 11:00         ` Shreyansh Jain
2017-06-19 13:52           ` Wiles, Keith
2017-06-20 10:43             ` Hemant Agrawal
2017-06-20 14:34               ` Wiles, Keith
2017-06-21  8:17                 ` Hemant Agrawal
2017-06-21  8:32                   ` Bruce Richardson
2017-06-21  9:02                   ` Adrien Mazarguil
2017-10-02 10:16   ` Avi Kivity
2017-06-16  5:40 ` [PATCH 02/38] config: add NXP DPAA SoC build configuration Shreyansh Jain
2017-06-16  5:40 ` [PATCH 03/38] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
2017-06-16  5:40 ` [PATCH 04/38] bus/dpaa: add compatibility and helper macros Shreyansh Jain
2017-06-16  5:40 ` [PATCH 05/38] bus/dpaa: add OF parser for device scanning Shreyansh Jain
2017-06-16  5:40 ` [PATCH 06/38] bus/dpaa: introducing FMan configurations Shreyansh Jain
2017-06-16  5:40 ` [PATCH 07/38] bus/dpaa: add FMan hardware operations Shreyansh Jain
2017-06-16  5:40 ` [PATCH 08/38] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
2017-06-16  5:40 ` [PATCH 09/38] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
2017-06-16  5:40 ` [PATCH 10/38] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
2017-06-16  5:40 ` [PATCH 11/38] bus/dpaa: add QMAN interface driver Shreyansh Jain
2017-06-16  5:40 ` [PATCH 12/38] bus/dpaa: add QMan driver core routines Shreyansh Jain
2017-06-16  5:40 ` [PATCH 13/38] bus/dpaa: add BMAN driver core Shreyansh Jain
2017-06-16  5:40 ` [PATCH 14/38] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
2017-06-16  5:40 ` [PATCH 15/38] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
2017-06-16  5:40 ` [PATCH 16/38] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
2017-06-16  5:40 ` [PATCH 17/38] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
2017-06-16  5:40 ` [PATCH 18/38] doc: add NXP DPAA PMD documentation Shreyansh Jain
2017-06-28 15:51   ` Ferruh Yigit
2017-06-29 14:17     ` Shreyansh Jain
2017-06-16  5:40 ` [PATCH 19/38] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
2017-06-16  5:40 ` [PATCH 20/38] maintainers: claim ownership of DPAA Mempool driver Shreyansh Jain
2017-06-16  5:40 ` [PATCH 21/38] drivers: enable compilation " Shreyansh Jain
2017-06-16  5:40 ` [PATCH 22/38] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
2017-06-28 15:41   ` Ferruh Yigit
2017-06-29 14:29     ` Shreyansh Jain
2017-07-02  6:47       ` Shreyansh Jain
2017-06-16  5:40 ` [PATCH 23/38] config: enable NXP DPAA PMD compilation Shreyansh Jain
2017-06-16  5:40 ` [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
2017-06-28 15:45   ` Ferruh Yigit
2017-06-29 14:55     ` Shreyansh Jain
2017-06-29 15:41       ` Ferruh Yigit
2017-06-30 11:48         ` Shreyansh Jain
2017-07-04 14:50         ` Shreyansh Jain
2017-06-16  5:40 ` [PATCH 25/38] net/dpaa: add support for MTU update Shreyansh Jain
2017-06-28 15:45   ` Ferruh Yigit
2017-06-29 14:56     ` Shreyansh Jain
2017-06-29 15:43       ` Ferruh Yigit
2017-06-16  5:40 ` [PATCH 26/38] net/dpaa: add support for jumbo frames Shreyansh Jain
2017-06-16  5:40 ` [PATCH 27/38] net/dpaa: add support for link status update Shreyansh Jain
2017-06-28 15:46   ` Ferruh Yigit
2017-06-29 14:57     ` Shreyansh Jain
2017-06-16  5:40 ` [PATCH 28/38] net/dpaa: add support for device info Shreyansh Jain
2017-06-16  5:40 ` [PATCH 29/38] net/dpaa: add support for promiscuous toggle Shreyansh Jain
2017-06-16  5:41 ` [PATCH 30/38] net/dpaa: add support for multicast toggle Shreyansh Jain
2017-06-28 15:47   ` Ferruh Yigit
2017-06-29 14:58     ` Shreyansh Jain
2017-06-16  5:41 ` [PATCH 31/38] net/dpaa: add support for basic stats Shreyansh Jain
2017-06-16  5:41 ` [PATCH 32/38] net/dpaa: add support for MAC address update Shreyansh Jain
2017-06-16  5:41 ` [PATCH 33/38] net/dpaa: add support for flow control Shreyansh Jain
2017-06-28 15:47   ` Ferruh Yigit
2017-06-30  9:37     ` Shreyansh Jain
2017-06-16  5:41 ` [PATCH 34/38] net/dpaa: add support for hashed RSS Shreyansh Jain
2017-06-28 15:48   ` Ferruh Yigit
2017-06-30 10:31     ` Shreyansh Jain
2017-06-30 11:39       ` Ferruh Yigit
2017-07-04 14:49         ` Shreyansh Jain
2017-06-16  5:41 ` [PATCH 35/38] net/dpaa: add support for packet type parsing Shreyansh Jain
2017-06-28 15:50   ` Ferruh Yigit
2017-06-30 11:40     ` Shreyansh Jain
2017-07-04 12:11       ` Shreyansh Jain
2017-06-16  5:41 ` [PATCH 36/38] net/dpaa: add support for checksum offload Shreyansh Jain
2017-06-28 15:50   ` Ferruh Yigit
2017-07-04 14:48     ` Shreyansh Jain
2017-06-16  5:41 ` [PATCH 37/38] net/dpaa: add support for Scattered Rx Shreyansh Jain
2017-06-16  5:41 ` [PATCH 38/38] net/dpaa: add packet dump for debugging Shreyansh Jain
2017-06-28 15:51   ` Ferruh Yigit
2017-06-30 11:47     ` Shreyansh Jain
2017-07-04 14:43 ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
2017-07-04 14:43   ` [PATCH v2 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
2017-07-04 14:43   ` [PATCH v2 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
2017-07-04 14:43   ` [PATCH v2 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
2017-07-04 14:43   ` [PATCH v2 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
2017-07-04 14:43   ` [PATCH v2 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
2017-07-04 14:43   ` [PATCH v2 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
2017-07-04 14:43   ` [PATCH v2 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
2017-07-04 14:43   ` [PATCH v2 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 13/40] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 19/40] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 20/40] drivers: enable compilation of DPAA Mempool driver Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 21/40] maintainers: claim ownership " Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 22/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 23/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 24/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 25/40] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 26/40] net/dpaa: add support for MTU update Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 27/40] net/dpaa: add support for jumbo frames Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 28/40] net/dpaa: add support for link status update Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 29/40] net/dpaa: add support for device info and speed capability Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 30/40] net/dpaa: add support for promiscuous toggle Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 31/40] net/dpaa: add support for multicast toggle Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 32/40] net/dpaa: add support for MAC address update Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 33/40] net/dpaa: add support for basic stats Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 34/40] net/dpaa: add support for flow control Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 35/40] net/dpaa: add support for hashed RSS Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 36/40] net/dpaa: add support for packet type parsing Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 37/40] net/dpaa: add support for checksum offload Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 38/40] net/dpaa: add support for Scattered Rx Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 39/40] net/dpaa: add packet dump for debugging Shreyansh Jain
2017-07-04 14:44   ` [PATCH v2 40/40] net/dpaa: support for firmware version get API Shreyansh Jain
2017-07-05  0:13   ` [PATCH v2 00/40] Introduce NXP DPAA Bus, Mempool and PMD Thomas Monjalon
2017-07-05  4:38     ` Shreyansh Jain
2017-07-05  6:28       ` Thomas Monjalon
2017-08-23 14:11   ` [PATCH v3 " Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 13/40] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 19/40] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 20/40] drivers: enable compilation of DPAA Mempool driver Shreyansh Jain
2017-09-21 21:55       ` Thomas Monjalon
2017-09-22  6:35         ` Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 21/40] maintainers: claim ownership " Shreyansh Jain
2017-09-21 21:56       ` Thomas Monjalon
2017-09-22  6:47         ` Shreyansh Jain
2017-09-22  6:53           ` Thomas Monjalon
2017-09-22  7:37             ` Shreyansh Jain
2017-09-22  7:35               ` Thomas Monjalon
2017-09-27  8:30                 ` Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 22/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 23/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 24/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
2017-09-21 22:03       ` Thomas Monjalon
2017-09-22  6:51         ` Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 25/40] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
2017-08-23 14:11     ` [PATCH v3 26/40] net/dpaa: add support for MTU update Shreyansh Jain
2017-09-21 22:07       ` Thomas Monjalon
2017-09-22  6:48         ` Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 27/40] net/dpaa: add support for jumbo frames Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 28/40] net/dpaa: add support for link status update Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 29/40] net/dpaa: add support for device info and speed capability Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 30/40] net/dpaa: add support for promiscuous toggle Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 31/40] net/dpaa: add support for multicast toggle Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 32/40] net/dpaa: add support for MAC address update Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 33/40] net/dpaa: add support for basic stats Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 34/40] net/dpaa: add support for flow control Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 35/40] net/dpaa: add support for hashed RSS Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 36/40] net/dpaa: add support for packet type parsing Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 37/40] net/dpaa: add support for checksum offload Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 38/40] net/dpaa: add support for Scattered Rx Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 39/40] net/dpaa: add packet dump for debugging Shreyansh Jain
2017-08-23 14:12     ` [PATCH v3 40/40] net/dpaa: support for firmware version get API Shreyansh Jain
2017-09-09 11:20     ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
2017-09-09 11:20       ` [PATCH v4 01/41] config: add NXP DPAA SoC build configuration Shreyansh Jain
2017-09-09 11:20       ` [PATCH v4 02/41] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
2017-09-18 14:47         ` Ferruh Yigit
2017-09-19 13:14           ` Shreyansh Jain
2017-09-19 13:33             ` Ferruh Yigit
2017-09-25 14:32             ` Shreyansh Jain
2017-09-25 15:11               ` Ferruh Yigit
2017-09-26 11:26                 ` Shreyansh Jain
2017-09-27  9:30                 ` Shreyansh Jain
2017-09-09 11:20       ` [PATCH v4 03/41] bus/dpaa: add compatibility and helper macros Shreyansh Jain
2017-09-18 14:49         ` Ferruh Yigit
2017-09-19 13:18           ` Shreyansh Jain
2017-09-19 13:40             ` Ferruh Yigit
2017-09-19 13:57               ` Shreyansh Jain
2017-09-26 12:43                 ` Shreyansh Jain
2017-09-27 23:09                   ` Ferruh Yigit
2017-09-09 11:20       ` [PATCH v4 04/41] bus/dpaa: add OF parser for device scanning Shreyansh Jain
2017-09-18 14:49         ` Ferruh Yigit
2017-09-19 13:37           ` Shreyansh Jain
2017-09-19 14:15             ` Ferruh Yigit
2017-09-19 20:01               ` Thomas Monjalon
2017-09-20 20:39                 ` Jan Viktorin
2017-09-09 11:20       ` [PATCH v4 05/41] bus/dpaa: introducing FMan configurations Shreyansh Jain
2017-09-18 14:50         ` Ferruh Yigit
2017-09-18 16:15           ` Thomas Monjalon
2017-09-18 17:12             ` Hemant Agrawal
2017-09-19 13:43           ` Shreyansh Jain
2017-09-09 11:20       ` [PATCH v4 06/41] bus/dpaa: add FMan hardware operations Shreyansh Jain
2017-09-09 11:20       ` [PATCH v4 07/41] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
2017-09-18 14:51         ` Ferruh Yigit
2017-09-19 14:17           ` Shreyansh Jain
2017-09-09 11:20       ` [PATCH v4 08/41] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 09/41] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 10/41] bus/dpaa: add QMAN interface driver Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 11/41] bus/dpaa: add QMan driver core routines Shreyansh Jain
2017-09-18 14:53         ` Ferruh Yigit
2017-09-19 14:18           ` Shreyansh Jain
2017-09-28 11:45             ` Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 12/41] bus/dpaa: add BMAN driver core Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 13/41] bus/dpaa: add support for FMAN frame queue lookup Shreyansh Jain
2017-09-18 14:51         ` Ferruh Yigit
2017-09-28 11:47           ` Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 14/41] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 15/41] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 16/41] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 17/41] doc: add NXP DPAA PMD documentation Shreyansh Jain
2017-09-18 14:53         ` Ferruh Yigit
2017-09-19 14:25           ` Shreyansh Jain
2017-09-28 11:49             ` Shreyansh Jain
2017-09-18 18:33         ` Mcnamara, John
2017-09-09 11:21       ` [PATCH v4 18/41] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 19/41] mempool/dpaa: add support for NXP DPAA Mempool Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 20/41] drivers: enable compilation of DPAA Mempool driver Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 21/41] maintainers: claim ownership " Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 22/41] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 23/41] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 24/41] config: enable NXP DPAA PMD compilation Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 25/41] net/dpaa: add support for Tx and Rx queue setup Shreyansh Jain
2017-09-18 14:55         ` Ferruh Yigit
2017-09-21 12:59           ` Shreyansh Jain
2017-09-28 11:51             ` Shreyansh Jain
2017-09-18 14:55         ` Ferruh Yigit
2017-09-21 13:00           ` Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 26/41] net/dpaa: add support for MTU update Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 27/41] net/dpaa: add support for jumbo frames Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 28/41] net/dpaa: add support for link status update Shreyansh Jain
2017-09-18 14:56         ` Ferruh Yigit
2017-09-21 13:09           ` Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 29/41] net/dpaa: add support for device info and speed capability Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 30/41] net/dpaa: add support for promiscuous toggle Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 31/41] net/dpaa: add support for multicast toggle Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 32/41] net/dpaa: add support for MAC address update Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 33/41] net/dpaa: add support for basic stats Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 34/41] net/dpaa: add support for flow control Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 35/41] net/dpaa: add support for hashed RSS Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 36/41] net/dpaa: add support for packet type parsing Shreyansh Jain
2017-09-18 14:56         ` Ferruh Yigit
2017-09-21 13:16           ` Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 37/41] net/dpaa: add support for checksum offload Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 38/41] net/dpaa: add support for Scattered Rx Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 39/41] net/dpaa: add packet dump for debugging Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 40/41] net/dpaa: support for firmware version get API Shreyansh Jain
2017-09-18 14:57         ` Ferruh Yigit
2017-09-21 13:18           ` Shreyansh Jain
2017-09-09 11:21       ` [PATCH v4 41/41] net/dpaa: support for extended statistics Shreyansh Jain
2017-09-18 14:57         ` Ferruh Yigit
2017-09-21 13:26           ` Shreyansh Jain
2017-09-27  8:26             ` Shreyansh Jain
2017-09-27 23:37               ` Ferruh Yigit
2017-09-28  2:30                 ` Shreyansh Jain
2017-09-28  2:52                   ` Shreyansh Jain
2017-09-28  9:26                     ` Ferruh Yigit
2017-09-21 22:09       ` [PATCH v4 00/41] Introduce NXP DPAA Bus, Mempool and PMD Thomas Monjalon
2017-09-21 22:10       ` Thomas Monjalon
2017-09-22  6:25         ` Shreyansh Jain
2017-09-22  6:33           ` Thomas Monjalon
2017-09-22 13:06         ` Shreyansh Jain
2017-09-22 13:13           ` Thomas Monjalon
2017-09-22 14:00             ` Shreyansh Jain
2017-09-22 14:19               ` Thomas Monjalon
2017-09-23 10:39                 ` Shreyansh Jain
2017-09-28 11:33       ` [PATCH v5 00/40] " Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 13/40] bus/dpaa: support FMAN frame queue lookup Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 19/40] mempool/dpaa: support NXP DPAA Mempool Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 20/40] config: enable compilation of DPAA Mempool driver Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 21/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 22/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 23/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 24/40] net/dpaa: support Tx and Rx queue setup Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 25/40] net/dpaa: support MTU update Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 26/40] net/dpaa: support jumbo frames Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 27/40] net/dpaa: support link status update Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 28/40] net/dpaa: support device info and speed capability Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 29/40] net/dpaa: support promiscuous toggle Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 30/40] net/dpaa: support multicast toggle Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 31/40] net/dpaa: support MAC address update Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 32/40] net/dpaa: support basic stats Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 33/40] net/dpaa: support flow control Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 34/40] net/dpaa: support hashed RSS Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 35/40] net/dpaa: support packet type parsing Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 36/40] net/dpaa: support checksum offload Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 37/40] net/dpaa: support Scattered Rx Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 38/40] net/dpaa: add packet dump for debugging Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 39/40] net/dpaa: support firmware version get API Shreyansh Jain
2017-09-28 11:33         ` [PATCH v5 40/40] net/dpaa: support extended statistics Shreyansh Jain
2017-09-28 12:29         ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 01/40] config: add NXP DPAA SoC build configuration Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 02/40] bus/dpaa: introduce NXP DPAA Bus driver skeleton Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 03/40] bus/dpaa: add compatibility and helper macros Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 04/40] bus/dpaa: add OF parser for device scanning Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 05/40] bus/dpaa: introducing FMan configurations Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 06/40] bus/dpaa: add FMan hardware operations Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 07/40] bus/dpaa: enable DPAA IOCTL portal driver Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 08/40] bus/dpaa: add layer for interrupt emulation using pthread Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 09/40] bus/dpaa: add routines for managing a RB tree Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 10/40] bus/dpaa: add QMAN interface driver Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 11/40] bus/dpaa: add QMan driver core routines Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 12/40] bus/dpaa: add BMAN driver core Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 13/40] bus/dpaa: support FMAN frame queue lookup Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 14/40] bus/dpaa: add BMan hardware interfaces Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 15/40] bus/dpaa: add fman flow control threshold setting Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 16/40] bus/dpaa: integrate DPAA Bus with hardware blocks Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 17/40] doc: add NXP DPAA PMD documentation Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 18/40] bus/dpaa: add DPAA mempool logging macros Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 19/40] mempool/dpaa: support NXP DPAA Mempool Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 20/40] config: enable compilation of DPAA Mempool driver Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 21/40] bus/dpaa: add DPAA PMD logging macros Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 22/40] net/dpaa: add NXP DPAA PMD driver skeleton Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 23/40] config: enable NXP DPAA PMD compilation Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 24/40] net/dpaa: support Tx and Rx queue setup Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 25/40] net/dpaa: support MTU update Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 26/40] net/dpaa: support jumbo frames Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 27/40] net/dpaa: support link status update Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 28/40] net/dpaa: support device info and speed capability Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 29/40] net/dpaa: support promiscuous toggle Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 30/40] net/dpaa: support multicast toggle Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 31/40] net/dpaa: support MAC address update Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 32/40] net/dpaa: support basic stats Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 33/40] net/dpaa: support flow control Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 34/40] net/dpaa: support hashed RSS Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 35/40] net/dpaa: support packet type parsing Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 36/40] net/dpaa: support checksum offload Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 37/40] net/dpaa: support Scattered Rx Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 38/40] net/dpaa: add packet dump for debugging Shreyansh Jain
2017-09-28 12:29           ` [PATCH v6 39/40] net/dpaa: support firmware version get API Shreyansh Jain
2017-09-28 12:30           ` [PATCH v6 40/40] net/dpaa: support extended statistics Shreyansh Jain
2017-09-28 14:10           ` [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation Shreyansh Jain
2017-09-28 14:10             ` [PATCH 2/2] config: fix DPAA PMD linking Shreyansh Jain
2017-09-28 14:12             ` [PATCH 1/2] bus/dpaa: fix incorrect ccsr mem allocation Shreyansh Jain
2017-10-02 23:05             ` Ferruh Yigit
2017-10-02 23:05           ` [PATCH v6 00/40] Introduce NXP DPAA Bus, Mempool and PMD Ferruh Yigit
