From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: <dev@dpdk.org>
Cc: <thomas.monjalon@6wind.com>, <bruce.richardson@intel.com>,
	<shreyansh.jain@nxp.com>, <john.mcnamara@intel.com>,
	<ferruh.yigit@intel.com>, <jerin.jacob@caviumnetworks.com>
Subject: [PATCH v1 04/22] bus/fslmc: add QBMAN driver to bus
Date: Fri, 17 Mar 2017 18:06:23 +0530	[thread overview]
Message-ID: <1489754201-1027-5-git-send-email-hemant.agrawal@nxp.com> (raw)
In-Reply-To: <1489754201-1027-1-git-send-email-hemant.agrawal@nxp.com>

QBMAN is a hardware block which interfaces with the other
accelerating hardware blocks (e.g., WRIOP) on NXP's DPAA2
SoC for queue, buffer and packet scheduling.

This patch introduces a userspace driver for interfacing with
the QBMAN hw block.

The qbman-portal component provides APIs to do the low level
hardware bit twiddling for operations such as (a usage sketch
follows the list):
  - initializing QMan software portals
  - building and sending portal commands
  - portal interrupt configuration and processing
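
A minimal usage sketch (illustrative only; portal discovery and the
variable names shown are assumptions based on this patch's API, not
code taken from the patch):

    struct qbman_swp_desc pd;   /* filled in from VFIO/MC discovery */
    struct qbman_swp *swp = qbman_swp_init(&pd);

    if (swp) {
            /* enqueue/dequeue via qbman_swp_enqueue()/qbman_swp_pull() */
            qbman_swp_finish(swp);
    }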

The same/similar code is used in the kernel; a compat file is used
to make it work in user space.

Signed-off-by: Geoff Thorpe <Geoff.Thorpe@nxp.com>
Signed-off-by: Roy Pledge <Roy.Pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/Makefile                         |    4 +
 drivers/bus/fslmc/qbman/include/compat.h           |  406 ++++++
 drivers/bus/fslmc/qbman/include/fsl_qbman_base.h   |  160 +++
 drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h | 1093 ++++++++++++++
 drivers/bus/fslmc/qbman/qbman_portal.c             | 1496 ++++++++++++++++++++
 drivers/bus/fslmc/qbman/qbman_portal.h             |  277 ++++
 drivers/bus/fslmc/qbman/qbman_private.h            |  170 +++
 drivers/bus/fslmc/qbman/qbman_sys.h                |  385 +++++
 drivers/bus/fslmc/qbman/qbman_sys_decl.h           |   73 +
 drivers/bus/fslmc/rte_bus_fslmc_version.map        |   19 +
 10 files changed, 4083 insertions(+)
 create mode 100644 drivers/bus/fslmc/qbman/include/compat.h
 create mode 100644 drivers/bus/fslmc/qbman/include/fsl_qbman_base.h
 create mode 100644 drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
 create mode 100644 drivers/bus/fslmc/qbman/qbman_portal.c
 create mode 100644 drivers/bus/fslmc/qbman/qbman_portal.h
 create mode 100644 drivers/bus/fslmc/qbman/qbman_private.h
 create mode 100644 drivers/bus/fslmc/qbman/qbman_sys.h
 create mode 100644 drivers/bus/fslmc/qbman/qbman_sys_decl.h

diff --git a/drivers/bus/fslmc/Makefile b/drivers/bus/fslmc/Makefile
index ad28d8a..7368ce0 100644
--- a/drivers/bus/fslmc/Makefile
+++ b/drivers/bus/fslmc/Makefile
@@ -50,6 +50,7 @@ endif
 CFLAGS += "-Wno-strict-aliasing"
 
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
 
 # versioning export map
@@ -58,6 +59,9 @@ EXPORT_MAP := rte_bus_fslmc_version.map
 # library version
 LIBABIVER := 1
 
+SRCS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += \
+        qbman/qbman_portal.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc_bus.c
 
 # library dependencies
diff --git a/drivers/bus/fslmc/qbman/include/compat.h b/drivers/bus/fslmc/qbman/include/compat.h
new file mode 100644
index 0000000..28d7952
--- /dev/null
+++ b/drivers/bus/fslmc/qbman/include/compat.h
@@ -0,0 +1,406 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright (c) 2008-2016 Freescale Semiconductor, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef HEADER_COMPAT_H
+#define HEADER_COMPAT_H
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+
+#include <sched.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <stddef.h>
+#include <errno.h>
+#include <string.h>
+#include <pthread.h>
+#include <net/ethernet.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <ctype.h>
+#include <malloc.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <limits.h>
+#include <assert.h>
+#include <dirent.h>
+#include <inttypes.h>
+#include <error.h>
+#include <rte_atomic.h>
+
+/* The following definitions are primarily to allow the single-source driver
+ * interfaces to be included by arbitrary program code. Ie. for interfaces that
+ * are also available in kernel-space, these definitions provide compatibility
+ * with certain attributes and types used in those interfaces.
+ */
+
+/* Required compiler attributes */
+#define __user
+#define likely(x)	__builtin_expect(!!(x), 1)
+#define unlikely(x)	__builtin_expect(!!(x), 0)
+#define ____cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
+#undef container_of
+#define container_of(ptr, type, member) ({ \
+		typeof(((type *)0)->member)(*__mptr) = (ptr); \
+		(type *)((char *)__mptr - offsetof(type, member)); })
+#define __stringify_1(x) #x
+#define __stringify(x)	__stringify_1(x)
+
+#ifdef ARRAY_SIZE
+#undef ARRAY_SIZE
+#endif
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
+
+/* Required types */
+typedef uint8_t		u8;
+typedef uint16_t	u16;
+typedef uint32_t	u32;
+typedef uint64_t	u64;
+typedef uint64_t	dma_addr_t;
+typedef cpu_set_t	cpumask_t;
+typedef	u32		compat_uptr_t;
+
+static inline void __user *compat_ptr(compat_uptr_t uptr)
+{
+	return (void __user *)(unsigned long)uptr;
+}
+
+static inline compat_uptr_t ptr_to_compat(void __user *uptr)
+{
+	return (u32)(unsigned long)uptr;
+}
+
+/* I/O operations */
+static inline u32 in_be32(volatile void *__p)
+{
+	volatile u32 *p = __p;
+	return *p;
+}
+
+static inline void out_be32(volatile void *__p, u32 val)
+{
+	volatile u32 *p = __p;
+	*p = val;
+}
+
+/* Debugging */
+#define prflush(fmt, args...) \
+	do { \
+		printf(fmt, ##args); \
+		fflush(stdout); \
+	} while (0)
+#define pr_crit(fmt, args...)	 prflush("CRIT:" fmt, ##args)
+#define pr_err(fmt, args...)	 prflush("ERR:" fmt, ##args)
+#define pr_warn(fmt, args...)	 prflush("WARN:" fmt, ##args)
+#define pr_info(fmt, args...)	 prflush(fmt, ##args)
+
+#ifdef pr_debug
+#undef pr_debug
+#endif
+#define pr_debug(fmt, args...) {}
+#define might_sleep_if(c) {}
+#define msleep(x) {}
+#define WARN_ON(c, str) \
+do { \
+	static int warned_##__LINE__; \
+	if ((c) && !warned_##__LINE__) { \
+		pr_warn("%s\n", str); \
+		pr_warn("(%s:%d)\n", __FILE__, __LINE__); \
+		warned_##__LINE__ = 1; \
+	} \
+} while (0)
+#define QBMAN_BUG_ON(c) WARN_ON(c, "BUG")
+
+#define ALIGN(x, a) (((x) + ((typeof(x))(a) - 1)) & ~((typeof(x))(a) - 1))
+
+/****************/
+/* Linked-lists */
+/****************/
+
+struct list_head {
+	struct list_head *prev;
+	struct list_head *next;
+};
+
+#define LIST_HEAD(n) \
+struct list_head n = { \
+	.prev = &n, \
+	.next = &n \
+}
+
+#define INIT_LIST_HEAD(p) \
+do { \
+	struct list_head *__p298 = (p); \
+	__p298->next = __p298; \
+	__p298->prev = __p298->next; \
+} while (0)
+#define list_entry(node, type, member) \
+	(type *)((void *)node - offsetof(type, member))
+#define list_empty(p) \
+({ \
+	const struct list_head *__p298 = (p); \
+	((__p298->next == __p298) && (__p298->prev == __p298)); \
+})
+#define list_add(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->next = __l298->next; \
+	__p298->prev = __l298; \
+	__l298->next->prev = __p298; \
+	__l298->next = __p298; \
+} while (0)
+#define list_add_tail(p, l) \
+do { \
+	struct list_head *__p298 = (p); \
+	struct list_head *__l298 = (l); \
+	__p298->prev = __l298->prev; \
+	__p298->next = __l298; \
+	__l298->prev->next = __p298; \
+	__l298->prev = __p298; \
+} while (0)
+#define list_for_each(i, l)				\
+	for (i = (l)->next; i != (l); i = i->next)
+#define list_for_each_safe(i, j, l)			\
+	for (i = (l)->next, j = i->next; i != (l);	\
+	     i = j, j = i->next)
+#define list_for_each_entry(i, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name); &i->name != (l); \
+		i = list_entry(i->name.next, typeof(*i), name))
+#define list_for_each_entry_safe(i, j, l, name) \
+	for (i = list_entry((l)->next, typeof(*i), name), \
+		j = list_entry(i->name.next, typeof(*j), name); \
+		&i->name != (l); \
+		i = j, j = list_entry(j->name.next, typeof(*j), name))
+#define list_del(i) \
+do { \
+	(i)->next->prev = (i)->prev; \
+	(i)->prev->next = (i)->next; \
+} while (0)
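+
+/* Illustrative use of the list helpers above (a sketch, not driver code;
+ * "struct foo" and "foo_list" are hypothetical names):
+ *
+ *	struct foo { int x; struct list_head node; };
+ *	LIST_HEAD(foo_list);
+ *	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL), *i;
+ *
+ *	list_add_tail(&f->node, &foo_list);
+ *	list_for_each_entry(i, &foo_list, node)
+ *		pr_info("%d\n", i->x);
+ */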
+
+/* Other miscellaneous interfaces our APIs depend on; */
+
+#define lower_32_bits(x) ((u32)(x))
+#define upper_32_bits(x) ((u32)(((x) >> 16) >> 16))
+
+/* Compiler/type stuff */
+typedef unsigned int	gfp_t;
+typedef uint32_t	phandle;
+
+#define __iomem
+#define EINTR		4
+#define ENODEV		19
+#define GFP_KERNEL	0
+#define __raw_readb(p)	(*(const volatile unsigned char *)(p))
+#define __raw_readl(p)	(*(const volatile unsigned int *)(p))
+#define __raw_writel(v, p) {*(volatile unsigned int *)(p) = (v); }
+
+/* memcpy() stuff - when you know alignments in advance */
+#ifdef CONFIG_TRY_BETTER_MEMCPY
+static inline void copy_words(void *dest, const void *src, size_t sz)
+{
+	u32 *__dest = dest;
+	const u32 *__src = src;
+	size_t __sz = sz >> 2;
+
+	QBMAN_BUG_ON((unsigned long)dest & 0x3);
+	QBMAN_BUG_ON((unsigned long)src & 0x3);
+	QBMAN_BUG_ON(sz & 0x3);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_shorts(void *dest, const void *src, size_t sz)
+{
+	u16 *__dest = dest;
+	const u16 *__src = src;
+	size_t __sz = sz >> 1;
+
+	QBMAN_BUG_ON((unsigned long)dest & 0x1);
+	QBMAN_BUG_ON((unsigned long)src & 0x1);
+	QBMAN_BUG_ON(sz & 0x1);
+	while (__sz--)
+		*(__dest++) = *(__src++);
+}
+
+static inline void copy_bytes(void *dest, const void *src, size_t sz)
+{
+	u8 *__dest = dest;
+	const u8 *__src = src;
+
+	while (sz--)
+		*(__dest++) = *(__src++);
+}
+#else
+#define copy_words memcpy
+#define copy_shorts memcpy
+#define copy_bytes memcpy
+#endif
+
+/* Completion stuff */
+#define DECLARE_COMPLETION(n) int n = 0
+#define complete(n) { *n = 1; }
+#define wait_for_completion(n) \
+do { \
+	while (!*n) { \
+		bman_poll(); \
+		qman_poll(); \
+	} \
+	*n = 0; \
+} while (0)
+
+/* Allocator stuff */
+#define kmalloc(sz, t)	malloc(sz)
+#define vmalloc(sz)	malloc(sz)
+#define kfree(p)	{ if (p) free(p); }
+static inline void *kzalloc(size_t sz, gfp_t __foo __rte_unused)
+{
+	void *ptr = malloc(sz);
+
+	if (ptr)
+		memset(ptr, 0, sz);
+	return ptr;
+}
+
+static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
+{
+	void *p;
+
+	if (posix_memalign(&p, 4096, 4096))
+		return 0;
+	memset(p, 0, 4096);
+	return (unsigned long)p;
+}
+
+static inline void free_page(unsigned long p)
+{
+	free((void *)p);
+}
+
+/* Bitfield stuff. */
+#define BITS_PER_ULONG	(sizeof(unsigned long) << 3)
+#define SHIFT_PER_ULONG	(((1 << 5) == BITS_PER_ULONG) ? 5 : 6)
+#define BITS_MASK(idx)	((unsigned long)1 << ((idx) & (BITS_PER_ULONG - 1)))
+#define BITS_IDX(idx)	((idx) >> SHIFT_PER_ULONG)
+static inline unsigned long test_bits(unsigned long mask,
+				      volatile unsigned long *p)
+{
+	return *p & mask;
+}
+
+static inline int test_bit(int idx, volatile unsigned long *bits)
+{
+	return test_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+static inline void set_bits(unsigned long mask, volatile unsigned long *p)
+{
+	*p |= mask;
+}
+
+static inline void set_bit(int idx, volatile unsigned long *bits)
+{
+	set_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+static inline void clear_bits(unsigned long mask, volatile unsigned long *p)
+{
+	*p &= ~mask;
+}
+
+static inline void clear_bit(int idx, volatile unsigned long *bits)
+{
+	clear_bits(BITS_MASK(idx), bits + BITS_IDX(idx));
+}
+
+static inline unsigned long test_and_set_bits(unsigned long mask,
+					      volatile unsigned long *p)
+{
+	unsigned long ret = test_bits(mask, p);
+
+	set_bits(mask, p);
+	return ret;
+}
+
+static inline int test_and_set_bit(int idx, volatile unsigned long *bits)
+{
+	int ret = test_bit(idx, bits);
+
+	set_bit(idx, bits);
+	return ret;
+}
+
+static inline int test_and_clear_bit(int idx, volatile unsigned long *bits)
+{
+	int ret = test_bit(idx, bits);
+
+	clear_bit(idx, bits);
+	return ret;
+}
+
+static inline int find_next_zero_bit(unsigned long *bits, int limit, int idx)
+{
+	while ((++idx < limit) && test_bit(idx, bits))
+		;
+	return idx;
+}
+
+static inline int find_first_zero_bit(unsigned long *bits, int limit)
+{
+	int idx = 0;
+
+	while (test_bit(idx, bits) && (++idx < limit))
+		;
+	return idx;
+}
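+
+/* Illustrative allocator-style use of the bit helpers above (a sketch,
+ * not driver code; assumes 64-bit unsigned long):
+ *
+ *	unsigned long map[2] = {0, 0};		(128-slot bitmap)
+ *	int idx = find_first_zero_bit(map, 128);
+ *
+ *	if (idx < 128)
+ *		set_bit(idx, map);		(claim the slot)
+ *	clear_bit(idx, map);			(free it again)
+ */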
+
+static inline u64 div64_u64(u64 n, u64 d)
+{
+	return n / d;
+}
+
+#define atomic_t                rte_atomic32_t
+#define atomic_read(v)          rte_atomic32_read(v)
+#define atomic_set(v, i)        rte_atomic32_set(v, i)
+
+#define atomic_inc(v)           rte_atomic32_add(v, 1)
+#define atomic_dec(v)           rte_atomic32_sub(v, 1)
+
+#define atomic_inc_and_test(v)  rte_atomic32_inc_and_test(v)
+#define atomic_dec_and_test(v)  rte_atomic32_dec_and_test(v)
+
+#define atomic_inc_return(v)    rte_atomic32_add_return(v, 1)
+#define atomic_dec_return(v)    rte_atomic32_sub_return(v, 1)
+#define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
+
+#endif /* HEADER_COMPAT_H */
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_base.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_base.h
new file mode 100644
index 0000000..ee4b772
--- /dev/null
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_base.h
@@ -0,0 +1,160 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright (C) 2014 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _FSL_QBMAN_BASE_H
+#define _FSL_QBMAN_BASE_H
+
+typedef uint64_t  dma_addr_t;
+
+/**
+ * DOC: QBMan basic structures
+ *
+ * The QBMan block descriptor, software portal descriptor and Frame descriptor
+ * are defined here.
+ *
+ */
+
+#define QMAN_REV_4000   0x04000000
+#define QMAN_REV_4100   0x04010000
+#define QMAN_REV_4101   0x04010001
+
+/**
+ * struct qbman_block_desc - qbman block descriptor structure
+ * @ccsr_reg_bar: CCSR register map.
+ * @irq_rerr: Recoverable error interrupt line.
+ * @irq_nrerr: Non-recoverable error interrupt line
+ *
+ * Descriptor for a QBMan instance on the SoC. On partitions/targets that do not
+ * control this QBMan instance, these values may simply be place-holders. The
+ * idea is simply that we be able to distinguish between them, eg. so that SWP
+ * descriptors can identify which QBMan instance they belong to.
+ */
+struct qbman_block_desc {
+	void *ccsr_reg_bar;
+	int irq_rerr;
+	int irq_nrerr;
+};
+
+enum qbman_eqcr_mode {
+	qman_eqcr_vb_ring = 2, /* Valid bit, with eqcr in ring mode */
+	qman_eqcr_vb_array, /* Valid bit, with eqcr in array mode */
+};
+
+/**
+ * struct qbman_swp_desc - qbman software portal descriptor structure
+ * @block: The QBMan instance.
+ * @cena_bar: Cache-enabled portal register map.
+ * @cinh_bar: Cache-inhibited portal register map.
+ * @irq: -1 if unused (or unassigned)
+ * @idx: SWPs within a QBMan are indexed. -1 if opaque to the user.
+ * @qman_version: the qman version.
+ * @eqcr_mode: Select the eqcr mode, currently only valid bit ring mode and
+ * valid bit array mode are supported.
+ *
+ * Descriptor for a QBMan software portal, expressed in terms that make sense to
+ * the user context. Ie. on MC, this information is likely to be true-physical,
+ * and instantiated statically at compile-time. On GPP, this information is
+ * likely to be obtained via "discovery" over a partition's "MC bus"
+ * (ie. in response to a MC portal command), and would take into account any
+ * virtualisation of the GPP user's address space and/or interrupt numbering.
+ */
+struct qbman_swp_desc {
+	const struct qbman_block_desc *block;
+	uint8_t *cena_bar;
+	uint8_t *cinh_bar;
+	int irq;
+	int idx;
+	uint32_t qman_version;
+	enum qbman_eqcr_mode eqcr_mode;
+};
+
+/* Driver object for managing a QBMan portal */
+struct qbman_swp;
+
+/**
+ * struct qbman_fd - basic structure for qbman frame descriptor
+ * @words: for easier/faster copying the whole FD structure.
+ * @addr_lo: the lower 32 bits of the address in FD.
+ * @addr_hi: the upper 32 bits of the address in FD.
+ * @len: the length field in FD.
+ * @bpid_offset: represent the bpid and offset fields in FD. offset in
+ * the MS 16 bits, BPID in the LS 16 bits.
+ * @frc: frame context
+ * @ctrl: the 32bit control bits including dd, sc,... va, err.
+ * @flc_lo: the lower 32bit of flow context.
+ * @flc_hi: the upper 32bits of flow context.
+ *
+ * Place-holder for FDs, we represent it via the simplest form that we need for
+ * now. Different overlays may be needed to support different options, etc. (It
+ * is impractical to define One True Struct, because the resulting encoding
+ * routines (lots of read-modify-writes) would be worst-case performance whether
+ * or not circumstances required them.)
+ *
+ * Note, as with all data-structures exchanged between software and hardware (be
+ * they located in the portal register map or DMA'd to and from main-memory),
+ * the driver ensures that the caller of the driver API sees the data-structures
+ * in host-endianness. "struct qbman_fd" is no exception. The 32-bit words
+ * contained within this structure are represented in host-endianness, even if
+ * hardware always treats them as little-endian. As such, if any of these fields
+ * are interpreted in a binary (rather than numerical) fashion by hardware
+ * blocks (eg. accelerators), then the user should be careful. We illustrate
+ * with an example;
+ *
+ * Suppose the desired behaviour of an accelerator is controlled by the "frc"
+ * field of the FDs that are sent to it. Suppose also that the behaviour desired
+ * by the user corresponds to an "frc" value which is expressed as the literal
+ * sequence of bytes 0xfe, 0xed, 0xab, and 0xba. So "frc" should be the 32-bit
+ * value in which 0xfe is the first byte and 0xba is the last byte, and as
+ * hardware is little-endian, this amounts to a 32-bit "value" of 0xbaabedfe. If
+ * the software is little-endian also, this can simply be achieved by setting
+ * frc=0xbaabedfe. On the other hand, if software is big-endian, it should set
+ * frc=0xfeedabba! The best way of avoiding trouble with this sort of thing is
+ * to treat the 32-bit words as numerical values, in which the offset of a field
+ * from the beginning of the first byte (as required or generated by hardware)
+ * is numerically encoded by a left-shift (ie. by raising the field to a
+ * corresponding power of 2).  Ie. in the current example, software could set
+ * "frc" in the following way, and it would work correctly on both little-endian
+ * and big-endian operation;
+ *    fd.frc = (0xfe << 0) | (0xed << 8) | (0xab << 16) | (0xba << 24);
+ */
+struct qbman_fd {
+	union {
+		uint32_t words[8];
+		struct qbman_fd_simple {
+			uint32_t addr_lo;
+			uint32_t addr_hi;
+			uint32_t len;
+			uint32_t bpid_offset;
+			uint32_t frc;
+			uint32_t ctrl;
+			uint32_t flc_lo;
+			uint32_t flc_hi;
+		} simple;
+	};
+};
+
+#endif /* !_FSL_QBMAN_BASE_H */
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
new file mode 100644
index 0000000..7731772
--- /dev/null
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -0,0 +1,1093 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright (C) 2014 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _FSL_QBMAN_PORTAL_H
+#define _FSL_QBMAN_PORTAL_H
+
+#include <fsl_qbman_base.h>
+
+/**
+ * DOC: QBMan portal APIs to implement the following functions:
+ * - Initialize and destroy Software portal object.
+ * - Read and write Software portal interrupt registers.
+ * - Enqueue, including setting the enqueue descriptor, and issuing enqueue
+ *   command etc.
+ * - Dequeue, including setting the dequeue descriptor, issuing dequeue command,
+ *   parsing the dequeue response in DQRR and memory, parsing the state change
+ *   notifications etc.
+ * - Release, including setting the release descriptor, and issuing the buffer
+ *   release command.
+ * - Acquire, acquire the buffer from the given buffer pool.
+ * - FQ management.
+ * - Channel management, enable/disable CDAN with or without context.
+ */
+
+/**
+ * qbman_swp_init() - Create a functional object representing the given
+ * QBMan portal descriptor.
+ * @d: the given qbman swp descriptor
+ *
+ * Return qbman_swp portal object for success, NULL if the object cannot
+ * be created.
+ */
+struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d);
+
+/**
+ * qbman_swp_finish() - Destroy the functional object representing the given
+ * QBMan portal descriptor.
+ * @p: the qbman_swp object to be destroyed.
+ *
+ */
+void qbman_swp_finish(struct qbman_swp *p);
+
+/**
+ * qbman_swp_get_desc() - Get the descriptor of the given portal object.
+ * @p: the given portal object.
+ *
+ * Return the descriptor for this portal.
+ */
+const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p);
+
+	/**************/
+	/* Interrupts */
+	/**************/
+
+/* EQCR ring interrupt */
+#define QBMAN_SWP_INTERRUPT_EQRI ((uint32_t)0x00000001)
+/* Enqueue command dispatched interrupt */
+#define QBMAN_SWP_INTERRUPT_EQDI ((uint32_t)0x00000002)
+/* DQRR non-empty interrupt */
+#define QBMAN_SWP_INTERRUPT_DQRI ((uint32_t)0x00000004)
+/* RCR ring interrupt */
+#define QBMAN_SWP_INTERRUPT_RCRI ((uint32_t)0x00000008)
+/* Release command dispatched interrupt */
+#define QBMAN_SWP_INTERRUPT_RCDI ((uint32_t)0x00000010)
+/* Volatile dequeue command interrupt */
+#define QBMAN_SWP_INTERRUPT_VDCI ((uint32_t)0x00000020)
+
+/**
+ * qbman_swp_interrupt_get_vanish() - Get the data in software portal
+ * interrupt status disable register.
+ * @p: the given software portal object.
+ *
+ * Return the settings in SWP_ISDR register.
+ */
+uint32_t qbman_swp_interrupt_get_vanish(struct qbman_swp *p);
+
+/**
+ * qbman_swp_interrupt_set_vanish() - Set the data in software portal
+ * interrupt status disable register.
+ * @p: the given software portal object.
+ * @mask: The value to set in the SWP_ISDR register.
+ */
+void qbman_swp_interrupt_set_vanish(struct qbman_swp *p, uint32_t mask);
+
+/**
+ * qbman_swp_interrupt_read_status() - Get the data in software portal
+ * interrupt status register.
+ * @p: the given software portal object.
+ *
+ * Return the settings in SWP_ISR register.
+ */
+uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
+
+/**
+ * qbman_swp_interrupt_clear_status() - Set the data in software portal
+ * interrupt status register.
+ * @p: the given software portal object.
+ * @mask: The value to set in SWP_ISR register.
+ */
+void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask);
+
+/**
+ * qbman_swp_interrupt_get_trigger() - Get the data in software portal
+ * interrupt enable register.
+ * @p: the given software portal object.
+ *
+ * Return the settings in SWP_IER register.
+ */
+uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p);
+
+/**
+ * qbman_swp_interrupt_set_trigger() - Set the data in software portal
+ * interrupt enable register.
+ * @p: the given software portal object.
+ * @mask: The value to set in SWP_IER register.
+ */
+void qbman_swp_interrupt_set_trigger(struct qbman_swp *p, uint32_t mask);
+
+/**
+ * qbman_swp_interrupt_get_inhibit() - Get the data in software portal
+ * interrupt inhibit register.
+ * @p: the given software portal object.
+ *
+ * Return the settings in SWP_IIR register.
+ */
+int qbman_swp_interrupt_get_inhibit(struct qbman_swp *p);
+
+/**
+ * qbman_swp_interrupt_set_inhibit() - Set the data in software portal
+ * interrupt inhibit register.
+ * @p: the given software portal object.
+ * @inhibit: whether to inhibit the IRQs.
+ */
+void qbman_swp_interrupt_set_inhibit(struct qbman_swp *p, int inhibit);
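+
+/* Illustrative interrupt setup (a sketch, not driver code): enable the
+ * DQRR non-empty trigger and stop inhibiting portal interrupts:
+ *
+ *	qbman_swp_interrupt_set_trigger(p, QBMAN_SWP_INTERRUPT_DQRI);
+ *	qbman_swp_interrupt_clear_status(p, 0xffffffff);
+ *	qbman_swp_interrupt_set_inhibit(p, 0);
+ */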
+
+	/************/
+	/* Dequeues */
+	/************/
+
+/**
+ * struct qbman_result - structure for qbman dequeue response and/or
+ * notification.
+ * @dont_manipulate_directly: the 16 32-bit words holding the whole
+ * possible qbman dequeue result.
+ */
+struct qbman_result {
+	uint32_t dont_manipulate_directly[16];
+};
+
+/* TODO:
+ * A DQRI interrupt can be generated when there are dequeue results on the
+ * portal's DQRR (this mechanism does not deal with "pull" dequeues to
+ * user-supplied 'storage' addresses). There are two parameters to this
+ * interrupt source, one is a threshold and the other is a timeout. The
+ * interrupt will fire if either the fill-level of the ring exceeds 'thresh', or
+ * if the ring has been non-empty for longer than 'timeout' nanoseconds.
+ * For timeout, an approximation to the desired nanosecond-granularity value is
+ * made, so there are get and set APIs to allow the user to see what actual
+ * timeout is set (compared to the timeout that was requested).
+ */
+int qbman_swp_dequeue_thresh(struct qbman_swp *s, unsigned int thresh);
+int qbman_swp_dequeue_set_timeout(struct qbman_swp *s, unsigned int timeout);
+int qbman_swp_dequeue_get_timeout(struct qbman_swp *s, unsigned int *timeout);
+
+/* ------------------- */
+/* Push-mode dequeuing */
+/* ------------------- */
+
+/* The user of a portal can enable and disable push-mode dequeuing of up to 16
+ * channels independently. It does not specify this toggling by channel IDs, but
+ * rather by specifying the index (from 0 to 15) that has been mapped to the
+ * desired channel.
+ */
+
+/**
+ * qbman_swp_push_get() - Get the push dequeue setup.
+ * @s: the software portal object.
+ * @channel_idx: the channel index to query.
+ * @enabled: returned boolean to show whether the push dequeue is enabled for
+ * the given channel.
+ */
+void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled);
+
+/**
+ * qbman_swp_push_set() - Enable or disable push dequeue.
+ * @s: the software portal object.
+ * @channel_idx: the channel index.
+ * @enable: enable or disable push dequeue.
+ *
+ * The user of a portal can enable and disable push-mode dequeuing of up to 16
+ * channels independently. It does not specify this toggling by channel IDs, but
+ * rather by specifying the index (from 0 to 15) that has been mapped to the
+ * desired channel.
+ */
+void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable);
+
+/* ------------------- */
+/* Pull-mode dequeuing */
+/* ------------------- */
+
+/**
+ * struct qbman_pull_desc - the structure for pull dequeue descriptor
+ * @dont_manipulate_directly: the 6 32-bit words holding the whole
+ * possible settings for the pull dequeue descriptor.
+ */
+struct qbman_pull_desc {
+	uint32_t dont_manipulate_directly[6];
+};
+
+enum qbman_pull_type_e {
+	/* dequeue with priority precedence, respect intra-class scheduling */
+	qbman_pull_type_prio = 1,
+	/* dequeue with active FQ precedence, respect ICS */
+	qbman_pull_type_active,
+	/* dequeue with active FQ precedence, no ICS */
+	qbman_pull_type_active_noics
+};
+
+/**
+ * qbman_pull_desc_clear() - Clear the contents of a descriptor to
+ * default/starting state.
+ * @d: the pull dequeue descriptor to be cleared.
+ */
+void qbman_pull_desc_clear(struct qbman_pull_desc *d);
+
+/**
+ * qbman_pull_desc_set_storage()- Set the pull dequeue storage
+ * @d: the pull dequeue descriptor to be set.
+ * @storage: the pointer of the memory to store the dequeue result.
+ * @storage_phys: the physical address of the storage memory.
+ * @stash: to indicate whether write allocate is enabled.
+ *
+ * If not called, or if called with 'storage' as NULL, then pull dequeues
+ * will produce results to DQRR. If 'storage' is non-NULL, then results are
+ * produced to the given memory location (using the physical/DMA address which
+ * the caller provides in 'storage_phys'), and 'stash' controls whether or not
+ * those writes to main-memory express a cache-warming attribute.
+ */
+void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
+				 struct qbman_result *storage,
+				 dma_addr_t storage_phys,
+				 int stash);
+/**
+ * qbman_pull_desc_set_numframes() - Set the number of frames to be dequeued.
+ * @d: the pull dequeue descriptor to be set.
+ * @numframes: number of frames to be set, must be between 1 and 16, inclusive.
+ */
+void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
+				   uint8_t numframes);
+/**
+ * qbman_pull_desc_set_token() - Set dequeue token for pull command
+ * @d: the dequeue descriptor
+ * @token: the token to be set
+ *
+ * token is the value that shows up in the dequeue response that can be used to
+ * detect when the results have been published. The easiest technique is to zero
+ * result "storage" before issuing a dequeue, and use any non-zero 'token' value
+ */
+void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token);
+
+/* Exactly one of the following descriptor "actions" should be set. (Calling any
+ * one of these will replace the effect of any prior call to one of these.)
+ * - pull dequeue from the given frame queue (FQ)
+ * - pull dequeue from any FQ in the given work queue (WQ)
+ * - pull dequeue from any FQ in any WQ in the given channel
+ */
+/**
+ * qbman_pull_desc_set_fq() - Set fqid from which the dequeue command dequeues.
+ * @fqid: the frame queue index of the given FQ.
+ */
+void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid);
+
+/**
+ * qbman_pull_desc_set_wq() - Set wqid from which the dequeue command dequeues.
+ * @wqid: composed of channel id and wqid within the channel.
+ * @dct: the dequeue command type.
+ */
+void qbman_pull_desc_set_wq(struct qbman_pull_desc *d, uint32_t wqid,
+			    enum qbman_pull_type_e dct);
+
+/**
+ * qbman_pull_desc_set_channel() - Set the channel id to dequeue from.
+ * @chid: the channel id to be dequeued.
+ * @dct: the dequeue command type.
+ */
+void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
+				 enum qbman_pull_type_e dct);
+
+/**
+ * qbman_swp_pull() - Issue the pull dequeue command
+ * @s: the software portal object.
+ * @d: the software portal descriptor which has been configured with
+ * the set of qbman_pull_desc_set_*() calls.
+ *
+ * Return 0 for success, and -EBUSY if the software portal is not ready
+ * to do pull dequeue.
+ */
+int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d);
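+
+/* Illustrative pull-dequeue sequence (a sketch; "storage", "storage_phys"
+ * and "fqid" are assumed to come from the caller's DMA-able allocation
+ * and DPAA2 object configuration):
+ *
+ *	struct qbman_pull_desc pulldesc;
+ *
+ *	qbman_pull_desc_clear(&pulldesc);
+ *	qbman_pull_desc_set_numframes(&pulldesc, 16);
+ *	qbman_pull_desc_set_storage(&pulldesc, storage, storage_phys, 1);
+ *	qbman_pull_desc_set_fq(&pulldesc, fqid);
+ *	if (qbman_swp_pull(s, &pulldesc))
+ *		return -EBUSY;
+ */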
+
+/* -------------------------------- */
+/* Polling DQRR for dequeue results */
+/* -------------------------------- */
+
+/**
+ * qbman_swp_dqrr_next() - Get a valid DQRR entry.
+ * @s: the software portal object.
+ *
+ * Return NULL if there are no unconsumed DQRR entries. Return a DQRR entry
+ * only once, so repeated calls can return a sequence of DQRR entries, without
+ * requiring they be consumed immediately or in any particular order.
+ */
+const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *p);
+
+/**
+ * qbman_swp_dqrr_consume() -  Consume DQRR entries previously returned from
+ * qbman_swp_dqrr_next().
+ * @s: the software portal object.
+ * @dq: the DQRR entry to be consumed.
+ */
+void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq);
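+
+/* Illustrative DQRR polling loop (a sketch, not driver code; process()
+ * is a hypothetical consumer of the frame descriptor):
+ *
+ *	const struct qbman_result *dq;
+ *
+ *	while ((dq = qbman_swp_dqrr_next(s)) != NULL) {
+ *		if (qbman_result_is_DQ(dq))
+ *			process(qbman_result_DQ_fd(dq));
+ *		qbman_swp_dqrr_consume(s, dq);
+ *	}
+ */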
+
+/**
+ * qbman_get_dqrr_idx() - Get dqrr index from the given dqrr
+ * @dqrr: the given dqrr object.
+ *
+ * Return dqrr index.
+ */
+uint8_t qbman_get_dqrr_idx(struct qbman_result *dqrr);
+
+/**
+ * qbman_get_dqrr_from_idx() - Use index to get the dqrr entry from the
+ * given portal
+ * @s: the given portal.
+ * @idx: the dqrr index.
+ *
+ * Return dqrr entry object.
+ */
+struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
+
+/* ------------------------------------------------- */
+/* Polling user-provided storage for dequeue results */
+/* ------------------------------------------------- */
+
+/**
+ * qbman_result_has_new_result() - Check and get the dequeue response from the
+ * dq storage memory set in pull dequeue command
+ * @s: the software portal object.
+ * @dq: the dequeue result read from the memory.
+ *
+ * Only used for user-provided storage of dequeue results, not DQRR. For
+ * efficiency purposes, the driver will perform any required endianness
+ * conversion to ensure that the user's dequeue result storage is in host-endian
+ * format (whether or not that is the same as the little-endian format that
+ * hardware DMA'd to the user's storage). As such, once the user has called
+ * qbman_result_has_new_result() and been returned a valid dequeue result,
+ * they should not call it again on the same memory location (except of course
+ * if another dequeue command has been executed to produce a new result to that
+ * location).
+ *
+ * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
+ * dequeue result.
+ */
+int qbman_result_has_new_result(struct qbman_swp *s,
+				const struct qbman_result *dq);
+
+/* -------------------------------------------------------- */
+/* Parsing dequeue entries (DQRR and user-provided storage) */
+/* -------------------------------------------------------- */
+
+/**
+ * qbman_result_is_DQ() - Check whether the result is a dequeue response.
+ * @dq: the dequeue result to be checked.
+ *
+ * DQRR entries may contain non-dequeue results, ie. notifications
+ */
+int qbman_result_is_DQ(const struct qbman_result *dq);
+
+/**
+ * qbman_result_is_SCN() - Check whether the dequeue result is a notification.
+ * @dq: the dequeue result to be checked.
+ *
+ * All the non-dequeue results (FQDAN/CDAN/CSCN/...) are "state change
+ * notifications" of one type or another. Some APIs apply to all of them, of the
+ * form qbman_result_SCN_***().
+ */
+static inline int qbman_result_is_SCN(const struct qbman_result *dq)
+{
+	return !qbman_result_is_DQ(dq);
+}
+
+/* Recognise different notification types, only required if the user allows for
+ * these to occur, and cares about them when they do.
+ */
+
+/**
+ * qbman_result_is_FQDAN() - Check for FQ Data Availability
+ * @dq: the qbman_result object.
+ *
+ * Return 1 if this is FQDAN.
+ */
+int qbman_result_is_FQDAN(const struct qbman_result *dq);
+
+/**
+ * qbman_result_is_CDAN() - Check for Channel Data Availability
+ * @dq: the qbman_result object to check.
+ *
+ * Return 1 if this is CDAN.
+ */
+int qbman_result_is_CDAN(const struct qbman_result *dq);
+
+/**
+ * qbman_result_is_CSCN() - Check for Congestion State Change
+ * @dq: the qbman_result object to check.
+ *
+ * Return 1 if this is CSCN.
+ */
+int qbman_result_is_CSCN(const struct qbman_result *dq);
+
+/**
+ * qbman_result_is_BPSCN() - Check for Buffer Pool State Change.
+ * @dq: the qbman_result object to check.
+ *
+ * Return 1 if this is BPSCN.
+ */
+int qbman_result_is_BPSCN(const struct qbman_result *dq);
+
+/**
+ * qbman_result_is_CGCU() - Check for Congestion Group Count Update.
+ * @dq: the qbman_result object to check.
+ *
+ * Return 1 if this is CGCU.
+ */
+int qbman_result_is_CGCU(const struct qbman_result *dq);
+
+/* Frame queue state change notifications; (FQDAN in theory counts too as it
+ * leaves a FQ parked, but it is primarily a data availability notification)
+ */
+
+/**
+ * qbman_result_is_FQRN() - Check for FQ Retirement Notification.
+ * @dq: the qbman_result object to check.
+ *
+ * Return 1 if this is FQRN.
+ */
+int qbman_result_is_FQRN(const struct qbman_result *dq);
+
+/**
+ * qbman_result_is_FQRNI() - Check for FQ Retirement Immediate
+ * @dq: the qbman_result object to check.
+ *
+ * Return 1 if this is FQRNI.
+ */
+int qbman_result_is_FQRNI(const struct qbman_result *dq);
+
+/**
+ * qbman_result_is_FQPN() - Check for FQ Park Notification
+ * @dq: the qbman_result object to check.
+ *
+ * Return 1 if this is FQPN.
+ */
+int qbman_result_is_FQPN(const struct qbman_result *dq);
+
+/* Parsing frame dequeue results (qbman_result_is_DQ() must be TRUE)
+ */
+/* FQ empty */
+#define QBMAN_DQ_STAT_FQEMPTY       0x80
+/* FQ held active */
+#define QBMAN_DQ_STAT_HELDACTIVE    0x40
+/* FQ force eligible */
+#define QBMAN_DQ_STAT_FORCEELIGIBLE 0x20
+/* Valid frame */
+#define QBMAN_DQ_STAT_VALIDFRAME    0x10
+/* FQ ODP enable */
+#define QBMAN_DQ_STAT_ODPVALID      0x04
+/* Volatile dequeue */
+#define QBMAN_DQ_STAT_VOLATILE      0x02
+/* volatile dequeue command is expired */
+#define QBMAN_DQ_STAT_EXPIRED       0x01
+
+/**
+ * qbman_result_DQ_flags() - Get the STAT field of dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return the state field.
+ */
+uint32_t qbman_result_DQ_flags(const struct qbman_result *dq);
+
+/**
+ * qbman_result_DQ_is_pull() - Check whether the dq response is from a pull
+ * command.
+ * @dq: the dequeue result.
+ *
+ * Return 1 for volatile(pull) dequeue, 0 for static dequeue.
+ */
+static inline int qbman_result_DQ_is_pull(const struct qbman_result *dq)
+{
+	return (int)(qbman_result_DQ_flags(dq) & QBMAN_DQ_STAT_VOLATILE);
+}
+
+/**
+ * qbman_result_DQ_is_pull_complete() - Check whether the pull command is
+ * completed.
+ * @dq: the dequeue result.
+ *
+ * Return boolean.
+ */
+static inline int qbman_result_DQ_is_pull_complete(
+					const struct qbman_result *dq)
+{
+	return (int)(qbman_result_DQ_flags(dq) & QBMAN_DQ_STAT_EXPIRED);
+}
+
+/**
+ * qbman_result_DQ_seqnum() - Get the seqnum field in dequeue response
+ * seqnum is valid only if VALIDFRAME flag is TRUE
+ * @dq: the dequeue result.
+ *
+ * Return seqnum.
+ */
+uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
+
+/**
+ * qbman_result_DQ_odpid() - Get the odpid field in dequeue response
+ * odpid is valid only if ODPVALID flag is TRUE.
+ * @dq: the dequeue result.
+ *
+ * Return odpid.
+ */
+uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq);
+
+/**
+ * qbman_result_DQ_fqid() - Get the fqid in dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return fqid.
+ */
+uint32_t qbman_result_DQ_fqid(const struct qbman_result *dq);
+
+/**
+ * qbman_result_DQ_byte_count() - Get the byte count in dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return the byte count remaining in the FQ.
+ */
+uint32_t qbman_result_DQ_byte_count(const struct qbman_result *dq);
+
+/**
+ * qbman_result_DQ_frame_count() - Get the frame count in dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return the frame count remaining in the FQ.
+ */
+uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq);
+
+/**
+ * qbman_result_DQ_fqd_ctx() - Get the frame queue context in dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return the frame queue context.
+ */
+uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq);
+
+/**
+ * qbman_result_DQ_fd() - Get the frame descriptor in dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return the frame descriptor.
+ */
+const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq);
+
+/* State-change notifications (FQDAN/CDAN/CSCN/...). */
+
+/**
+ * qbman_result_SCN_state() - Get the state field in State-change notification
+ * @scn: the state change notification.
+ *
+ * Return the state in the notification.
+ */
+uint8_t qbman_result_SCN_state(const struct qbman_result *scn);
+
+/**
+ * qbman_result_SCN_rid() - Get the resource id from the notification
+ * @scn: the state change notification.
+ *
+ * Return the resource id.
+ */
+uint32_t qbman_result_SCN_rid(const struct qbman_result *scn);
+
+/**
+ * qbman_result_SCN_ctx() - get the context from the notification
+ * @scn: the state change notification.
+ *
+ * Return the context.
+ */
+uint64_t qbman_result_SCN_ctx(const struct qbman_result *scn);
+
+/**
+ * qbman_result_SCN_state_in_mem() - Get the state in notification written
+ * in memory
+ * @scn: the state change notification.
+ *
+ * Return the state.
+ */
+uint8_t qbman_result_SCN_state_in_mem(const struct qbman_result *scn);
+
+/**
+ * qbman_result_SCN_rid_in_mem() - Get the resource id in notification written
+ * in memory.
+ * @scn: the state change notification.
+ *
+ * Return the resource id.
+ */
+uint32_t qbman_result_SCN_rid_in_mem(const struct qbman_result *scn);
+
+/* Type-specific "resource IDs". Mainly for illustration purposes, though it
+ * also gives the appropriate type widths.
+ */
+/* Get the FQID from the FQDAN */
+#define qbman_result_FQDAN_fqid(dq) qbman_result_SCN_rid(dq)
+/* Get the FQID from the FQRN */
+#define qbman_result_FQRN_fqid(dq) qbman_result_SCN_rid(dq)
+/* Get the FQID from the FQRNI */
+#define qbman_result_FQRNI_fqid(dq) qbman_result_SCN_rid(dq)
+/* Get the FQID from the FQPN */
+#define qbman_result_FQPN_fqid(dq) qbman_result_SCN_rid(dq)
+/* Get the channel ID from the CDAN */
+#define qbman_result_CDAN_cid(dq) ((uint16_t)qbman_result_SCN_rid(dq))
+/* Get the CGID from the CSCN */
+#define qbman_result_CSCN_cgid(dq) ((uint16_t)qbman_result_SCN_rid(dq))
+
+/**
+ * qbman_result_bpscn_bpid() - Get the bpid from BPSCN
+ * @scn: the state change notification.
+ *
+ * Return the buffer pool id.
+ */
+uint16_t qbman_result_bpscn_bpid(const struct qbman_result *scn);
+
+/**
+ * qbman_result_bpscn_has_free_bufs() - Check whether there are free
+ * buffers in the pool from BPSCN.
+ * @scn: the state change notification.
+ *
+ * Return non-zero if there are free buffers in the pool, 0 otherwise.
+ */
+int qbman_result_bpscn_has_free_bufs(const struct qbman_result *scn);
+
+/**
+ * qbman_result_bpscn_is_depleted() - Check BPSCN to see whether the
+ * buffer pool is depleted.
+ * @scn: the state change notification.
+ *
+ * Return the status of buffer pool depletion.
+ */
+int qbman_result_bpscn_is_depleted(const struct qbman_result *scn);
+
+/**
+ * qbman_result_bpscn_is_surplus() - Check BPSCN to see whether the buffer
+ * pool is surplus or not.
+ * @scn: the state change notification.
+ *
+ * Return the status of buffer pool surplus.
+ */
+int qbman_result_bpscn_is_surplus(const struct qbman_result *scn);
+
+/**
+ * qbman_result_bpscn_ctx() - Get the BPSCN CTX from BPSCN message
+ * @scn: the state change notification.
+ *
+ * Return the BPSCN context.
+ */
+uint64_t qbman_result_bpscn_ctx(const struct qbman_result *scn);
+
+/* Parsing CGCU */
+/**
+ * qbman_result_cgcu_cgid() - Check CGCU resource id, i.e. cgid
+ * @scn: the state change notification.
+ *
+ * Return the CGCU resource id.
+ */
+uint16_t qbman_result_cgcu_cgid(const struct qbman_result *scn);
+
+/**
+ * qbman_result_cgcu_icnt() - Get the I_CNT from CGCU
+ * @scn: the state change notification.
+ *
+ * Return instantaneous count in the CGCU notification.
+ */
+uint64_t qbman_result_cgcu_icnt(const struct qbman_result *scn);
+
+	/************/
+	/* Enqueues */
+	/************/
+
+/**
+ * struct qbman_eq_desc - structure of enqueue descriptor
+ * @dont_manipulate_directly: the 8 32-bit words holding the whole
+ * possible qbman enqueue setting in enqueue descriptor.
+ */
+struct qbman_eq_desc {
+	uint32_t dont_manipulate_directly[8];
+};
+
+/**
+ * struct qbman_eq_response - structure of enqueue response
+ * @dont_manipulate_directly: the 16 32-bit words holding the whole
+ * enqueue response.
+ */
+struct qbman_eq_response {
+	uint32_t dont_manipulate_directly[16];
+};
+
+/**
+ * qbman_eq_desc_clear() - Clear the contents of a descriptor to
+ * default/starting state.
+ * @d: the given enqueue descriptor.
+ */
+void qbman_eq_desc_clear(struct qbman_eq_desc *d);
+
+/* Exactly one of the following descriptor "actions" should be set. (Calling
+ * any one of these will replace the effect of any prior call to one of these.)
+ * - enqueue without order-restoration
+ * - enqueue with order-restoration
+ * - fill a hole in the order-restoration sequence, without any enqueue
+ * - advance NESN (Next Expected Sequence Number), without any enqueue
+ * 'respond_success' indicates whether an enqueue response should be DMA'd
+ * after success (otherwise a response is DMA'd only after failure).
+ * 'incomplete' indicates that other fragments of the same 'seqnum' are yet to
+ * be enqueued.
+ */
+
+/**
+ * qbman_eq_desc_set_no_orp() - Set enqueue descriptor without orp
+ * @d: the enqueue descriptor.
+ * @respond_success: 1 = enqueue with response always; 0 = enqueue with
+ * rejections returned on a FQ.
+ */
+void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
+/**
+ * qbman_eq_desc_set_orp() - Set order-restoration in the enqueue descriptor
+ * @d: the enqueue descriptor.
+ * @respond_success: 1 = enqueue with response always; 0 = enqueue with
+ * rejections returned on a FQ.
+ * @opr_id: the order point record id.
+ * @seqnum: the order restoration sequence number.
+ * @incomplete: indicates whether more fragments using the same sequence
+ * number are to follow.
+ */
+void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
+			   uint32_t opr_id, uint32_t seqnum, int incomplete);
+
+/**
+ * qbman_eq_desc_set_orp_hole() - fill a hole in the order-restoration sequence
+ * without any enqueue
+ * @d: the enqueue descriptor.
+ * @opr_id: the order point record id.
+ * @seqnum: the order restoration sequence number.
+ */
+void qbman_eq_desc_set_orp_hole(struct qbman_eq_desc *d, uint32_t opr_id,
+				uint32_t seqnum);
+
+/**
+ * qbman_eq_desc_set_orp_nesn() -  advance NESN (Next Expected Sequence Number)
+ * without any enqueue
+ * @d: the enqueue descriptor.
+ * @opr_id: the order point record id.
+ * @seqnum: the order restoration sequence number.
+ */
+void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint32_t opr_id,
+				uint32_t seqnum);
+/**
+ * qbman_eq_desc_set_response() - Set the enqueue response info.
+ * @d: the enqueue descriptor
+ * @storage_phys: the physical address of the enqueue response in memory.
+ * @stash: indicate whether write allocate is enabled.
+ *
+ * In the case where an enqueue response is DMA'd, this determines where that
+ * response should go. (The physical/DMA address is given for hardware's
+ * benefit, but software should interpret it as a "struct qbman_eq_response"
+ * data structure.) 'stash' controls whether or not the write to main-memory
+ * expresses a cache-warming attribute.
+ */
+void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
+				dma_addr_t storage_phys,
+				int stash);
+
+/**
+ * qbman_eq_desc_set_token() - Set token for the enqueue command
+ * @d: the enqueue descriptor
+ * @token: the token to be set.
+ *
+ * token is the value that shows up in an enqueue response that can be used to
+ * detect when the results have been published. The easiest technique is to zero
+ * result "storage" before issuing an enqueue, and use any non-zero 'token'
+ * value.
+ */
+void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
+
+/**
+ * Exactly one of the following descriptor "targets" should be set. (Calling any
+ * one of these will replace the effect of any prior call to one of these.)
+ * - enqueue to a frame queue
+ * - enqueue to a queuing destination
+ * Note that none of these will have any effect if the "action" type has been
+ * set to "orp_hole" or "orp_nesn".
+ */
+/**
+ * qbman_eq_desc_set_fq() - Set Frame Queue id for the enqueue command
+ * @d: the enqueue descriptor
+ * @fqid: the id of the frame queue to be enqueued.
+ */
+void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
+
+/**
+ * qbman_eq_desc_set_qd() - Set Queuing Destination for the enqueue command.
+ * @d: the enqueue descriptor
+ * @qdid: the id of the queuing destination to be enqueued.
+ * @qd_bin: the queuing destination bin
+ * @qd_prio: the queuing destination priority.
+ */
+void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
+			  uint32_t qd_bin, uint32_t qd_prio);
+
+/**
+ * qbman_eq_desc_set_eqdi() - enable/disable EQDI interrupt
+ * @d: the enqueue descriptor
+ * @enable: boolean to enable/disable EQDI
+ *
+ * Determines whether or not the portal's EQDI interrupt source should be
+ * asserted after the enqueue command is completed.
+ */
+void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable);
+
+/**
+ * qbman_eq_desc_set_dca() - Set DCA mode in the enqueue command.
+ * @d: the enqueue descriptor.
+ * @enable: enabled/disable DCA mode.
+ * @dqrr_idx: DCAP_CI, the DCAP consumer index.
+ * @park: determine whether to park the FQ or not.
+ *
+ * Determines whether or not a portal DQRR entry should be consumed once the
+ * enqueue command is completed. (And if so, and the DQRR entry corresponds to a
+ * held-active (order-preserving) FQ, whether the FQ should be parked instead of
+ * being rescheduled.)
+ */
+void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
+			   uint32_t dqrr_idx, int park);
+
+/**
+ * qbman_swp_enqueue() - Issue an enqueue command.
+ * @s: the software portal used for enqueue.
+ * @d: the enqueue descriptor.
+ * @fd: the frame descriptor to be enqueued.
+ *
+ * Please note that 'fd' should only be NULL if the "action" of the
+ * descriptor is "orp_hole" or "orp_nesn".
+ *
+ * Return 0 for a successful enqueue, -EBUSY if the EQCR is not ready.
+ */
+int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
+		      const struct qbman_fd *fd);
+
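+/* Illustrative enqueue sequence (a sketch; "fd" is assumed to be a
+ * fully-populated frame descriptor and "fqid" a valid frame queue id):
+ *
+ *	struct qbman_eq_desc eqdesc;
+ *
+ *	qbman_eq_desc_clear(&eqdesc);
+ *	qbman_eq_desc_set_no_orp(&eqdesc, 0);
+ *	qbman_eq_desc_set_fq(&eqdesc, fqid);
+ *	while (qbman_swp_enqueue(s, &eqdesc, fd) == -EBUSY)
+ *		;
+ */
+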
+/* TODO:
+ * qbman_swp_enqueue_thresh() - Set threshold for EQRI interrupt.
+ * @s: the software portal.
+ * @thresh: the threshold to trigger the EQRI interrupt.
+ *
+ * An EQRI interrupt can be generated when the fill-level of EQCR falls below
+ * the 'thresh' value set here. Setting thresh==0 (the default) disables.
+ */
+int qbman_swp_enqueue_thresh(struct qbman_swp *s, unsigned int thresh);
+
+	/*******************/
+	/* Buffer releases */
+	/*******************/
+/**
+ * struct qbman_release_desc - The structure for buffer release descriptor
+ * @dont_manipulate_directly: the 32-bit word holding the whole
+ * possible settings of the qbman release descriptor.
+ */
+struct qbman_release_desc {
+	uint32_t dont_manipulate_directly[1];
+};
+
+/**
+ * qbman_release_desc_clear() - Clear the contents of a descriptor to
+ * default/starting state.
+ * @d: the qbman release descriptor.
+ */
+void qbman_release_desc_clear(struct qbman_release_desc *d);
+
+/**
+ * qbman_release_desc_set_bpid() - Set the ID of the buffer pool to release to
+ * @d: the qbman release descriptor.
+ */
+void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint32_t bpid);
+
+/**
+ * qbman_release_desc_set_rcdi() - Determines whether or not the portal's RCDI
+ * interrupt source should be asserted after the release command is completed.
+ * @d: the qbman release descriptor.
+ */
+void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable);
+
+/**
+ * qbman_swp_release() - Issue a buffer release command.
+ * @s: the software portal object.
+ * @d: the release descriptor.
+ * @buffers: a pointer pointing to the buffer address to be released.
+ * @num_buffers: number of buffers to be released, must be less than 8.
+ *
+ * Return 0 for success, -EBUSY if the release command ring is not ready.
+ */
+int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
+		      const uint64_t *buffers, unsigned int num_buffers);
+
+/* TODO:
+ * qbman_swp_release_thresh() - Set threshold for RCRI interrupt
+ * @s: the software portal.
+ * @thresh: the threshold.
+ * An RCRI interrupt can be generated when the fill-level of RCR falls below
+ * the 'thresh' value set here. Setting thresh==0 (the default) disables.
+ */
+int qbman_swp_release_thresh(struct qbman_swp *s, unsigned int thresh);
+
+	/*******************/
+	/* Buffer acquires */
+	/*******************/
+/**
+ * qbman_swp_acquire() - Issue a buffer acquire command.
+ * @s: the software portal object.
+ * @bpid: the buffer pool index.
+ * @buffers: a pointer to where the acquired buffer address(es) are written.
+ * @num_buffers: number of buffers to be acquired, must be less than 8.
+ *
+ * Return 0 for success, or negative error code if the acquire command
+ * fails.
+ */
+int qbman_swp_acquire(struct qbman_swp *s, uint32_t bpid, uint64_t *buffers,
+		      unsigned int num_buffers);
+
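+/* Illustrative release/acquire pairing (a sketch; "bpid" is assumed to
+ * come from the DPBP object and "buf" to hold a physical/IOVA address):
+ *
+ *	struct qbman_release_desc releasedesc;
+ *
+ *	qbman_release_desc_clear(&releasedesc);
+ *	qbman_release_desc_set_bpid(&releasedesc, bpid);
+ *	if (qbman_swp_release(s, &releasedesc, &buf, 1))
+ *		return -EBUSY;
+ *	if (qbman_swp_acquire(s, bpid, &buf, 1) < 0)
+ *		return -1;
+ */
+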
+	/*****************/
+	/* FQ management */
+	/*****************/
+/**
+ * qbman_swp_fq_schedule() - Move the fq to the scheduled state.
+ * @s: the software portal object.
+ * @fqid: the index of frame queue to be scheduled.
+ *
+ * There are a couple of different ways that a FQ can end up in the parked
+ * state; this schedules it.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_fq_schedule(struct qbman_swp *s, uint32_t fqid);
+
+/**
+ * qbman_swp_fq_force() - Force the FQ to fully scheduled state.
+ * @s: the software portal object.
+ * @fqid: the index of frame queue to be forced.
+ *
+ * Force eligible will force a tentatively-scheduled FQ to be fully-scheduled
+ * and thus be available for selection by any channel-dequeuing behaviour (push
+ * or pull). If the FQ is subsequently "dequeued" from the channel and is still
+ * empty at the time this happens, the resulting dq_entry will have no FD.
+ * (qbman_result_DQ_fd() will return NULL.)
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid);
+
+/**
+ * These functions change the FQ flow-control stuff between XON/XOFF. (The
+ * default is XON.) This setting doesn't affect enqueues to the FQ, just
+ * dequeues. XOFF FQs will remain in the tentatively-scheduled state, even when
+ * non-empty, meaning they won't be selected for scheduled dequeuing. If a FQ is
+ * changed to XOFF after it had already become truly-scheduled to a channel, and
+ * a pull dequeue of that channel occurs that selects that FQ for dequeuing,
+ * then the resulting dq_entry will have no FD. (qbman_result_DQ_fd() will
+ * return NULL.)
+ */
+/**
+ * qbman_swp_fq_xon() - XON the frame queue.
+ * @s: the software portal object.
+ * @fqid: the index of frame queue.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_fq_xon(struct qbman_swp *s, uint32_t fqid);
+/**
+ * qbman_swp_fq_xoff() - XOFF the frame queue.
+ * @s: the software portal object.
+ * @fqid: the index of frame queue.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_fq_xoff(struct qbman_swp *s, uint32_t fqid);
+
+	/**********************/
+	/* Channel management */
+	/**********************/
+
+/**
+ * If the user has been allocated a channel object that is going to generate
+ * CDANs to another channel, then these functions will be necessary.
+ * CDAN-enabled channels only generate a single CDAN notification, after which
+ * they need to be re-enabled before they'll generate another. (The idea is
+ * that pull dequeuing will occur in reaction to the CDAN, followed by a
+ * reenable step.) Each function generates a distinct command to hardware, so a
+ * combination function is provided if the user wishes to modify the "context"
+ * (which shows up in each CDAN message) each time they reenable, as a single
+ * command to hardware.
+ */
+
+/**
+ * qbman_swp_CDAN_set_context() - Set CDAN context
+ * @s: the software portal object.
+ * @channelid: the channel index.
+ * @ctx: the context to be set in CDAN.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_CDAN_set_context(struct qbman_swp *s, uint16_t channelid,
+			       uint64_t ctx);
+
+/**
+ * qbman_swp_CDAN_enable() - Enable CDAN for the channel.
+ * @s: the software portal object.
+ * @channelid: the index of the channel to generate CDAN.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_CDAN_enable(struct qbman_swp *s, uint16_t channelid);
+
+/**
+ * qbman_swp_CDAN_disable() - disable CDAN for the channel.
+ * @s: the software portal object.
+ * @channelid: the index of the channel to generate CDAN.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_CDAN_disable(struct qbman_swp *s, uint16_t channelid);
+
+/**
+ * qbman_swp_CDAN_set_context_enable() - Set CDAN context and enable CDAN
+ * @s: the software portal object.
+ * @channelid: the index of the channel to generate CDAN.
+ * @ctx: the context set in CDAN.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
+				      uint64_t ctx);
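+
+/* Illustrative CDAN handling pattern (sketch only; 'portal' and 'chid'
+ * are assumed to come from the caller's setup). Because a channel raises
+ * a single CDAN until re-armed, the usual flow is:
+ *
+ *	on CDAN notification for chid:
+ *		pull-dequeue from the channel and process the frames, then
+ *		qbman_swp_CDAN_enable(portal, chid);	re-arm
+ */
+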
+int qbman_swp_fill_ring(struct qbman_swp *s,
+			const struct qbman_eq_desc *d,
+			const struct qbman_fd *fd,
+			uint8_t burst_index);
+int qbman_swp_flush_ring(struct qbman_swp *s);
+void qbman_sync(void);
+int qbman_swp_send_multiple(struct qbman_swp *s,
+			    const struct qbman_eq_desc *d,
+			    const struct qbman_fd *fd,
+			    int frames_to_send);
+
+int qbman_check_command_complete(struct qbman_swp *s,
+				 const struct qbman_result *dq);
+
+int qbman_get_version(void);
+#endif /* !_FSL_QBMAN_PORTAL_H */
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
new file mode 100644
index 0000000..5d407cc
--- /dev/null
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -0,0 +1,1496 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qbman_portal.h"
+
+/* QBMan portal management command codes */
+#define QBMAN_MC_ACQUIRE       0x30
+#define QBMAN_WQCHAN_CONFIGURE 0x46
+
+/* CINH register offsets */
+#define QBMAN_CINH_SWP_EQCR_PI 0x800
+#define QBMAN_CINH_SWP_EQCR_CI 0x840
+#define QBMAN_CINH_SWP_EQAR    0x8c0
+#define QBMAN_CINH_SWP_DQPI    0xa00
+#define QBMAN_CINH_SWP_DCAP    0xac0
+#define QBMAN_CINH_SWP_SDQCR   0xb00
+#define QBMAN_CINH_SWP_RAR     0xcc0
+#define QBMAN_CINH_SWP_ISR     0xe00
+#define QBMAN_CINH_SWP_IER     0xe40
+#define QBMAN_CINH_SWP_ISDR    0xe80
+#define QBMAN_CINH_SWP_IIR     0xec0
+
+/* CENA register offsets */
+#define QBMAN_CENA_SWP_EQCR(n) (0x000 + ((uint32_t)(n) << 6))
+#define QBMAN_CENA_SWP_DQRR(n) (0x200 + ((uint32_t)(n) << 6))
+#define QBMAN_CENA_SWP_RCR(n)  (0x400 + ((uint32_t)(n) << 6))
+#define QBMAN_CENA_SWP_CR      0x600
+#define QBMAN_CENA_SWP_RR(vb)  (0x700 + ((uint32_t)(vb) >> 1))
+#define QBMAN_CENA_SWP_VDQCR   0x780
+#define QBMAN_CENA_SWP_EQCR_CI 0x840
+
+/* Reverse mapping of QBMAN_CENA_SWP_DQRR() */
+#define QBMAN_IDX_FROM_DQRR(p) (((unsigned long)p & 0x1ff) >> 6)
+
+/* QBMan FQ management command codes */
+#define QBMAN_FQ_SCHEDULE	0x48
+#define QBMAN_FQ_FORCE		0x49
+#define QBMAN_FQ_XON		0x4d
+#define QBMAN_FQ_XOFF		0x4e
+
+/*******************************/
+/* Pre-defined attribute codes */
+/*******************************/
+
+struct qb_attr_code code_generic_verb = QB_CODE(0, 0, 7);
+struct qb_attr_code code_generic_rslt = QB_CODE(0, 8, 8);
+
+/*************************/
+/* SDQCR attribute codes */
+/*************************/
+
+/* we put these here because at least some of them are required by
+ * qbman_swp_init()
+ */
+struct qb_attr_code code_sdqcr_dct = QB_CODE(0, 24, 2);
+struct qb_attr_code code_sdqcr_fc = QB_CODE(0, 29, 1);
+struct qb_attr_code code_sdqcr_tok = QB_CODE(0, 16, 8);
+static struct qb_attr_code code_eq_dca_idx;
+#define CODE_SDQCR_DQSRC(n) QB_CODE(0, n, 1)
+enum qbman_sdqcr_dct {
+	qbman_sdqcr_dct_null = 0,
+	qbman_sdqcr_dct_prio_ics,
+	qbman_sdqcr_dct_active_ics,
+	qbman_sdqcr_dct_active
+};
+
+enum qbman_sdqcr_fc {
+	qbman_sdqcr_fc_one = 0,
+	qbman_sdqcr_fc_up_to_3 = 1
+};
+
+struct qb_attr_code code_sdqcr_dqsrc = QB_CODE(0, 0, 16);
+
+/* We need to keep track of which SWP triggered a pull command
+ * so keep an array of portal IDs and use the token field to
+ * be able to find the proper portal
+ */
+#define MAX_QBMAN_PORTALS  35
+static struct qbman_swp *portal_idx_map[MAX_QBMAN_PORTALS];
+
+uint32_t qman_version;
+
+/*********************************/
+/* Portal constructor/destructor */
+/*********************************/
+
+/* Software portals should always be in the power-on state when we initialise,
+ * due to the CCSR-based portal reset functionality that MC has.
+ *
+ * Erk! Turns out that QMan versions prior to 4.1 do not correctly reset DQRR
+ * valid-bits, so we need to support a workaround where we don't trust
+ * valid-bits when detecting new entries until any stale ring entries have been
+ * overwritten at least once. The idea is that we read PI for the first few
+ * entries, then switch to valid-bit after that. The trick is to clear the
+ * bug-work-around boolean once the PI wraps around the ring for the first time.
+ *
+ * Note: this still carries a slight additional cost once the decrementer hits
+ * zero.
+ */
+struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
+{
+	int ret;
+	uint32_t eqcr_pi;
+	struct qbman_swp *p = kmalloc(sizeof(*p), GFP_KERNEL);
+
+	if (!p)
+		return NULL;
+	p->desc = *d;
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_start;
+#endif
+	p->mc.valid_bit = QB_VALID_BIT;
+	p->sdq = 0;
+	qb_attr_code_encode(&code_sdqcr_dct, &p->sdq, qbman_sdqcr_dct_prio_ics);
+	qb_attr_code_encode(&code_sdqcr_fc, &p->sdq, qbman_sdqcr_fc_up_to_3);
+	qb_attr_code_encode(&code_sdqcr_tok, &p->sdq, 0xbb);
+	atomic_set(&p->vdq.busy, 1);
+	p->vdq.valid_bit = QB_VALID_BIT;
+	p->dqrr.next_idx = 0;
+	p->dqrr.valid_bit = QB_VALID_BIT;
+	qman_version = p->desc.qman_version;
+	if ((qman_version & 0xFFFF0000) < QMAN_REV_4100) {
+		p->dqrr.dqrr_size = 4;
+		p->dqrr.reset_bug = 1;
+		/* Set size of DQRR to 4, encoded in 2 bits */
+		code_eq_dca_idx = (struct qb_attr_code)QB_CODE(0, 8, 2);
+	} else {
+		p->dqrr.dqrr_size = 8;
+		p->dqrr.reset_bug = 0;
+		/* Set size of DQRR to 8, encoded in 3 bits */
+		code_eq_dca_idx = (struct qb_attr_code)QB_CODE(0, 8, 3);
+	}
+
+	ret = qbman_swp_sys_init(&p->sys, d, p->dqrr.dqrr_size);
+	if (ret) {
+		kfree(p);
+		pr_err("qbman_swp_sys_init() failed %d\n", ret);
+		return NULL;
+	}
+	/* SDQCR needs to be initialized to 0 when no channels are
+	 * being dequeued from or else the QMan HW will indicate an
+	 * error.  The values that were calculated above will be
+	 * applied when dequeues from a specific channel are enabled
+	 */
+	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_SDQCR, 0);
+	eqcr_pi = qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_EQCR_PI);
+	p->eqcr.pi = eqcr_pi & 0xF;
+	p->eqcr.pi_vb = eqcr_pi & QB_VALID_BIT;
+	p->eqcr.ci = qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_EQCR_CI) & 0xF;
+	p->eqcr.available = QBMAN_EQCR_SIZE - qm_cyc_diff(QBMAN_EQCR_SIZE,
+						p->eqcr.ci, p->eqcr.pi);
+
+	portal_idx_map[p->desc.idx] = p;
+	return p;
+}
+
+void qbman_swp_finish(struct qbman_swp *p)
+{
+#ifdef QBMAN_CHECKING
+	QBMAN_BUG_ON(p->mc.check != swp_mc_can_start);
+#endif
+	qbman_swp_sys_finish(&p->sys);
+	portal_idx_map[p->desc.idx] = NULL;
+	kfree(p);
+}
+
+const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p)
+{
+	return &p->desc;
+}
+
+/**************/
+/* Interrupts */
+/**************/
+
+uint32_t qbman_swp_interrupt_get_vanish(struct qbman_swp *p)
+{
+	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISDR);
+}
+
+void qbman_swp_interrupt_set_vanish(struct qbman_swp *p, uint32_t mask)
+{
+	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISDR, mask);
+}
+
+uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p)
+{
+	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISR);
+}
+
+void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask)
+{
+	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISR, mask);
+}
+
+uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p)
+{
+	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_IER);
+}
+
+void qbman_swp_interrupt_set_trigger(struct qbman_swp *p, uint32_t mask)
+{
+	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_IER, mask);
+}
+
+int qbman_swp_interrupt_get_inhibit(struct qbman_swp *p)
+{
+	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_IIR);
+}
+
+void qbman_swp_interrupt_set_inhibit(struct qbman_swp *p, int inhibit)
+{
+	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_IIR, inhibit ? 0xffffffff : 0);
+}
+
+/***********************/
+/* Management commands */
+/***********************/
+
+/*
+ * Internal code common to all types of management commands.
+ */
+
+void *qbman_swp_mc_start(struct qbman_swp *p)
+{
+	void *ret;
+#ifdef QBMAN_CHECKING
+	QBMAN_BUG_ON(p->mc.check != swp_mc_can_start);
+#endif
+	ret = qbman_cena_write_start(&p->sys, QBMAN_CENA_SWP_CR);
+#ifdef QBMAN_CHECKING
+	if (!ret)
+		p->mc.check = swp_mc_can_submit;
+#endif
+	return ret;
+}
+
+void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint32_t cmd_verb)
+{
+	uint32_t *v = cmd;
+#ifdef QBMAN_CHECKING
+	QBMAN_BUG_ON(!(p->mc.check != swp_mc_can_submit));
+#endif
+	/* TBD: "|=" is going to hurt performance. Need to move as many fields
+	 * out of word zero, and for those that remain, the "OR" needs to occur
+	 * at the caller side. This debug check helps to catch cases where the
+	 * caller wants to OR but has forgotten to do so.
+	 */
+	QBMAN_BUG_ON((*v & cmd_verb) != *v);
+	*v = cmd_verb | p->mc.valid_bit;
+	qbman_cena_write_complete(&p->sys, QBMAN_CENA_SWP_CR, cmd);
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_poll;
+#endif
+}
+
+void *qbman_swp_mc_result(struct qbman_swp *p)
+{
+	uint32_t *ret, verb;
+#ifdef QBMAN_CHECKING
+	QBMAN_BUG_ON(p->mc.check != swp_mc_can_poll);
+#endif
+	qbman_cena_invalidate_prefetch(&p->sys,
+				       QBMAN_CENA_SWP_RR(p->mc.valid_bit));
+	ret = qbman_cena_read(&p->sys, QBMAN_CENA_SWP_RR(p->mc.valid_bit));
+	/* Remove the valid-bit - command completed iff the rest is non-zero */
+	verb = ret[0] & ~QB_VALID_BIT;
+	if (!verb)
+		return NULL;
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_start;
+#endif
+	p->mc.valid_bit ^= QB_VALID_BIT;
+	return ret;
+}
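+
+/* A management command therefore follows a fixed three-step pattern; a
+ * sketch of a caller (the real callers below use the equivalent
+ * qbman_swp_mc_complete() wrapper from qbman_portal.h):
+ *
+ *	uint32_t *cmd = qbman_swp_mc_start(p);
+ *
+ *	if (!cmd)
+ *		return -EBUSY;
+ *	encode command fields into cmd[] with qb_attr_code_encode()
+ *	qbman_swp_mc_submit(p, cmd, verb);
+ *	do {
+ *		cmd = qbman_swp_mc_result(p);
+ *	} while (!cmd);		poll until the response is complete
+ */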
+
+/***********/
+/* Enqueue */
+/***********/
+
+/* These should be const, eventually */
+static struct qb_attr_code code_eq_cmd = QB_CODE(0, 0, 2);
+static struct qb_attr_code code_eq_eqdi = QB_CODE(0, 3, 1);
+static struct qb_attr_code code_eq_dca_en = QB_CODE(0, 15, 1);
+static struct qb_attr_code code_eq_dca_pk = QB_CODE(0, 14, 1);
+/* Can't set code_eq_dca_idx width. Need qman version. Read at runtime */
+static struct qb_attr_code code_eq_orp_en = QB_CODE(0, 2, 1);
+static struct qb_attr_code code_eq_orp_is_nesn = QB_CODE(0, 31, 1);
+static struct qb_attr_code code_eq_orp_nlis = QB_CODE(0, 30, 1);
+static struct qb_attr_code code_eq_orp_seqnum = QB_CODE(0, 16, 14);
+static struct qb_attr_code code_eq_opr_id = QB_CODE(1, 0, 16);
+static struct qb_attr_code code_eq_tgt_id = QB_CODE(2, 0, 24);
+/* static struct qb_attr_code code_eq_tag = QB_CODE(3, 0, 32); */
+static struct qb_attr_code code_eq_qd_en = QB_CODE(0, 4, 1);
+static struct qb_attr_code code_eq_qd_bin = QB_CODE(4, 0, 16);
+static struct qb_attr_code code_eq_qd_pri = QB_CODE(4, 16, 4);
+static struct qb_attr_code code_eq_rsp_stash = QB_CODE(5, 16, 1);
+static struct qb_attr_code code_eq_rsp_id = QB_CODE(5, 24, 8);
+static struct qb_attr_code code_eq_rsp_lo = QB_CODE(6, 0, 32);
+
+enum qbman_eq_cmd_e {
+	/* No enqueue, primarily for plugging ORP gaps for dropped frames */
+	qbman_eq_cmd_empty,
+	/* DMA an enqueue response once complete */
+	qbman_eq_cmd_respond,
+	/* DMA an enqueue response only if the enqueue fails */
+	qbman_eq_cmd_respond_reject
+};
+
+void qbman_eq_desc_clear(struct qbman_eq_desc *d)
+{
+	memset(d, 0, sizeof(*d));
+}
+
+void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_orp_en, cl, 0);
+	qb_attr_code_encode(&code_eq_cmd, cl,
+			    respond_success ? qbman_eq_cmd_respond :
+					      qbman_eq_cmd_respond_reject);
+}
+
+void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
+			   uint32_t opr_id, uint32_t seqnum, int incomplete)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_orp_en, cl, 1);
+	qb_attr_code_encode(&code_eq_cmd, cl,
+			    respond_success ? qbman_eq_cmd_respond :
+					      qbman_eq_cmd_respond_reject);
+	qb_attr_code_encode(&code_eq_opr_id, cl, opr_id);
+	qb_attr_code_encode(&code_eq_orp_seqnum, cl, seqnum);
+	qb_attr_code_encode(&code_eq_orp_nlis, cl, !!incomplete);
+}
+
+void qbman_eq_desc_set_orp_hole(struct qbman_eq_desc *d, uint32_t opr_id,
+				uint32_t seqnum)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_orp_en, cl, 1);
+	qb_attr_code_encode(&code_eq_cmd, cl, qbman_eq_cmd_empty);
+	qb_attr_code_encode(&code_eq_opr_id, cl, opr_id);
+	qb_attr_code_encode(&code_eq_orp_seqnum, cl, seqnum);
+	qb_attr_code_encode(&code_eq_orp_nlis, cl, 0);
+	qb_attr_code_encode(&code_eq_orp_is_nesn, cl, 0);
+}
+
+void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint32_t opr_id,
+				uint32_t seqnum)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_orp_en, cl, 1);
+	qb_attr_code_encode(&code_eq_cmd, cl, qbman_eq_cmd_empty);
+	qb_attr_code_encode(&code_eq_opr_id, cl, opr_id);
+	qb_attr_code_encode(&code_eq_orp_seqnum, cl, seqnum);
+	qb_attr_code_encode(&code_eq_orp_nlis, cl, 0);
+	qb_attr_code_encode(&code_eq_orp_is_nesn, cl, 1);
+}
+
+void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
+				dma_addr_t storage_phys,
+				int stash)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode_64(&code_eq_rsp_lo, (uint64_t *)cl, storage_phys);
+	qb_attr_code_encode(&code_eq_rsp_stash, cl, !!stash);
+}
+
+void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_rsp_id, cl, (uint32_t)token);
+}
+
+void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_qd_en, cl, 0);
+	qb_attr_code_encode(&code_eq_tgt_id, cl, fqid);
+}
+
+void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
+			  uint32_t qd_bin, uint32_t qd_prio)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_qd_en, cl, 1);
+	qb_attr_code_encode(&code_eq_tgt_id, cl, qdid);
+	qb_attr_code_encode(&code_eq_qd_bin, cl, qd_bin);
+	qb_attr_code_encode(&code_eq_qd_pri, cl, qd_prio);
+}
+
+void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_eqdi, cl, !!enable);
+}
+
+void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
+			   uint32_t dqrr_idx, int park)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_dca_en, cl, !!enable);
+	if (enable) {
+		qb_attr_code_encode(&code_eq_dca_pk, cl, !!park);
+		qb_attr_code_encode(&code_eq_dca_idx, cl, dqrr_idx);
+	}
+}
+
+#define EQAR_IDX(eqar)     ((eqar) & 0x7)
+#define EQAR_VB(eqar)      ((eqar) & 0x80)
+#define EQAR_SUCCESS(eqar) ((eqar) & 0x100)
+static int qbman_swp_enqueue_array_mode(struct qbman_swp *s,
+					const struct qbman_eq_desc *d,
+				 const struct qbman_fd *fd)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_EQAR);
+
+	pr_debug("EQAR=%08x\n", eqar);
+	if (!EQAR_SUCCESS(eqar))
+		return -EBUSY;
+	p = qbman_cena_write_start_wo_shadow(&s->sys,
+			QBMAN_CENA_SWP_EQCR(EQAR_IDX(eqar)));
+	word_copy(&p[1], &cl[1], 7);
+	word_copy(&p[8], fd, sizeof(*fd) >> 2);
+	/* Set the verb byte, have to substitute in the valid-bit */
+	lwsync();
+	p[0] = cl[0] | EQAR_VB(eqar);
+	qbman_cena_write_complete_wo_shadow(&s->sys,
+			QBMAN_CENA_SWP_EQCR(EQAR_IDX(eqar)));
+	return 0;
+}
+
+static int qbman_swp_enqueue_ring_mode(struct qbman_swp *s,
+				       const struct qbman_eq_desc *d,
+				const struct qbman_fd *fd)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci;
+	uint8_t diff;
+
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cena_read_reg(&s->sys,
+				QBMAN_CENA_SWP_EQCR_CI) & 0xF;
+		diff = qm_cyc_diff(QBMAN_EQCR_SIZE,
+				   eqcr_ci, s->eqcr.ci);
+		s->eqcr.available += diff;
+		if (!diff)
+			return -EBUSY;
+	}
+
+	p = qbman_cena_write_start_wo_shadow(&s->sys,
+		QBMAN_CENA_SWP_EQCR(s->eqcr.pi & 7));
+	word_copy(&p[1], &cl[1], 7);
+	word_copy(&p[8], fd, sizeof(*fd) >> 2);
+	lwsync();
+	/* Set the verb byte, have to substitute in the valid-bit */
+	p[0] = cl[0] | s->eqcr.pi_vb;
+	qbman_cena_write_complete_wo_shadow(&s->sys,
+		QBMAN_CENA_SWP_EQCR(s->eqcr.pi & 7));
+	s->eqcr.pi++;
+	s->eqcr.pi &= 0xF;
+	s->eqcr.available--;
+	if (!(s->eqcr.pi & 7))
+		s->eqcr.pi_vb ^= QB_VALID_BIT;
+	return 0;
+}
+
+int qbman_swp_fill_ring(struct qbman_swp *s,
+			const struct qbman_eq_desc *d,
+			const struct qbman_fd *fd,
+			__attribute__((unused)) uint8_t burst_index)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci;
+	uint8_t diff;
+
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cena_read_reg(&s->sys,
+				QBMAN_CENA_SWP_EQCR_CI) & 0xF;
+		diff = qm_cyc_diff(QBMAN_EQCR_SIZE,
+				   eqcr_ci, s->eqcr.ci);
+		s->eqcr.available += diff;
+		if (!diff)
+			return -EBUSY;
+	}
+	p = qbman_cena_write_start_wo_shadow(&s->sys,
+		QBMAN_CENA_SWP_EQCR((s->eqcr.pi/* +burst_index */) & 7));
+	/* word_copy(&p[1], &cl[1], 7); */
+	memcpy(&p[1], &cl[1], 7 * 4);
+	/* word_copy(&p[8], fd, sizeof(*fd) >> 2); */
+	memcpy(&p[8], fd, sizeof(struct qbman_fd));
+
+	/* lwsync(); */
+	p[0] = cl[0] | s->eqcr.pi_vb;
+
+	s->eqcr.pi++;
+	s->eqcr.pi &= 0xF;
+	s->eqcr.available--;
+	if (!(s->eqcr.pi & 7))
+		s->eqcr.pi_vb ^= QB_VALID_BIT;
+
+	return 0;
+}
+
+int qbman_swp_flush_ring(struct qbman_swp *s)
+{
+	void *ptr = s->sys.addr_cena;
+
+	dcbf((uint64_t)ptr);
+	dcbf((uint64_t)ptr + 0x40);
+	dcbf((uint64_t)ptr + 0x80);
+	dcbf((uint64_t)ptr + 0xc0);
+	dcbf((uint64_t)ptr + 0x100);
+	dcbf((uint64_t)ptr + 0x140);
+	dcbf((uint64_t)ptr + 0x180);
+	dcbf((uint64_t)ptr + 0x1c0);
+
+	return 0;
+}
+
+void qbman_sync(void)
+{
+	lwsync();
+}
+
+int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
+		      const struct qbman_fd *fd)
+{
+	if (s->sys.eqcr_mode == qman_eqcr_vb_array)
+		return qbman_swp_enqueue_array_mode(s, d, fd);
+	else    /* Use ring mode by default */
+		return qbman_swp_enqueue_ring_mode(s, d, fd);
+}
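+
+/* Illustrative enqueue sequence (only the qbman_* calls are real; 'fqid'
+ * and 'fd' are assumed caller-supplied). Both modes return -EBUSY when
+ * EQCR has no room, so callers typically retry:
+ *
+ *	struct qbman_eq_desc ed;
+ *
+ *	qbman_eq_desc_clear(&ed);
+ *	qbman_eq_desc_set_no_orp(&ed, 0);	respond only on rejection
+ *	qbman_eq_desc_set_fq(&ed, fqid);
+ *	while (qbman_swp_enqueue(s, &ed, fd) == -EBUSY)
+ *		;
+ */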
+
+/*************************/
+/* Static (push) dequeue */
+/*************************/
+
+void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled)
+{
+	struct qb_attr_code code = CODE_SDQCR_DQSRC(channel_idx);
+
+	QBMAN_BUG_ON(channel_idx > 15);
+	*enabled = (int)qb_attr_code_decode(&code, &s->sdq);
+}
+
+void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable)
+{
+	uint16_t dqsrc;
+	struct qb_attr_code code = CODE_SDQCR_DQSRC(channel_idx);
+
+	QBMAN_BUG_ON(channel_idx > 15);
+	qb_attr_code_encode(&code, &s->sdq, !!enable);
+	/* Read back the complete source map. If no channels are enabled
+	 * the SDQCR must be 0 or else QMan will assert errors
+	 */
+	dqsrc = (uint16_t)qb_attr_code_decode(&code_sdqcr_dqsrc, &s->sdq);
+	if (dqsrc != 0)
+		qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_SDQCR, s->sdq);
+	else
+		qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_SDQCR, 0);
+}
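+
+/* Sketch: toggling static dequeue on a portal channel (the index 0 here
+ * is only an example; valid indices are 0-15):
+ *
+ *	qbman_swp_push_set(s, 0, 1);	start push dequeuing
+ *	...
+ *	qbman_swp_push_set(s, 0, 0);	stop; SDQCR is written as 0 once
+ *					no channels remain enabled
+ */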
+
+/***************************/
+/* Volatile (pull) dequeue */
+/***************************/
+
+/* These should be const, eventually */
+static struct qb_attr_code code_pull_dct = QB_CODE(0, 0, 2);
+static struct qb_attr_code code_pull_dt = QB_CODE(0, 2, 2);
+static struct qb_attr_code code_pull_rls = QB_CODE(0, 4, 1);
+static struct qb_attr_code code_pull_stash = QB_CODE(0, 5, 1);
+static struct qb_attr_code code_pull_numframes = QB_CODE(0, 8, 4);
+static struct qb_attr_code code_pull_token = QB_CODE(0, 16, 8);
+static struct qb_attr_code code_pull_dqsource = QB_CODE(1, 0, 24);
+static struct qb_attr_code code_pull_rsp_lo = QB_CODE(2, 0, 32);
+
+enum qb_pull_dt_e {
+	qb_pull_dt_channel,
+	qb_pull_dt_workqueue,
+	qb_pull_dt_framequeue
+};
+
+void qbman_pull_desc_clear(struct qbman_pull_desc *d)
+{
+	memset(d, 0, sizeof(*d));
+}
+
+void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
+				 struct qbman_result *storage,
+				 dma_addr_t storage_phys,
+				 int stash)
+{
+	uint32_t *cl = qb_cl(d);
+	/* Squiggle the pointer 'storage' into the extra 2 words of the
+	 * descriptor (which aren't copied to the hw command)
+	 */
+	*(void **)&cl[4] = storage;
+	if (!storage) {
+		qb_attr_code_encode(&code_pull_rls, cl, 0);
+		return;
+	}
+	qb_attr_code_encode(&code_pull_rls, cl, 1);
+	qb_attr_code_encode(&code_pull_stash, cl, !!stash);
+	qb_attr_code_encode_64(&code_pull_rsp_lo, (uint64_t *)cl, storage_phys);
+}
+
+void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d, uint8_t numframes)
+{
+	uint32_t *cl = qb_cl(d);
+
+	QBMAN_BUG_ON(!numframes || (numframes > 16));
+	qb_attr_code_encode(&code_pull_numframes, cl,
+			    (uint32_t)(numframes - 1));
+}
+
+void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_pull_token, cl, token);
+}
+
+void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_pull_dct, cl, 1);
+	qb_attr_code_encode(&code_pull_dt, cl, qb_pull_dt_framequeue);
+	qb_attr_code_encode(&code_pull_dqsource, cl, fqid);
+}
+
+void qbman_pull_desc_set_wq(struct qbman_pull_desc *d, uint32_t wqid,
+			    enum qbman_pull_type_e dct)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_pull_dct, cl, dct);
+	qb_attr_code_encode(&code_pull_dt, cl, qb_pull_dt_workqueue);
+	qb_attr_code_encode(&code_pull_dqsource, cl, wqid);
+}
+
+void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
+				 enum qbman_pull_type_e dct)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_pull_dct, cl, dct);
+	qb_attr_code_encode(&code_pull_dt, cl, qb_pull_dt_channel);
+	qb_attr_code_encode(&code_pull_dqsource, cl, chid);
+}
+
+int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
+{
+	uint32_t *p;
+	uint32_t *cl = qb_cl(d);
+
+	if (!atomic_dec_and_test(&s->vdq.busy)) {
+		atomic_inc(&s->vdq.busy);
+		return -EBUSY;
+	}
+	s->vdq.storage = *(void **)&cl[4];
+	/* We use portal index +1 as token so that 0 still indicates
+	 * that the result isn't valid yet.
+	 */
+	qb_attr_code_encode(&code_pull_token, cl, s->desc.idx + 1);
+	p = qbman_cena_write_start_wo_shadow(&s->sys, QBMAN_CENA_SWP_VDQCR);
+	word_copy(&p[1], &cl[1], 3);
+	/* Set the verb byte, have to substitute in the valid-bit */
+	lwsync();
+	p[0] = cl[0] | s->vdq.valid_bit;
+	s->vdq.valid_bit ^= QB_VALID_BIT;
+	qbman_cena_write_complete_wo_shadow(&s->sys, QBMAN_CENA_SWP_VDQCR);
+	return 0;
+}
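+
+/* Illustrative pull-dequeue into caller memory ('storage' and
+ * 'storage_iova' are assumed to be a caller-allocated, DMA-mapped
+ * qbman_result array):
+ *
+ *	struct qbman_pull_desc pd;
+ *
+ *	qbman_pull_desc_clear(&pd);
+ *	qbman_pull_desc_set_numframes(&pd, 4);
+ *	qbman_pull_desc_set_fq(&pd, fqid);
+ *	qbman_pull_desc_set_storage(&pd, storage, storage_iova, 1);
+ *	if (!qbman_swp_pull(s, &pd))
+ *		poll qbman_result_has_new_result(s, storage) until true
+ */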
+
+/****************/
+/* Polling DQRR */
+/****************/
+
+static struct qb_attr_code code_dqrr_verb = QB_CODE(0, 0, 8);
+static struct qb_attr_code code_dqrr_response = QB_CODE(0, 0, 7);
+static struct qb_attr_code code_dqrr_stat = QB_CODE(0, 8, 8);
+static struct qb_attr_code code_dqrr_seqnum = QB_CODE(0, 16, 14);
+static struct qb_attr_code code_dqrr_odpid = QB_CODE(1, 0, 16);
+/* static struct qb_attr_code code_dqrr_tok = QB_CODE(1, 24, 8); */
+static struct qb_attr_code code_dqrr_fqid = QB_CODE(2, 0, 24);
+static struct qb_attr_code code_dqrr_byte_count = QB_CODE(4, 0, 32);
+static struct qb_attr_code code_dqrr_frame_count = QB_CODE(5, 0, 24);
+static struct qb_attr_code code_dqrr_ctx_lo = QB_CODE(6, 0, 32);
+
+#define QBMAN_RESULT_DQ        0x60
+#define QBMAN_RESULT_FQRN      0x21
+#define QBMAN_RESULT_FQRNI     0x22
+#define QBMAN_RESULT_FQPN      0x24
+#define QBMAN_RESULT_FQDAN     0x25
+#define QBMAN_RESULT_CDAN      0x26
+#define QBMAN_RESULT_CSCN_MEM  0x27
+#define QBMAN_RESULT_CGCU      0x28
+#define QBMAN_RESULT_BPSCN     0x29
+#define QBMAN_RESULT_CSCN_WQ   0x2a
+
+static struct qb_attr_code code_dqpi_pi = QB_CODE(0, 0, 4);
+
+/* NULL return if there are no unconsumed DQRR entries. Returns a DQRR entry
+ * only once, so repeated calls can return a sequence of DQRR entries, without
+ * requiring they be consumed immediately or in any particular order.
+ */
+const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s)
+{
+	uint32_t verb;
+	uint32_t response_verb;
+	uint32_t flags;
+	const struct qbman_result *dq;
+	const uint32_t *p;
+
+	/* Before using valid-bit to detect if something is there, we have to
+	 * handle the case of the DQRR reset bug...
+	 */
+	if (unlikely(s->dqrr.reset_bug)) {
+		/* We pick up new entries by cache-inhibited producer index,
+		 * which means that a non-coherent mapping would require us to
+		 * invalidate and read *only* once that PI has indicated that
+		 * there's an entry here. The first trip around the DQRR ring
+		 * will be much less efficient than all subsequent trips around
+		 * it...
+		 */
+		uint32_t dqpi = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_DQPI);
+		uint32_t pi = qb_attr_code_decode(&code_dqpi_pi, &dqpi);
+		/* there are new entries iff pi != next_idx */
+		if (pi == s->dqrr.next_idx)
+			return NULL;
+		/* if next_idx is/was the last ring index, and 'pi' is
+		 * different, we can disable the workaround as all the ring
+		 * entries have now been DMA'd to so valid-bit checking is
+		 * repaired. Note: this logic needs to be based on next_idx
+		 * (which increments one at a time), rather than on pi (which
+		 * can burst and wrap-around between our snapshots of it).
+		 */
+		QBMAN_BUG_ON((s->dqrr.dqrr_size - 1) < 0);
+		if (s->dqrr.next_idx == (s->dqrr.dqrr_size - 1u)) {
+			pr_debug("DEBUG: next_idx=%d, pi=%d, clear reset bug\n",
+				 s->dqrr.next_idx, pi);
+			s->dqrr.reset_bug = 0;
+		}
+		qbman_cena_invalidate_prefetch(&s->sys,
+				QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
+	}
+	dq = qbman_cena_read_wo_shadow(&s->sys,
+				       QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
+	p = qb_cl(dq);
+	verb = qb_attr_code_decode(&code_dqrr_verb, p);
+	/* If the valid-bit isn't of the expected polarity, nothing there. Note,
+	 * in the DQRR reset bug workaround, we shouldn't need to skip this
+	 * check, because we've already determined that a new entry is available
+	 * and we've invalidated the cacheline before reading it, so the
+	 * valid-bit behaviour is repaired and should tell us what we already
+	 * knew from reading PI.
+	 */
+	if ((verb & QB_VALID_BIT) != s->dqrr.valid_bit)
+		return NULL;
+
+	/* There's something there. Move "next_idx" attention to the next ring
+	 * entry (and prefetch it) before returning what we found.
+	 */
+	s->dqrr.next_idx++;
+	if (s->dqrr.next_idx == s->dqrr.dqrr_size) {
+		s->dqrr.next_idx = 0;
+		s->dqrr.valid_bit ^= QB_VALID_BIT;
+	}
+	/* If this is the final response to a volatile dequeue command
+	 * indicate that the vdq is no longer busy.
+	 */
+	flags = qbman_result_DQ_flags(dq);
+	response_verb = qb_attr_code_decode(&code_dqrr_response, &verb);
+	if ((response_verb == QBMAN_RESULT_DQ) &&
+	    (flags & QBMAN_DQ_STAT_VOLATILE) &&
+	    (flags & QBMAN_DQ_STAT_EXPIRED))
+		atomic_inc(&s->vdq.busy);
+
+	return dq;
+}
+
+/* Consume DQRR entries previously returned from qbman_swp_dqrr_next(). */
+void qbman_swp_dqrr_consume(struct qbman_swp *s,
+			    const struct qbman_result *dq)
+{
+	qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_DCAP, QBMAN_IDX_FROM_DQRR(dq));
+}
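+
+/* The pair above composes into the usual DQRR poll loop (sketch only;
+ * process_fd() is a hypothetical consumer):
+ *
+ *	const struct qbman_result *dq;
+ *
+ *	while ((dq = qbman_swp_dqrr_next(s)) != NULL) {
+ *		if (qbman_result_is_DQ(dq))
+ *			process_fd(qbman_result_DQ_fd(dq));
+ *		qbman_swp_dqrr_consume(s, dq);
+ *	}
+ */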
+
+/*********************************/
+/* Polling user-provided storage */
+/*********************************/
+
+int qbman_result_has_new_result(__attribute__((unused)) struct qbman_swp *s,
+				const struct qbman_result *dq)
+{
+	/* To avoid converting the little-endian DQ entry to host-endian prior
+	 * to us knowing whether there is a valid entry or not (and running the
+	 * risk of corrupting the incoming hardware LE write), we detect in
+	 * hardware endianness rather than host. This means we need a different
+	 * "code" depending on whether we are BE or LE in software, which is
+	 * where DQRR_TOK_OFFSET comes in...
+	 */
+	static struct qb_attr_code code_dqrr_tok_detect =
+					QB_CODE(0, DQRR_TOK_OFFSET, 8);
+	/* The user trying to poll for a result treats "dq" as const. It is
+	 * however the same address that was provided to us non-const in the
+	 * first place, for directing hardware DMA to. So we can cast away the
+	 * const because it is mutable from our perspective.
+	 */
+	uint32_t *p = (uint32_t *)(unsigned long)qb_cl(dq);
+	uint32_t token;
+
+	token = qb_attr_code_decode(&code_dqrr_tok_detect, &p[1]);
+	if (token == 0)
+		return 0;
+	/* Entry is valid - overwrite token back to 0 so
+	 * a) If this memory is reused the token will be 0
+	 * b) If someone calls "has_new_result()" again on this entry it
+	 *    will not appear to be new
+	 */
+	qb_attr_code_encode(&code_dqrr_tok_detect, &p[1], 0);
+
+	/* Only now do we convert from hardware to host endianness. Also, as we
+	 * are returning success, the user has promised not to call us again, so
+	 * there's no risk of us converting the endianness twice...
+	 */
+	make_le32_n(p, 16);
+	return 1;
+}
+
+int qbman_check_command_complete(struct qbman_swp *s,
+				 const struct qbman_result *dq)
+{
+	/* To avoid converting the little-endian DQ entry to host-endian prior
+	 * to us knowing whether there is a valid entry or not (and running the
+	 * risk of corrupting the incoming hardware LE write), we detect in
+	 * hardware endianness rather than host. This means we need a different
+	 * "code" depending on whether we are BE or LE in software, which is
+	 * where DQRR_TOK_OFFSET comes in...
+	 */
+	static struct qb_attr_code code_dqrr_tok_detect =
+					QB_CODE(0, DQRR_TOK_OFFSET, 8);
+	/* The user trying to poll for a result treats "dq" as const. It is
+	 * however the same address that was provided to us non-const in the
+	 * first place, for directing hardware DMA to. So we can cast away the
+	 * const because it is mutable from our perspective.
+	 */
+	uint32_t *p = (uint32_t *)(unsigned long)qb_cl(dq);
+	uint32_t token;
+
+	token = qb_attr_code_decode(&code_dqrr_tok_detect, &p[1]);
+	if (token == 0)
+		return 0;
+	/* TODO: Remove qbman_swp from parameters and make it a local
+	 * once we've tested the reserve portal map change
+	 */
+	s = portal_idx_map[token - 1];
+	/* When the token is set it indicates that the VDQ command has been
+	 * fetched by QBMan, which is now working on it. It is then safe for
+	 * software to issue another VDQ command, so increment the busy
+	 * variable.
+	 */
+	if (s->vdq.storage == dq) {
+		s->vdq.storage = NULL;
+		atomic_inc(&s->vdq.busy);
+	}
+	return 1;
+}
+
+/********************************/
+/* Categorising qbman results   */
+/********************************/
+
+static struct qb_attr_code code_result_in_mem =
+			QB_CODE(0, QBMAN_RESULT_VERB_OFFSET_IN_MEM, 7);
+
+static inline int __qbman_result_is_x(const struct qbman_result *dq,
+				      uint32_t x)
+{
+	const uint32_t *p = qb_cl(dq);
+	uint32_t response_verb = qb_attr_code_decode(&code_dqrr_response, p);
+
+	return (response_verb == x);
+}
+
+static inline int __qbman_result_is_x_in_mem(const struct qbman_result *dq,
+					     uint32_t x)
+{
+	const uint32_t *p = qb_cl(dq);
+	uint32_t response_verb = qb_attr_code_decode(&code_result_in_mem, p);
+
+	return (response_verb == x);
+}
+
+int qbman_result_is_DQ(const struct qbman_result *dq)
+{
+	return __qbman_result_is_x(dq, QBMAN_RESULT_DQ);
+}
+
+int qbman_result_is_FQDAN(const struct qbman_result *dq)
+{
+	return __qbman_result_is_x(dq, QBMAN_RESULT_FQDAN);
+}
+
+int qbman_result_is_CDAN(const struct qbman_result *dq)
+{
+	return __qbman_result_is_x(dq, QBMAN_RESULT_CDAN);
+}
+
+int qbman_result_is_CSCN(const struct qbman_result *dq)
+{
+	return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_CSCN_MEM) ||
+		__qbman_result_is_x(dq, QBMAN_RESULT_CSCN_WQ);
+}
+
+int qbman_result_is_BPSCN(const struct qbman_result *dq)
+{
+	return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_BPSCN);
+}
+
+int qbman_result_is_CGCU(const struct qbman_result *dq)
+{
+	return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_CGCU);
+}
+
+int qbman_result_is_FQRN(const struct qbman_result *dq)
+{
+	return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_FQRN);
+}
+
+int qbman_result_is_FQRNI(const struct qbman_result *dq)
+{
+	return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_FQRNI);
+}
+
+int qbman_result_is_FQPN(const struct qbman_result *dq)
+{
+	return __qbman_result_is_x(dq, QBMAN_RESULT_FQPN);
+}
+
+/*********************************/
+/* Parsing frame dequeue results */
+/*********************************/
+
+/* These APIs assume qbman_result_is_DQ() is TRUE */
+
+uint32_t qbman_result_DQ_flags(const struct qbman_result *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return qb_attr_code_decode(&code_dqrr_stat, p);
+}
+
+uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return (uint16_t)qb_attr_code_decode(&code_dqrr_seqnum, p);
+}
+
+uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return (uint16_t)qb_attr_code_decode(&code_dqrr_odpid, p);
+}
+
+uint32_t qbman_result_DQ_fqid(const struct qbman_result *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return qb_attr_code_decode(&code_dqrr_fqid, p);
+}
+
+uint32_t qbman_result_DQ_byte_count(const struct qbman_result *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return qb_attr_code_decode(&code_dqrr_byte_count, p);
+}
+
+uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return qb_attr_code_decode(&code_dqrr_frame_count, p);
+}
+
+uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq)
+{
+	const uint64_t *p = (const uint64_t *)qb_cl(dq);
+
+	return qb_attr_code_decode_64(&code_dqrr_ctx_lo, p);
+}
+
+const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return (const struct qbman_fd *)&p[8];
+}
+
+/**************************************/
+/* Parsing state-change notifications */
+/**************************************/
+
+static struct qb_attr_code code_scn_state = QB_CODE(0, 16, 8);
+static struct qb_attr_code code_scn_rid = QB_CODE(1, 0, 24);
+static struct qb_attr_code code_scn_state_in_mem =
+			QB_CODE(0, SCN_STATE_OFFSET_IN_MEM, 8);
+static struct qb_attr_code code_scn_rid_in_mem =
+			QB_CODE(1, SCN_RID_OFFSET_IN_MEM, 24);
+static struct qb_attr_code code_scn_ctx_lo = QB_CODE(2, 0, 32);
+
+uint8_t qbman_result_SCN_state(const struct qbman_result *scn)
+{
+	const uint32_t *p = qb_cl(scn);
+
+	return (uint8_t)qb_attr_code_decode(&code_scn_state, p);
+}
+
+uint32_t qbman_result_SCN_rid(const struct qbman_result *scn)
+{
+	const uint32_t *p = qb_cl(scn);
+
+	return qb_attr_code_decode(&code_scn_rid, p);
+}
+
+uint64_t qbman_result_SCN_ctx(const struct qbman_result *scn)
+{
+	const uint64_t *p = (const uint64_t *)qb_cl(scn);
+
+	return qb_attr_code_decode_64(&code_scn_ctx_lo, p);
+}
+
+uint8_t qbman_result_SCN_state_in_mem(const struct qbman_result *scn)
+{
+	const uint32_t *p = qb_cl(scn);
+
+	return (uint8_t)qb_attr_code_decode(&code_scn_state_in_mem, p);
+}
+
+uint32_t qbman_result_SCN_rid_in_mem(const struct qbman_result *scn)
+{
+	const uint32_t *p = qb_cl(scn);
+	uint32_t result_rid;
+
+	result_rid = qb_attr_code_decode(&code_scn_rid_in_mem, p);
+	return make_le24(result_rid);
+}
+
+/*****************/
+/* Parsing BPSCN */
+/*****************/
+uint16_t qbman_result_bpscn_bpid(const struct qbman_result *scn)
+{
+	return (uint16_t)qbman_result_SCN_rid_in_mem(scn) & 0x3FFF;
+}
+
+int qbman_result_bpscn_has_free_bufs(const struct qbman_result *scn)
+{
+	return !(int)(qbman_result_SCN_state_in_mem(scn) & 0x1);
+}
+
+int qbman_result_bpscn_is_depleted(const struct qbman_result *scn)
+{
+	return (int)(qbman_result_SCN_state_in_mem(scn) & 0x2);
+}
+
+int qbman_result_bpscn_is_surplus(const struct qbman_result *scn)
+{
+	return (int)(qbman_result_SCN_state_in_mem(scn) & 0x4);
+}
+
+uint64_t qbman_result_bpscn_ctx(const struct qbman_result *scn)
+{
+	uint64_t ctx;
+	uint32_t ctx_hi, ctx_lo;
+
+	ctx = qbman_result_SCN_ctx(scn);
+	ctx_hi = upper32(ctx);
+	ctx_lo = lower32(ctx);
+	return ((uint64_t)make_le32(ctx_hi) << 32 |
+		(uint64_t)make_le32(ctx_lo));
+}
+
+/*****************/
+/* Parsing CGCU  */
+/*****************/
+uint16_t qbman_result_cgcu_cgid(const struct qbman_result *scn)
+{
+	return (uint16_t)qbman_result_SCN_rid_in_mem(scn) & 0xFFFF;
+}
+
+uint64_t qbman_result_cgcu_icnt(const struct qbman_result *scn)
+{
+	uint64_t ctx;
+	uint32_t ctx_hi, ctx_lo;
+
+	ctx = qbman_result_SCN_ctx(scn);
+	ctx_hi = upper32(ctx);
+	ctx_lo = lower32(ctx);
+	return ((uint64_t)(make_le32(ctx_hi) & 0xFF) << 32) |
+		(uint64_t)make_le32(ctx_lo);
+}
+
+/******************/
+/* Buffer release */
+/******************/
+
+/* These should be const, eventually */
+/* static struct qb_attr_code code_release_num = QB_CODE(0, 0, 3); */
+static struct qb_attr_code code_release_set_me = QB_CODE(0, 5, 1);
+static struct qb_attr_code code_release_rcdi = QB_CODE(0, 6, 1);
+static struct qb_attr_code code_release_bpid = QB_CODE(0, 16, 16);
+
+void qbman_release_desc_clear(struct qbman_release_desc *d)
+{
+	uint32_t *cl;
+
+	memset(d, 0, sizeof(*d));
+	cl = qb_cl(d);
+	qb_attr_code_encode(&code_release_set_me, cl, 1);
+}
+
+void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint32_t bpid)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_release_bpid, cl, bpid);
+}
+
+void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_release_rcdi, cl, !!enable);
+}
+
+#define RAR_IDX(rar)     ((rar) & 0x7)
+#define RAR_VB(rar)      ((rar) & 0x80)
+#define RAR_SUCCESS(rar) ((rar) & 0x100)
+
+int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
+		      const uint64_t *buffers, unsigned int num_buffers)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t rar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_RAR);
+
+	pr_debug("RAR=%08x\n", rar);
+	if (!RAR_SUCCESS(rar))
+		return -EBUSY;
+	QBMAN_BUG_ON(!num_buffers || (num_buffers > 7));
+	/* Start the release command */
+	p = qbman_cena_write_start_wo_shadow(&s->sys,
+					     QBMAN_CENA_SWP_RCR(RAR_IDX(rar)));
+	/* Copy the caller's buffer pointers to the command */
+	u64_to_le32_copy(&p[2], buffers, num_buffers);
+	/* Set the verb byte, have to substitute in the valid-bit and the number
+	 * of buffers.
+	 */
+	lwsync();
+	p[0] = cl[0] | RAR_VB(rar) | num_buffers;
+	qbman_cena_write_complete_wo_shadow(&s->sys,
+					    QBMAN_CENA_SWP_RCR(RAR_IDX(rar)));
+	return 0;
+}
+
+/*******************/
+/* Buffer acquires */
+/*******************/
+
+/* These should be const, eventually */
+static struct qb_attr_code code_acquire_bpid = QB_CODE(0, 16, 16);
+static struct qb_attr_code code_acquire_num = QB_CODE(1, 0, 3);
+static struct qb_attr_code code_acquire_r_num = QB_CODE(1, 0, 3);
+
+int qbman_swp_acquire(struct qbman_swp *s, uint32_t bpid, uint64_t *buffers,
+		      unsigned int num_buffers)
+{
+	uint32_t *p;
+	uint32_t rslt, num;
+
+	QBMAN_BUG_ON(!num_buffers || (num_buffers > 7));
+
+	/* Start the management command */
+	p = qbman_swp_mc_start(s);
+
+	if (!p)
+		return -EBUSY;
+
+	/* Encode the caller-provided attributes */
+	qb_attr_code_encode(&code_acquire_bpid, p, bpid);
+	qb_attr_code_encode(&code_acquire_num, p, num_buffers);
+
+	/* Complete the management command */
+	p = qbman_swp_mc_complete(s, p, p[0] | QBMAN_MC_ACQUIRE);
+
+	/* Decode the outcome */
+	rslt = qb_attr_code_decode(&code_generic_rslt, p);
+	num = qb_attr_code_decode(&code_acquire_r_num, p);
+	QBMAN_BUG_ON(qb_attr_code_decode(&code_generic_verb, p) !=
+		     QBMAN_MC_ACQUIRE);
+
+	/* Determine success or failure */
+	if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+		pr_err("Acquire buffers from BPID 0x%x failed, code=0x%02x\n",
+		       bpid, rslt);
+		return -EIO;
+	}
+	QBMAN_BUG_ON(num > num_buffers);
+	/* Copy the acquired buffers to the caller's array */
+	u64_from_le32_copy(buffers, &p[2], num);
+	return (int)num;
+}
+
+/*****************/
+/* FQ management */
+/*****************/
+
+static struct qb_attr_code code_fqalt_fqid = QB_CODE(1, 0, 32);
+
+static int qbman_swp_alt_fq_state(struct qbman_swp *s, uint32_t fqid,
+				  uint8_t alt_fq_verb)
+{
+	uint32_t *p;
+	uint32_t rslt;
+
+	/* Start the management command */
+	p = qbman_swp_mc_start(s);
+	if (!p)
+		return -EBUSY;
+
+	qb_attr_code_encode(&code_fqalt_fqid, p, fqid);
+	/* Complete the management command */
+	p = qbman_swp_mc_complete(s, p, p[0] | alt_fq_verb);
+
+	/* Decode the outcome */
+	rslt = qb_attr_code_decode(&code_generic_rslt, p);
+	QBMAN_BUG_ON(qb_attr_code_decode(&code_generic_verb, p) != alt_fq_verb);
+
+	/* Determine success or failure */
+	if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+		pr_err("ALT FQID %d failed: verb = 0x%08x, code = 0x%02x\n",
+		       fqid, alt_fq_verb, rslt);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+int qbman_swp_fq_schedule(struct qbman_swp *s, uint32_t fqid)
+{
+	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_SCHEDULE);
+}
+
+int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid)
+{
+	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_FORCE);
+}
+
+int qbman_swp_fq_xon(struct qbman_swp *s, uint32_t fqid)
+{
+	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_XON);
+}
+
+int qbman_swp_fq_xoff(struct qbman_swp *s, uint32_t fqid)
+{
+	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_XOFF);
+}
+
+/**********************/
+/* Channel management */
+/**********************/
+
+static struct qb_attr_code code_cdan_cid = QB_CODE(0, 16, 12);
+static struct qb_attr_code code_cdan_we = QB_CODE(1, 0, 8);
+static struct qb_attr_code code_cdan_en = QB_CODE(1, 8, 1);
+static struct qb_attr_code code_cdan_ctx_lo = QB_CODE(2, 0, 32);
+
+/* Hide "ICD" for now as we don't use it, don't set it, and don't test it, so it
+ * would be irresponsible to expose it.
+ */
+#define CODE_CDAN_WE_EN    0x1
+#define CODE_CDAN_WE_CTX   0x4
+
+static int qbman_swp_CDAN_set(struct qbman_swp *s, uint16_t channelid,
+			      uint8_t we_mask, uint8_t cdan_en,
+			      uint64_t ctx)
+{
+	uint32_t *p;
+	uint32_t rslt;
+
+	/* Start the management command */
+	p = qbman_swp_mc_start(s);
+	if (!p)
+		return -EBUSY;
+
+	/* Encode the caller-provided attributes */
+	qb_attr_code_encode(&code_cdan_cid, p, channelid);
+	qb_attr_code_encode(&code_cdan_we, p, we_mask);
+	qb_attr_code_encode(&code_cdan_en, p, cdan_en);
+	qb_attr_code_encode_64(&code_cdan_ctx_lo, (uint64_t *)p, ctx);
+	/* Complete the management command */
+	p = qbman_swp_mc_complete(s, p, p[0] | QBMAN_WQCHAN_CONFIGURE);
+
+	/* Decode the outcome */
+	rslt = qb_attr_code_decode(&code_generic_rslt, p);
+	QBMAN_BUG_ON(qb_attr_code_decode(&code_generic_verb, p)
+					!= QBMAN_WQCHAN_CONFIGURE);
+
+	/* Determine success or failure */
+	if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+		pr_err("CDAN cQID %d failed: code = 0x%02x\n",
+		       channelid, rslt);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+int qbman_swp_CDAN_set_context(struct qbman_swp *s, uint16_t channelid,
+			       uint64_t ctx)
+{
+	return qbman_swp_CDAN_set(s, channelid,
+				  CODE_CDAN_WE_CTX,
+				  0, ctx);
+}
+
+int qbman_swp_CDAN_enable(struct qbman_swp *s, uint16_t channelid)
+{
+	return qbman_swp_CDAN_set(s, channelid,
+				  CODE_CDAN_WE_EN,
+				  1, 0);
+}
+
+int qbman_swp_CDAN_disable(struct qbman_swp *s, uint16_t channelid)
+{
+	return qbman_swp_CDAN_set(s, channelid,
+				  CODE_CDAN_WE_EN,
+				  0, 0);
+}
+
+int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
+				      uint64_t ctx)
+{
+	return qbman_swp_CDAN_set(s, channelid,
+				  CODE_CDAN_WE_EN | CODE_CDAN_WE_CTX,
+				  1, ctx);
+}
+
+uint8_t qbman_get_dqrr_idx(struct qbman_result *dqrr)
+{
+	return QBMAN_IDX_FROM_DQRR(dqrr);
+}
+
+struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx)
+{
+	struct qbman_result *dq;
+
+	dq = qbman_cena_read(&s->sys, QBMAN_CENA_SWP_DQRR(idx));
+	return dq;
+}
+
+int qbman_swp_send_multiple(struct qbman_swp *s,
+			    const struct qbman_eq_desc *d,
+			    const struct qbman_fd *fd,
+			    int frames_to_send)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci;
+	uint8_t diff;
+	int sent = 0;
+	int i;
+	int initial_pi = s->eqcr.pi;
+	uint64_t start_pointer;
+
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cena_read_reg(&s->sys,
+				 QBMAN_CENA_SWP_EQCR_CI) & 0xF;
+		diff = qm_cyc_diff(QBMAN_EQCR_SIZE,
+				   eqcr_ci, s->eqcr.ci);
+		if (!diff)
+			goto done;
+		s->eqcr.available += diff;
+	}
+
+	/* Try to send frames_to_send frames, stopping early if the ring
+	 * runs out of space.
+	 */
+	while (s->eqcr.available && frames_to_send--) {
+		p = qbman_cena_write_start_wo_shadow_fast(&s->sys,
+					QBMAN_CENA_SWP_EQCR((initial_pi) & 7));
+		/* Write the command (except for the verb byte) and the FD */
+		memcpy(&p[1], &cl[1], 7 * 4);
+		memcpy(&p[8], &fd[sent], sizeof(struct qbman_fd));
+
+		initial_pi++;
+		initial_pi &= 0xF;
+		s->eqcr.available--;
+		sent++;
+	}
+
+done:
+	lwsync();
+
+	/* To let the flushes complete faster, the verb byte of each queued
+	 * entry is written only now, after the barrier, so that every
+	 * command body is globally visible before any entry becomes valid.
+	 */
+
+	initial_pi = s->eqcr.pi;
+	for (i = 0; i < sent; i++) {
+		p = qbman_cena_write_start_wo_shadow_fast(&s->sys,
+					QBMAN_CENA_SWP_EQCR((initial_pi) & 7));
+
+		p[0] = cl[0] | s->eqcr.pi_vb;
+		initial_pi++;
+		initial_pi &= 0xF;
+
+		if (!(initial_pi & 7))
+			s->eqcr.pi_vb ^= QB_VALID_BIT;
+	}
+
+	initial_pi = s->eqcr.pi;
+
+	/* We need to flush all the lines but without load/store
+	 * operations between them. Compute start_pointer before the loop
+	 * so the loop does not re-read it from memory.
+	 */
+	start_pointer = (uint64_t)s->sys.addr_cena;
+	for (i = 0; i < sent; i++) {
+		p = (uint32_t *)(start_pointer
+				 + QBMAN_CENA_SWP_EQCR(initial_pi & 7));
+		dcbf((uint64_t)p);
+		initial_pi++;
+		initial_pi &= 0xF;
+	}
+
+	/* Update producer index for the next call */
+	s->eqcr.pi = initial_pi;
+
+	return sent;
+}
+
+int qbman_get_version(void)
+{
+	return qman_version;
+}
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.h b/drivers/bus/fslmc/qbman/qbman_portal.h
new file mode 100644
index 0000000..7aa1d4f
--- /dev/null
+++ b/drivers/bus/fslmc/qbman/qbman_portal.h
@@ -0,0 +1,277 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qbman_private.h"
+#include <fsl_qbman_portal.h>
+
+/* All QBMan command and result structures use this "valid bit" encoding */
+#define QB_VALID_BIT ((uint32_t)0x80)
+
+/* Management command result codes */
+#define QBMAN_MC_RSLT_OK      0xf0
+
+/* QBMan DQRR size is set at runtime in qbman_portal.c */
+
+#define QBMAN_EQCR_SIZE 8
+
+static inline u8 qm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return (2 * ringsize) + last - first;
+}
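+
+/* Worked example: the counters run modulo 2 * ringsize (hence the wrap
+ * case adds 2 * ringsize): for ringsize 8, first = 14 and last = 2 give
+ * a distance of (2 * 8) + 2 - 14 = 4 ring entries.
+ */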
+
+/* --------------------- */
+/* portal data structure */
+/* --------------------- */
+
+struct qbman_swp {
+	struct qbman_swp_desc desc;
+	/* The qbman_sys (ie. arch/OS-specific) support code can put anything it
+	 * needs in here.
+	 */
+	struct qbman_swp_sys sys;
+	/* Management commands */
+	struct {
+#ifdef QBMAN_CHECKING
+		enum swp_mc_check {
+			swp_mc_can_start, /* call __qbman_swp_mc_start() */
+			swp_mc_can_submit, /* call __qbman_swp_mc_submit() */
+			swp_mc_can_poll, /* call __qbman_swp_mc_result() */
+		} check;
+#endif
+		uint32_t valid_bit; /* 0x00 or 0x80 */
+	} mc;
+	/* Push dequeues */
+	uint32_t sdq;
+	/* Volatile dequeues */
+	struct {
+		/* VDQCR supports a "1 deep pipeline", meaning that if you know
+		 * the last-submitted command is already executing in the
+		 * hardware (as evidenced by at least 1 valid dequeue result),
+		 * you can write another dequeue command to the register, the
+		 * hardware will start executing it as soon as the
+		 * already-executing command terminates. (This minimises latency
+		 * and stalls.) With that in mind, this "busy" variable refers
+		 * to whether or not a command can be submitted, not whether or
+		 * not a previously-submitted command is still executing. In
+		 * other words, once proof is seen that the previously-submitted
+		 * command is executing, "vdq" is no longer "busy".
+		 */
+		atomic_t busy;
+		uint32_t valid_bit; /* 0x00 or 0x80 */
+		/* We need to determine when vdq is no longer busy. This depends
+		 * on whether the "busy" (last-submitted) dequeue command is
+		 * targeting DQRR or main-memory, and detection is based on the
+		 * presence of the dequeue command's "token" showing up in
+		 * dequeue entries in DQRR or main-memory (respectively).
+		 */
+		struct qbman_result *storage; /* NULL if DQRR */
+	} vdq;
+	/* DQRR */
+	struct {
+		uint32_t next_idx;
+		uint32_t valid_bit;
+		uint8_t dqrr_size;
+		int reset_bug;
+	} dqrr;
+	struct {
+		uint32_t pi;
+		uint32_t pi_vb;
+		uint32_t ci;
+		int available;
+	} eqcr;
+};
+
+/* -------------------------- */
+/* portal management commands */
+/* -------------------------- */
+
+/* Different management commands all use this common base layer of code to issue
+ * commands and poll for results. The first function returns a pointer to where
+ * the caller should fill in their MC command (though they should ignore the
+ * verb byte), the second function merges in the caller-supplied command verb
+ * (which should not include the valid-bit) and submits the command to
+ * hardware, and the third function checks for a completed response (returns
+ * non-NULL only if the response is complete).
+ */
+void *qbman_swp_mc_start(struct qbman_swp *p);
+void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint32_t cmd_verb);
+void *qbman_swp_mc_result(struct qbman_swp *p);
+
+/* Wraps up submit + poll-for-result */
+static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
+					  uint32_t cmd_verb)
+{
+	int loopvar;
+
+	qbman_swp_mc_submit(swp, cmd, cmd_verb);
+	DBG_POLL_START(loopvar);
+	do {
+		DBG_POLL_CHECK(loopvar);
+		cmd = qbman_swp_mc_result(swp);
+	} while (!cmd);
+	return cmd;
+}
+
+/* ------------ */
+/* qb_attr_code */
+/* ------------ */
+
+/* This struct locates a sub-field within a QBMan portal (CENA) cacheline which
+ * is either serving as a configuration command or a query result. The
+ * representation is inherently little-endian, as the indexing of the words is
+ * itself little-endian in nature and DPAA2 QBMan is little endian for anything
+ * that crosses a word boundary too (64-bit fields are the obvious examples).
+ */
+struct qb_attr_code {
+	unsigned int word; /* which uint32_t[] array member encodes the field */
+	unsigned int lsoffset; /* encoding offset from ls-bit */
+	unsigned int width; /* encoding width. (bool must be 1.) */
+};
+
+/* Some pre-defined codes */
+extern struct qb_attr_code code_generic_verb;
+extern struct qb_attr_code code_generic_rslt;
+
+/* Macros to define codes */
+#define QB_CODE(a, b, c) { a, b, c }
+#define QB_CODE_NULL \
+	QB_CODE((unsigned int)-1, (unsigned int)-1, (unsigned int)-1)
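+
+/* Example: code_generic_rslt above is QB_CODE(0, 8, 8), i.e. the 8-bit
+ * result field occupying bits 8-15 of word 0 of the cacheline, so
+ * decoding it is equivalent to (cacheline[0] >> 8) & 0xff.
+ */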
+
+/* Rotate a code "ms", meaning that it moves from less-significant bytes to
+ * more-significant, from less-significant words to more-significant, etc. The
+ * "ls" version does the inverse, from more-significant towards
+ * less-significant.
+ */
+static inline void qb_attr_code_rotate_ms(struct qb_attr_code *code,
+					  unsigned int bits)
+{
+	code->lsoffset += bits;
+	while (code->lsoffset > 31) {
+		code->word++;
+		code->lsoffset -= 32;
+	}
+}
+
+static inline void qb_attr_code_rotate_ls(struct qb_attr_code *code,
+					  unsigned int bits)
+{
+	/* This relies on the types being unsigned: when the rotation takes
+	 * lsoffset below zero, it wraps to a very large unsigned value
+	 * (starting with lots of "F"s), so the while loop can keep adding 32
+	 * back (and stepping the word index down) until lsoffset lands in
+	 * the 0..31 range again.
+	 */
+	code->lsoffset -= bits;
+	while (code->lsoffset > 31) {
+		code->word--;
+		code->lsoffset += 32;
+	}
+}
+
+/* Implement a loop of code rotations until 'expr' evaluates to FALSE (0). */
+#define qb_attr_code_for_ms(code, bits, expr) \
+		for (; expr; qb_attr_code_rotate_ms(code, bits))
+#define qb_attr_code_for_ls(code, bits, expr) \
+		for (; expr; qb_attr_code_rotate_ls(code, bits))
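+
+/* Illustrative sketch (editor's addition, not from the original patch):
+ * step a one-byte code across the four bytes of word 0 with the "ms"
+ * rotation loop; after four steps it lands on byte 0 of word 1.
+ */
+static inline struct qb_attr_code qb_attr_code_rotate_example(void)
+{
+	struct qb_attr_code code = QB_CODE(0, 0, 8);
+	int i = 0;
+
+	qb_attr_code_for_ms(&code, 8, i < 4)
+		i++;
+	return code; /* now QB_CODE(1, 0, 8) */
+}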
+
+/* decode a field from a cacheline */
+static inline uint32_t qb_attr_code_decode(const struct qb_attr_code *code,
+					   const uint32_t *cacheline)
+{
+	return d32_uint32_t(code->lsoffset, code->width, cacheline[code->word]);
+}
+
+/* For 64-bit fields the word index selects the containing 64-bit word;
+ * lsoffset and width are not consulted (they are implicitly 0 and 64).
+ */
+static inline uint64_t qb_attr_code_decode_64(const struct qb_attr_code *code,
+					      const uint64_t *cacheline)
+{
+	return cacheline[code->word / 2];
+}
+
+/* encode a field to a cacheline */
+static inline void qb_attr_code_encode(const struct qb_attr_code *code,
+				       uint32_t *cacheline, uint32_t val)
+{
+	cacheline[code->word] =
+		r32_uint32_t(code->lsoffset, code->width, cacheline[code->word])
+		| e32_uint32_t(code->lsoffset, code->width, val);
+}
+
+static inline void qb_attr_code_encode_64(const struct qb_attr_code *code,
+					  uint64_t *cacheline, uint64_t val)
+{
+	cacheline[code->word / 2] = val;
+}
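+
+/* Illustrative sketch (editor's addition, not from the original patch):
+ * encode then decode a hypothetical 4-bit field at bits 8..11 of word 1
+ * of a scratch cacheline.
+ */
+static inline uint32_t qb_attr_code_codec_example(void)
+{
+	struct qb_attr_code code = QB_CODE(1, 8, 4);
+	uint32_t cl[16] = {0};
+
+	qb_attr_code_encode(&code, cl, 0xa);
+	return qb_attr_code_decode(&code, cl); /* returns 0xa */
+}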
+
+/* Small-width signed values (two's-complement) will decode into medium-width
+ * positives. (Eg. for an 8-bit signed field, which stores values from -128 to
+ * +127, a setting of -7 would appear to decode to the 32-bit unsigned value
+ * 249. Likewise -120 would decode as 136.) This function allows the caller to
+ * "re-sign" such fields to 32-bit signed. (Eg. -7, which was 249 with an 8-bit
+ * encoding, will become 0xfffffff9 if you cast the return value to uint32_t).
+ */
+static inline int32_t qb_attr_code_makesigned(const struct qb_attr_code *code,
+					      uint32_t val)
+{
+	QBMAN_BUG_ON(val >= (1u << code->width));
+	/* code->width should never exceed the width of val. If it does then a
+	 * different function with larger val size must be used to translate
+	 * from unsigned to signed
+	 */
+	QBMAN_BUG_ON(code->width > sizeof(val) * CHAR_BIT);
+	/* If the high bit was set, it was encoding a negative */
+	if (val >= 1u << (code->width - 1))
+		return (int32_t)0 - (int32_t)(((uint32_t)1 << code->width) -
+			val);
+	/* Otherwise, it was encoding a positive */
+	return (int32_t)val;
+}
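+
+/* Illustrative sketch (editor's addition, not from the original patch):
+ * re-signing the 8-bit example from the comment above.
+ */
+static inline int32_t qb_attr_code_makesigned_example(void)
+{
+	struct qb_attr_code code = QB_CODE(0, 0, 8);
+
+	return qb_attr_code_makesigned(&code, 249); /* returns -7 */
+}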
+
+/* ---------------------- */
+/* Descriptors/cachelines */
+/* ---------------------- */
+
+/* To avoid needless dynamic allocation, the driver API often gives the caller
+ * a "descriptor" type that the caller can instantiate however they like.
+ * Ultimately though, it is just a cacheline of binary storage (or something
+ * smaller when it is known that the descriptor doesn't need all 64 bytes) for
+ * holding pre-formatted pieces of hardware commands. The performance-critical
+ * code can then copy these descriptors directly into hardware command
+ * registers more efficiently than trying to construct/format commands
+ * on-the-fly. The API user sees the descriptor as an array of 32-bit words in
+ * order for the compiler to know its size, but the internal details are not
+ * exposed. The following macro is used within the driver for converting *any*
+ * descriptor pointer to a usable array pointer. The use of a macro (instead of
+ * an inline) is necessary to work with different descriptor types and to work
+ * correctly with const and non-const inputs (and similarly-qualified outputs).
+ */
+#define qb_cl(d) (&(d)->dont_manipulate_directly[0])
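+
+/* Illustrative sketch (editor's addition, not from the original patch):
+ * the struct below is hypothetical, but it has the shape shared by the
+ * real descriptor types, so qb_cl() yields its raw word array.
+ */
+struct qbman_example_desc {
+	uint32_t dont_manipulate_directly[16]; /* one 64-byte cacheline */
+};
+
+static inline uint32_t *qb_cl_example(struct qbman_example_desc *d)
+{
+	return qb_cl(d);
+}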
diff --git a/drivers/bus/fslmc/qbman/qbman_private.h b/drivers/bus/fslmc/qbman/qbman_private.h
new file mode 100644
index 0000000..f5fa13d
--- /dev/null
+++ b/drivers/bus/fslmc/qbman/qbman_private.h
@@ -0,0 +1,170 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* Perform extra checking */
+#define QBMAN_CHECKING
+
+/* To maximise the amount of logic that is common between the Linux driver and
+ * other targets (such as the embedded MC firmware), we pivot here between the
+ * inclusion of two platform-specific headers.
+ *
+ * The first, qbman_sys_decl.h, includes any and all required system headers as
+ * well as providing any definitions for the purposes of compatibility. The
+ * second, qbman_sys.h, is where platform-specific routines go.
+ *
+ * The point of the split is that the platform-independent code (including this
+ * header) may depend on platform-specific declarations, yet other
+ * platform-specific routines may depend on platform-independent definitions.
+ */
+
+#include "qbman_sys_decl.h"
+
+/* When things go wrong, it is a convenient trick to insert a few FOO()
+ * statements in the code to trace progress. TODO: remove this once we are
+ * hacking the code less actively.
+ */
+#define FOO() fsl_os_print("FOO: %s:%d\n", __FILE__, __LINE__)
+
+/* Any time there is a register interface which we poll on, this provides a
+ * "break after x iterations" scheme for it. It's handy for debugging, eg.
+ * where you don't want millions of lines of log output from a polling loop
+ * that won't terminate, because such things tend to drown out the earlier
+ * log output that might explain what caused the problem. (NB: put ";" after
+ * each macro!)
+ * TODO: we should probably remove this once we're done sanitising the
+ * simulator...
+ */
+#define DBG_POLL_START(loopvar) (loopvar = 10)
+#define DBG_POLL_CHECK(loopvar) \
+do { \
+	if (!(loopvar--)) \
+		QBMAN_BUG_ON(NULL != "DBG_POLL_CHECK"); \
+} while (0)
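+
+/* Illustrative sketch (editor's addition, not from the original patch):
+ * a bounded polling loop built from the two macros above; hw_done is a
+ * hypothetical completion test supplied by the caller.
+ */
+static inline void dbg_poll_example(int (*hw_done)(void))
+{
+	int loopvar;
+
+	DBG_POLL_START(loopvar);
+	do {
+		DBG_POLL_CHECK(loopvar);
+	} while (!hw_done());
+}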
+
+/* For CCSR or portal-CINH registers that contain fields at arbitrary offsets
+ * and widths, these macro-generated encode/decode/isolate/remove inlines can
+ * be used.
+ *
+ * Eg. to "d"ecode a 14-bit field out of a register (into a "uint16_t" type),
+ * where the field is located 3 bits "up" from the least-significant bit of the
+ * register (ie. the field location within the 32-bit register corresponds to a
+ * mask of 0x0001fff8), you would do:
+ *                uint16_t field = d32_uint16_t(3, 14, reg_value);
+ *
+ * Or to "e"ncode a 1-bit boolean value (input type is "int", zero is FALSE,
+ * non-zero is TRUE, so must convert all non-zero inputs to 1, hence the "!!"
+ * operator) into a register at bit location 0x00080000 (19 bits "in" from the
+ * LS bit), do:
+ *                reg_value |= e32_int(19, 1, !!field);
+ *
+ * If you wish to read-modify-write a register, such that you leave the 14-bit
+ * field as-is but have all other fields set to zero, then "i"solate the 14-bit
+ * value using:
+ *                reg_value = i32_uint16_t(3, 14, reg_value);
+ *
+ * Alternatively, you could "r"emove the 1-bit boolean field (setting it to
+ * zero) but leave all other fields as-is:
+ *                reg_value = r32_int(19, 1, reg_value);
+ */
+#define MAKE_MASK32(width) (width == 32 ? 0xffffffff : \
+				 (uint32_t)((1u << width) - 1))
+#define DECLARE_CODEC32(t) \
+static inline uint32_t e32_##t(uint32_t lsoffset, uint32_t width, t val) \
+{ \
+	QBMAN_BUG_ON(width > (sizeof(t) * 8)); \
+	return ((uint32_t)val & MAKE_MASK32(width)) << lsoffset; \
+} \
+static inline t d32_##t(uint32_t lsoffset, uint32_t width, uint32_t val) \
+{ \
+	QBMAN_BUG_ON(width > (sizeof(t) * 8)); \
+	return (t)((val >> lsoffset) & MAKE_MASK32(width)); \
+} \
+static inline uint32_t i32_##t(uint32_t lsoffset, uint32_t width, \
+				uint32_t val) \
+{ \
+	QBMAN_BUG_ON(width > (sizeof(t) * 8)); \
+	return e32_##t(lsoffset, width, d32_##t(lsoffset, width, val)); \
+} \
+static inline uint32_t r32_##t(uint32_t lsoffset, uint32_t width, \
+				uint32_t val) \
+{ \
+	QBMAN_BUG_ON(width > (sizeof(t) * 8)); \
+	return ~(MAKE_MASK32(width) << lsoffset) & val; \
+}
+DECLARE_CODEC32(uint32_t)
+DECLARE_CODEC32(uint16_t)
+DECLARE_CODEC32(uint8_t)
+DECLARE_CODEC32(int)
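+
+/* Illustrative sketch (editor's addition, not from the original patch):
+ * the four generated helpers applied to the 14-bit field at bit 3 used
+ * in the comment above.
+ */
+static inline uint32_t codec32_example(void)
+{
+	uint32_t reg = 0;
+	uint16_t field;
+
+	reg |= e32_uint16_t(3, 14, 0x1234);	/* encode */
+	field = d32_uint16_t(3, 14, reg);	/* decode: field == 0x1234 */
+	reg = i32_uint16_t(3, 14, reg);		/* isolate: field bits only */
+	reg = r32_uint16_t(3, 14, reg);		/* remove: field cleared */
+	(void)field;
+	return reg; /* 0: the only set bits were the field's */
+}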
+
+	/*********************/
+	/* Debugging assists */
+	/*********************/
+
+/* Print a hex dump of [p, p + sz), 16 bytes per row, padding bytes outside
+ * that range with "..". start/end are the enclosing 16-byte-aligned bounds.
+ */
+static inline void __hexdump(unsigned long start, unsigned long end,
+			     unsigned long p, size_t sz, const unsigned char *c)
+{
+	while (start < end) {
+		unsigned int pos = 0;
+		char buf[64];
+		int nl = 0;
+
+		pos += sprintf(buf + pos, "%08lx: ", start);
+		do {
+			if ((start < p) || (start >= (p + sz)))
+				pos += sprintf(buf + pos, "..");
+			else
+				pos += sprintf(buf + pos, "%02x", *(c++));
+			if (!(++start & 15)) {
+				buf[pos++] = '\n';
+				nl = 1;
+			} else {
+				nl = 0;
+				if (!(start & 1))
+					buf[pos++] = ' ';
+				if (!(start & 3))
+					buf[pos++] = ' ';
+			}
+		} while (start & 15);
+		if (!nl)
+			buf[pos++] = '\n';
+		buf[pos] = '\0';
+		pr_info("%s", buf);
+	}
+}
+
+static inline void hexdump(const void *ptr, size_t sz)
+{
+	unsigned long p = (unsigned long)ptr;
+	unsigned long start = p & ~(unsigned long)15;
+	unsigned long end = (p + sz + 15) & ~(unsigned long)15;
+	const unsigned char *c = ptr;
+
+	__hexdump(start, end, p, sz, c);
+}
+
+#include "qbman_sys.h"
diff --git a/drivers/bus/fslmc/qbman/qbman_sys.h b/drivers/bus/fslmc/qbman/qbman_sys.h
new file mode 100644
index 0000000..5dbcaa5
--- /dev/null
+++ b/drivers/bus/fslmc/qbman/qbman_sys.h
@@ -0,0 +1,385 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* qbman_sys_decl.h and qbman_sys.h are the two platform-specific files in the
+ * driver. They are only included via qbman_private.h, which is itself a
+ * platform-independent file and is included by all the other driver source.
+ *
+ * qbman_sys_decl.h is included prior to all other declarations and logic, and
+ * it exists to provide compatibility with any linux interfaces our
+ * single-source driver code is dependent on (eg. kmalloc). Ie. this file
+ * provides linux compatibility.
+ *
+ * This qbman_sys.h header, on the other hand, is included *after* any common
+ * and platform-neutral declarations and logic in qbman_private.h, and exists to
+ * implement any platform-specific logic of the qbman driver itself. Ie. it is
+ * *not* to provide linux compatibility.
+ */
+
+/* Trace the 3 different classes of read/write access to QBMan. #undef as
+ * required.
+ */
+#undef QBMAN_CCSR_TRACE
+#undef QBMAN_CINH_TRACE
+#undef QBMAN_CENA_TRACE
+
+static inline void word_copy(void *d, const void *s, unsigned int cnt)
+{
+	uint32_t *dd = d;
+	const uint32_t *ss = s;
+
+	while (cnt--)
+		*(dd++) = *(ss++);
+}
+
+/* Currently, the CENA support code expects each 32-bit word to be written in
+ * host order, and these are converted to hardware (little-endian) order on
+ * command submission. However, 64-bit quantities must be written (and read)
+ * as two 32-bit words with the least-significant word first, irrespective of
+ * host endianness.
+ */
+static inline void u64_to_le32_copy(void *d, const uint64_t *s,
+				    unsigned int cnt)
+{
+	uint32_t *dd = d;
+	const uint32_t *ss = (const uint32_t *)s;
+
+	while (cnt--) {
+		/* TBD: the toolchain was choking on the use of 64-bit types up
+		 * until recently so this works entirely with 32-bit variables.
+		 * When 64-bit types become usable again, investigate better
+		 * ways of doing this.
+		 */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		*(dd++) = ss[1];
+		*(dd++) = ss[0];
+		ss += 2;
+#else
+		*(dd++) = *(ss++);
+		*(dd++) = *(ss++);
+#endif
+	}
+}
+
+static inline void u64_from_le32_copy(uint64_t *d, const void *s,
+				      unsigned int cnt)
+{
+	const uint32_t *ss = s;
+	uint32_t *dd = (uint32_t *)d;
+
+	while (cnt--) {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		dd[1] = *(ss++);
+		dd[0] = *(ss++);
+		dd += 2;
+#else
+		*(dd++) = *(ss++);
+		*(dd++) = *(ss++);
+#endif
+	}
+}
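+
+/* Illustrative sketch (editor's addition, not from the original patch):
+ * a 64-bit value lands in the cacheline least-significant word first on
+ * both endiannesses.
+ */
+static inline void u64_copy_example(void)
+{
+	uint64_t addr = 0x1122334455667788ULL;
+	uint32_t cl[2];
+
+	u64_to_le32_copy(cl, &addr, 1);
+	/* cl[0] == 0x55667788, cl[1] == 0x11223344, either way */
+}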
+
+/* Convert a host-native 32bit value into little endian */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+static inline uint32_t make_le32(uint32_t val)
+{
+	return ((val & 0xff) << 24) | ((val & 0xff00) << 8) |
+		((val & 0xff0000) >> 8) | ((val & 0xff000000) >> 24);
+}
+
+static inline uint32_t make_le24(uint32_t val)
+{
+	return (((val & 0xff) << 16) | (val & 0xff00) |
+		((val & 0xff0000) >> 16));
+}
+
+static inline void make_le32_n(uint32_t *val, unsigned int num)
+{
+	while (num--) {
+		*val = make_le32(*val);
+		val++;
+	}
+}
+
+#else
+#define make_le32(val) (val)
+#define make_le24(val) (val)
+#define make_le32_n(val, len) do {} while (0)
+#endif
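+
+/* Illustrative sketch (editor's addition, not from the original patch):
+ * make_le32() byte-swaps on big-endian hosts and is the identity on
+ * little-endian ones.
+ */
+static inline uint32_t make_le32_example(void)
+{
+	return make_le32(0x12345678);
+	/* 0x78563412 on big-endian hosts, 0x12345678 otherwise */
+}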
+
+	/******************/
+	/* Portal access  */
+	/******************/
+struct qbman_swp_sys {
+	/* On GPP, the sys support for qbman_swp is here. The CENA region is
+	 * not an mmap() of the real portal registers, but an allocated
+	 * place-holder, because the actual writes/reads to/from the portal are
+	 * marshalled from these allocated areas using QBMan's "MC access
+	 * registers". CINH accesses are atomic so there's no need for a
+	 * place-holder.
+	 */
+	uint8_t *cena;
+	uint8_t __iomem *addr_cena;
+	uint8_t __iomem *addr_cinh;
+	uint32_t idx;
+	enum qbman_eqcr_mode eqcr_mode;
+};
+
+/* P_OFFSET is (ACCESS_CMD,0,12) - offset within the portal
+ * C is (ACCESS_CMD,12,1) - is inhibited? (0==CENA, 1==CINH)
+ * SWP_IDX is (ACCESS_CMD,16,10) - Software portal index
+ * P is (ACCESS_CMD,28,1) - (0==special portal, 1==any portal)
+ * T is (ACCESS_CMD,29,1) - Command type (0==READ, 1==WRITE)
+ * E is (ACCESS_CMD,31,1) - Command execute (1 to issue, poll for 0==complete)
+ */
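+
+/* Illustrative sketch (editor's addition, not from the original patch):
+ * composing an MC access command word from the field layout above with
+ * the e32_* helpers; where the word is ultimately written is outside the
+ * scope of this header.
+ */
+static inline uint32_t access_cmd_example(uint16_t p_offset,
+					  uint16_t swp_idx, int is_write)
+{
+	return e32_uint16_t(0, 12, p_offset) |	/* P_OFFSET */
+		e32_int(12, 1, 0) |		/* C: CENA access */
+		e32_uint16_t(16, 10, swp_idx) |	/* SWP_IDX */
+		e32_int(28, 1, 1) |		/* P: any portal */
+		e32_int(29, 1, !!is_write) |	/* T: READ/WRITE */
+		e32_int(31, 1, 1);		/* E: execute */
+}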
+
+static inline void qbman_cinh_write(struct qbman_swp_sys *s, uint32_t offset,
+				    uint32_t val)
+{
+	__raw_writel(val, s->addr_cinh + offset);
+#ifdef QBMAN_CINH_TRACE
+	pr_info("qbman_cinh_write(%p:%d:0x%03x) 0x%08x\n",
+		s->addr_cinh, s->idx, offset, val);
+#endif
+}
+
+static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
+{
+	uint32_t reg = __raw_readl(s->addr_cinh + offset);
+#ifdef QBMAN_CINH_TRACE
+	pr_info("qbman_cinh_read(%p:%d:0x%03x) 0x%08x\n",
+		s->addr_cinh, s->idx, offset, reg);
+#endif
+	return reg;
+}
+
+static inline void *qbman_cena_write_start(struct qbman_swp_sys *s,
+					   uint32_t offset)
+{
+	void *shadow = s->cena + offset;
+
+#ifdef QBMAN_CENA_TRACE
+	pr_info("qbman_cena_write_start(%p:%d:0x%03x) %p\n",
+		s->addr_cena, s->idx, offset, shadow);
+#endif
+	QBMAN_BUG_ON(offset & 63);
+	dcbz(shadow);
+	return shadow;
+}
+
+static inline void *qbman_cena_write_start_wo_shadow(struct qbman_swp_sys *s,
+						     uint32_t offset)
+{
+#ifdef QBMAN_CENA_TRACE
+	pr_info("qbman_cena_write_start(%p:%d:0x%03x)\n",
+		s->addr_cena, s->idx, offset);
+#endif
+	QBMAN_BUG_ON(offset & 63);
+	return (s->addr_cena + offset);
+}
+
+static inline void qbman_cena_write_complete(struct qbman_swp_sys *s,
+					     uint32_t offset, void *cmd)
+{
+	const uint32_t *shadow = cmd;
+	int loop;
+#ifdef QBMAN_CENA_TRACE
+	pr_info("qbman_cena_write_complete(%p:%d:0x%03x) %p\n",
+		s->addr_cena, s->idx, offset, shadow);
+	hexdump(cmd, 64);
+#endif
+	/* Write words 15..1 first, then barrier, then word 0: word 0 carries
+	 * the verb/valid-bit, so it must become visible to hardware last.
+	 */
+	for (loop = 15; loop >= 1; loop--)
+		__raw_writel(shadow[loop], s->addr_cena +
+					 offset + loop * 4);
+	lwsync();
+	__raw_writel(shadow[0], s->addr_cena + offset);
+	dcbf(s->addr_cena + offset);
+}
+
+static inline void qbman_cena_write_complete_wo_shadow(struct qbman_swp_sys *s,
+						       uint32_t offset)
+{
+#ifdef QBMAN_CENA_TRACE
+	pr_info("qbman_cena_write_complete(%p:%d:0x%03x)\n",
+		s->addr_cena, s->idx, offset);
+#endif
+	dcbf(s->addr_cena + offset);
+}
+
+static inline uint32_t qbman_cena_read_reg(struct qbman_swp_sys *s,
+					   uint32_t offset)
+{
+	return __raw_readl(s->addr_cena + offset);
+}
+
+static inline void *qbman_cena_read(struct qbman_swp_sys *s, uint32_t offset)
+{
+	uint32_t *shadow = (uint32_t *)(s->cena + offset);
+	unsigned int loop;
+#ifdef QBMAN_CENA_TRACE
+	pr_info("qbman_cena_read(%p:%d:0x%03x) %p\n",
+		s->addr_cena, s->idx, offset, shadow);
+#endif
+
+	for (loop = 0; loop < 16; loop++)
+		shadow[loop] = __raw_readl(s->addr_cena + offset
+					+ loop * 4);
+#ifdef QBMAN_CENA_TRACE
+	hexdump(shadow, 64);
+#endif
+	return shadow;
+}
+
+static inline void *qbman_cena_read_wo_shadow(struct qbman_swp_sys *s,
+					      uint32_t offset)
+{
+#ifdef QBMAN_CENA_TRACE
+	pr_info("qbman_cena_read(%p:%d:0x%03x)\n",
+		s->addr_cena, s->idx, offset);
+#endif
+	return s->addr_cena + offset;
+}
+
+static inline void qbman_cena_invalidate(struct qbman_swp_sys *s,
+					 uint32_t offset)
+{
+	dccivac(s->addr_cena + offset);
+}
+
+static inline void qbman_cena_invalidate_prefetch(struct qbman_swp_sys *s,
+						  uint32_t offset)
+{
+	dccivac(s->addr_cena + offset);
+	prefetch_for_load(s->addr_cena + offset);
+}
+
+static inline void qbman_cena_prefetch(struct qbman_swp_sys *s,
+				       uint32_t offset)
+{
+	prefetch_for_load(s->addr_cena + offset);
+}
+
+	/******************/
+	/* Portal support */
+	/******************/
+
+/* The SWP_CFG portal register is special, in that it is used by the
+ * platform-specific code rather than the platform-independent code in
+ * qbman_portal.c. So use of it is declared locally here.
+ */
+#define QBMAN_CINH_SWP_CFG   0xd00
+
+/* For MC portal use, we always configure with
+ * DQRR_MF is (SWP_CFG,20,3) - DQRR max fill (<- 0x4)
+ * EST is (SWP_CFG,16,3) - EQCR_CI stashing threshold (<- 0x2)
+ * RPM is (SWP_CFG,12,2) - RCR production notification mode (<- 0x3)
+ * DCM is (SWP_CFG,10,2) - DQRR consumption notification mode (<- 0x2)
+ * EPM is (SWP_CFG,8,2) - EQCR production notification mode (<- 0x2)
+ * SD is (SWP_CFG,5,1) - memory stashing drop enable (<- TRUE)
+ * SP is (SWP_CFG,4,1) - memory stashing priority (<- TRUE)
+ * SE is (SWP_CFG,3,1) - memory stashing enable (<- TRUE)
+ * DP is (SWP_CFG,2,1) - dequeue stashing priority (<- TRUE)
+ * DE is (SWP_CFG,1,1) - dequeue stashing enable (<- TRUE)
+ * EP is (SWP_CFG,0,1) - EQCR_CI stashing priority (<- TRUE)
+ */
+static inline uint32_t qbman_set_swp_cfg(uint8_t max_fill, uint8_t wn,
+					 uint8_t est, uint8_t rpm, uint8_t dcm,
+					uint8_t epm, int sd, int sp, int se,
+					int dp, int de, int ep)
+{
+	uint32_t reg;
+
+	reg = e32_uint8_t(20, (uint32_t)(3 + (max_fill >> 3)), max_fill) |
+		e32_uint8_t(16, 3, est) |
+		e32_uint8_t(12, 2, rpm) | e32_uint8_t(10, 2, dcm) |
+		e32_uint8_t(8, 2, epm) | e32_int(5, 1, sd) |
+		e32_int(4, 1, sp) | e32_int(3, 1, se) | e32_int(2, 1, dp) |
+		e32_int(1, 1, de) | e32_int(0, 1, ep) | e32_uint8_t(14, 1, wn);
+	return reg;
+}
+
+static inline int qbman_swp_sys_init(struct qbman_swp_sys *s,
+				     const struct qbman_swp_desc *d,
+				     uint8_t dqrr_size)
+{
+	uint32_t reg;
+
+	s->addr_cena = d->cena_bar;
+	s->addr_cinh = d->cinh_bar;
+	s->idx = (uint32_t)d->idx;
+	s->cena = (void *)get_zeroed_page(GFP_KERNEL);
+	if (!s->cena) {
+		pr_err("Could not allocate page for cena shadow\n");
+		return -1;
+	}
+	s->eqcr_mode = d->eqcr_mode;
+	QBMAN_BUG_ON(d->idx < 0);
+#ifdef QBMAN_CHECKING
+	/* We should never be asked to initialise for a portal that isn't in
+	 * the power-on state. (Ie. don't forget to reset portals when they are
+	 * decommissioned!)
+	 */
+	reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+	QBMAN_BUG_ON(reg);
+#endif
+	if (s->eqcr_mode == qman_eqcr_vb_array)
+		reg = qbman_set_swp_cfg(dqrr_size, 0, 0, 3, 2, 3, 1, 1, 1, 1,
+					1, 1);
+	else
+		reg = qbman_set_swp_cfg(dqrr_size, 0, 2, 3, 2, 2, 1, 1, 1, 1,
+					1, 1);
+	qbman_cinh_write(s, QBMAN_CINH_SWP_CFG, reg);
+	reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+	if (!reg) {
+		pr_err("The portal %d is not enabled!\n", s->idx);
+		free_page((unsigned long)s->cena);
+		return -1;
+	}
+	return 0;
+}
+
+static inline void qbman_swp_sys_finish(struct qbman_swp_sys *s)
+{
+	free_page((unsigned long)s->cena);
+}
+
+static inline void *
+qbman_cena_write_start_wo_shadow_fast(struct qbman_swp_sys *s,
+				      uint32_t offset)
+{
+#ifdef QBMAN_CENA_TRACE
+	pr_info("qbman_cena_write_start(%p:%d:0x%03x)\n",
+		s->addr_cena, s->idx, offset);
+#endif
+	QBMAN_BUG_ON(offset & 63);
+	return (s->addr_cena + offset);
+}
diff --git a/drivers/bus/fslmc/qbman/qbman_sys_decl.h b/drivers/bus/fslmc/qbman/qbman_sys_decl.h
new file mode 100644
index 0000000..e52f5ed
--- /dev/null
+++ b/drivers/bus/fslmc/qbman/qbman_sys_decl.h
@@ -0,0 +1,73 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <compat.h>
+#include <fsl_qbman_base.h>
+
+/* Sanity check */
+#if (__BYTE_ORDER__ != __ORDER_BIG_ENDIAN__) && \
+	(__BYTE_ORDER__ != __ORDER_LITTLE_ENDIAN__)
+#error "Unknown endianness!"
+#endif
+
+/* The platform-independent code shouldn't need endianness, except for
+ * weird/fast-path cases like qbman_result_has_token(), which needs to
+ * perform a passive and endianness-specific test on a read-only data structure
+ * very quickly. It's an exception, and this symbol is used for that case.
+ */
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define DQRR_TOK_OFFSET 0
+#define QBMAN_RESULT_VERB_OFFSET_IN_MEM 24
+#define SCN_STATE_OFFSET_IN_MEM 8
+#define SCN_RID_OFFSET_IN_MEM 8
+#else
+#define DQRR_TOK_OFFSET 24
+#define QBMAN_RESULT_VERB_OFFSET_IN_MEM 0
+#define SCN_STATE_OFFSET_IN_MEM 16
+#define SCN_RID_OFFSET_IN_MEM 0
+#endif
+
+/* Similarly-named functions */
+#define upper32(a) upper_32_bits(a)
+#define lower32(a) lower_32_bits(a)
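+
+/* Illustrative sketch (editor's addition, not from the original patch):
+ * splitting a 64-bit value with the aliases above.
+ */
+static inline void split64_example(void)
+{
+	uint64_t a = 0x1122334455667788ULL;
+	uint32_t hi = upper32(a); /* 0x11223344 */
+	uint32_t lo = lower32(a); /* 0x55667788 */
+
+	(void)hi;
+	(void)lo;
+}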
+
+	/****************/
+	/* arch assists */
+	/****************/
+#define dcbz(p) { asm volatile("dc zva, %0" : : "r" (p) : "memory"); }
+#define lwsync() { asm volatile("dmb st" : : : "memory"); }
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dccivac(p) { asm volatile("dc civac, %0" : : "r"(p) : "memory"); }
+static inline void prefetch_for_load(void *p)
+{
+	asm volatile("prfm pldl1keep, [%0, #64]" : : "r" (p));
+}
+
+static inline void prefetch_for_store(void *p)
+{
+	asm volatile("prfm pstl1keep, [%0, #64]" : : "r" (p));
+}
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 41c80d9..95c1804 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -1,6 +1,25 @@
 DPDK_17.05 {
 	global:
 
+	qbman_check_command_complete;
+	qbman_eq_desc_clear;
+	qbman_eq_desc_set_no_orp;
+	qbman_eq_desc_set_qd;
+	qbman_eq_desc_set_response;
+	qbman_get_version;
+	qbman_pull_desc_clear;
+	qbman_pull_desc_set_fq;
+	qbman_pull_desc_set_numframes;
+	qbman_pull_desc_set_storage;
+	qbman_release_desc_clear;
+	qbman_release_desc_set_bpid;
+	qbman_result_DQ_fd;
+	qbman_result_DQ_flags;
+	qbman_result_has_new_result;
+	qbman_swp_acquire;
+	qbman_swp_pull;
+	qbman_swp_release;
+	qbman_swp_send_multiple;
 	rte_fslmc_driver_register;
 	rte_fslmc_driver_unregister;
 
-- 
1.9.1
