From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: <dev@dpdk.org>
Cc: <ferruh.yigit@intel.com>, <hemant.agrawal@nxp.com>
Subject: [PATCH 12/38] bus/dpaa: add QMan driver core routines
Date: Fri, 16 Jun 2017 11:10:42 +0530
Message-ID: <1497591668-3320-13-git-send-email-shreyansh.jain@nxp.com>
In-Reply-To: <1497591668-3320-1-git-send-email-shreyansh.jain@nxp.com>

Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/bus/dpaa/Makefile                 |    2 +
 drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |   88 ++
 drivers/bus/dpaa/base/qbman/qman.c        | 2402 +++++++++++++++++++++++++++++
 drivers/bus/dpaa/base/qbman/qman.h        |  888 +++++++++++
 drivers/bus/dpaa/base/qbman/qman_driver.c |   12 +
 drivers/bus/dpaa/base/qbman/qman_priv.h   |   11 -
 drivers/bus/dpaa/include/fsl_qman.h       |  767 ++++++++-
 drivers/bus/dpaa/include/fsl_usd.h        |    1 +
 8 files changed, 4148 insertions(+), 23 deletions(-)
 create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.c
 create mode 100644 drivers/bus/dpaa/base/qbman/qman.h

diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile
index f1120bd..ad68828 100644
--- a/drivers/bus/dpaa/Makefile
+++ b/drivers/bus/dpaa/Makefile
@@ -71,7 +71,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \
 	base/fman/of.c \
 	base/fman/netcfg_layer.c \
 	base/qbman/process.c \
+	base/qbman/qman.c \
 	base/qbman/qman_driver.c \
+	base/qbman/dpaa_alloc.c \
 	base/qbman/dpaa_sys.c
 
 # Link Pthread
diff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
new file mode 100644
index 0000000..690576a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c
@@ -0,0 +1,88 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2009-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "dpaa_sys.h"
+#include <process.h>
+#include <fsl_qman.h>
+
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_fqid, result, count, align, partial);
+}
+
+void qman_release_fqid_range(u32 fqid, u32 count)
+{
+	process_release(dpaa_id_fqid, fqid, count);
+}
+
+int qman_reserve_fqid_range(u32 fqid, unsigned int count)
+{
+	return process_reserve(dpaa_id_fqid, fqid, count);
+}
+
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_qpool, result, count, align, partial);
+}
+
+void qman_release_pool_range(u32 pool, u32 count)
+{
+	process_release(dpaa_id_qpool, pool, count);
+}
+
+int qman_reserve_pool_range(u32 pool, u32 count)
+{
+	return process_reserve(dpaa_id_qpool, pool, count);
+}
+
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial)
+{
+	return process_alloc(dpaa_id_cgrid, result, count, align, partial);
+}
+
+void qman_release_cgrid_range(u32 cgrid, u32 count)
+{
+	process_release(dpaa_id_cgrid, cgrid, count);
+}
+
+int qman_reserve_cgrid_range(u32 cgrid, u32 count)
+{
+	return process_reserve(dpaa_id_cgrid, cgrid, count);
+}
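+
+/*
+ * Usage sketch (illustrative only): callers normally go through the
+ * single-ID wrappers in fsl_qman.h, which build on the range APIs above
+ * with count = 1:
+ *
+ *	u32 fqid;
+ *
+ *	if (qman_alloc_fqid(&fqid))
+ *		return -ENOMEM;
+ *	// ... use fqid, e.g. with qman_create_fq() ...
+ *	qman_release_fqid(fqid);
+ */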
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
new file mode 100644
index 0000000..d46e96a
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -0,0 +1,2402 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman.h"
+#include <rte_branch_prediction.h>
+
+/* Compilation constants */
+#define DQRR_MAXFILL	15
+#define EQCR_ITHRESH	4	/* if EQCR congests, interrupt threshold */
+#define IRQNAME		"QMan portal %d"
+#define MAX_IRQNAME	16	/* big enough for "QMan portal %d" */
+/* maximum number of DQRR entries to process in qman_poll() */
+#define FSL_QMAN_POLL_LIMIT 8
+
+/* Lock/unlock frame queues, subject to the "LOCKED" flag. This is about
+ * inter-processor locking only. Note, FQLOCK() is always called either under a
+ * local_irq_save() or from interrupt context - hence there's no need for irq
+ * protection (and indeed, attempting to nest irq-protection doesn't work, as
+ * the "irq en/disable" machinery isn't recursive...).
+ */
+#define FQLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_lock(&__fq478->fqlock); \
+	} while (0)
+#define FQUNLOCK(fq) \
+	do { \
+		struct qman_fq *__fq478 = (fq); \
+		if (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \
+			spin_unlock(&__fq478->fqlock); \
+	} while (0)
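+
+/*
+ * Illustrative note: the spinlock is only taken for FQs created with
+ * QMAN_FQ_FLAG_LOCKED; for any other FQ both macros reduce to a flag
+ * test. A sketch of the opt-in:
+ *
+ *	qman_create_fq(fqid, QMAN_FQ_FLAG_LOCKED, &fq);
+ *	...
+ *	FQLOCK(&fq);	// takes fq.fqlock
+ *	FQUNLOCK(&fq);	// releases it
+ */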
+
+static inline void fq_set(struct qman_fq *fq, u32 mask)
+{
+	dpaa_set_bits(mask, &fq->flags);
+}
+
+static inline void fq_clear(struct qman_fq *fq, u32 mask)
+{
+	dpaa_clear_bits(mask, &fq->flags);
+}
+
+static inline int fq_isset(struct qman_fq *fq, u32 mask)
+{
+	return fq->flags & mask;
+}
+
+static inline int fq_isclear(struct qman_fq *fq, u32 mask)
+{
+	return !(fq->flags & mask);
+}
+
+struct qman_portal {
+	struct qm_portal p;
+	/* PORTAL_BITS_*** - dynamic, strictly internal */
+	unsigned long bits;
+	/* interrupt sources processed by portal_isr(), configurable */
+	unsigned long irq_sources;
+	u32 use_eqcr_ci_stashing;
+	u32 slowpoll;	/* only used when interrupts are off */
+	/* only 1 volatile dequeue at a time */
+	struct qman_fq *vdqcr_owned;
+	u32 sdqcr;
+	int dqrr_disable_ref;
+	/* A portal-specific handler for DCP ERNs. If this is NULL, the global
+	 * handler is called instead.
+	 */
+	qman_cb_dc_ern cb_dc_ern;
+	/* When the cpu-affine portal is activated, this is non-NULL */
+	const struct qm_portal_config *config;
+	struct dpa_rbtree retire_table;
+	char irqname[MAX_IRQNAME];
+	/* 2-element array. cgrs[0] is mask, cgrs[1] is snapshot. */
+	struct qman_cgrs *cgrs;
+	/* linked-list of CSCN handlers. */
+	struct list_head cgr_cbs;
+	/* list lock */
+	spinlock_t cgr_lock;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	/* Keep a shadow copy of the DQRR on LE systems as the SW needs to
+	 * do byte swaps of DQRR read-only memory. The first entry must be
+	 * aligned to 2 ** 10 so that DQRR index calculations can be based
+	 * on the shadow copy address (6 bits for the address shift + 4 bits
+	 * for the DQRR size).
+	 */
+	struct qm_dqrr_entry shadow_dqrr[QM_DQRR_SIZE]
+		    __attribute__((aligned(1024)));
+#endif
+};
+
+/* Global handler for DCP ERNs. Used when the portal receiving the message does
+ * not have a portal-specific handler.
+ */
+static qman_cb_dc_ern cb_dc_ern;
+
+static cpumask_t affine_mask;
+static DEFINE_SPINLOCK(affine_mask_lock);
+static u16 affine_channels[NR_CPUS];
+static DEFINE_PER_CPU(struct qman_portal, qman_affine_portal);
+
+static inline struct qman_portal *get_affine_portal(void)
+{
+	return &get_cpu_var(qman_affine_portal);
+}
+
+/* This gives a FQID->FQ lookup to cover the fact that we can't directly demux
+ * retirement notifications (the fact that they are sometimes h/w-consumed
+ * means that contextB isn't always a s/w demux - and as we can't know which
+ * case it is
+ * when looking at the notification, we have to use the slow lookup for all of
+ * them). NB, it's possible to have multiple FQ objects refer to the same FQID
+ * (though at most one of them should be the consumer), so this table isn't for
+ * all FQs - FQs are added when retirement commands are issued, and removed when
+ * they complete, which also massively reduces the size of this table.
+ */
+IMPLEMENT_DPAA_RBTREE(fqtree, struct qman_fq, node, fqid);
+/*
+ * This is what everything can wait on, even if it migrates to a different cpu
+ * from the one whose affine portal it is waiting on.
+ */
+static DECLARE_WAIT_QUEUE_HEAD(affine_queue);
+
+static inline int table_push_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	int ret = fqtree_push(&p->retire_table, fq);
+
+	if (ret)
+		pr_err("ERROR: double FQ-retirement %d\n", fq->fqid);
+	return ret;
+}
+
+static inline void table_del_fq(struct qman_portal *p, struct qman_fq *fq)
+{
+	fqtree_del(&p->retire_table, fq);
+}
+
+static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)
+{
+	return fqtree_find(&p->retire_table, fqid);
+}
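+
+/*
+ * For reference, the lifecycle of a retire_table entry in this file:
+ * qman_retire_fq() pushes the FQ before issuing ALTER_RETIRE, the
+ * FQRN/FQRL handling in __poll_portal_slow() finds it by FQID, and
+ * fq_state_change() (or an immediate-retirement result) deletes it again.
+ */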
+
+static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to HW format */
+	fqd->fq_ctrl = cpu_to_be16(fqd->fq_ctrl);
+	fqd->dest_wq = cpu_to_be16(fqd->dest_wq);
+	fqd->ics_cred = cpu_to_be16(fqd->ics_cred);
+	fqd->context_b = cpu_to_be32(fqd->context_b);
+	fqd->context_a.opaque = cpu_to_be64(fqd->context_a.opaque);
+	fqd->opaque_td = cpu_to_be16(fqd->opaque_td);
+}
+
+static inline void hw_fqd_to_cpu(struct qm_fqd *fqd)
+{
+	/* Byteswap the FQD to CPU format */
+	fqd->fq_ctrl = be16_to_cpu(fqd->fq_ctrl);
+	fqd->dest_wq = be16_to_cpu(fqd->dest_wq);
+	fqd->ics_cred = be16_to_cpu(fqd->ics_cred);
+	fqd->context_b = be32_to_cpu(fqd->context_b);
+	fqd->context_a.opaque = be64_to_cpu(fqd->context_a.opaque);
+}
+
+static inline void cpu_to_hw_fd(struct qm_fd *fd)
+{
+	fd->addr = cpu_to_be40(fd->addr);
+	fd->status = cpu_to_be32(fd->status);
+	fd->opaque = cpu_to_be32(fd->opaque);
+}
+
+static inline void hw_fd_to_cpu(struct qm_fd *fd)
+{
+	fd->addr = be40_to_cpu(fd->addr);
+	fd->status = be32_to_cpu(fd->status);
+	fd->opaque = be32_to_cpu(fd->opaque);
+}
+
+/* In the case that slow- and fast-path handling are both done by qman_poll()
+ * (ie. because there is no interrupt handling), we ought to balance how often
+ * we do the fast-path poll versus the slow-path poll. We'll use two decrementer
+ * sources, so we call the fast poll 'n' times before calling the slow poll
+ * once. The idle decrementer constant is used when the last slow-poll detected
+ * no work to do, and the busy decrementer constant when the last slow-poll had
+ * work to do.
+ */
+#define SLOW_POLL_IDLE   1000
+#define SLOW_POLL_BUSY   10
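+
+/*
+ * Sketch of the balancing pattern implemented by qman_poll() below: a
+ * per-portal countdown decides when to interleave a slow poll between
+ * fast polls, and its reload value depends on whether that slow poll
+ * found work:
+ *
+ *	if (!(p->slowpoll--)) {
+ *		active = __poll_portal_slow(p, is);
+ *		p->slowpoll = active ? SLOW_POLL_BUSY : SLOW_POLL_IDLE;
+ *	}
+ *	__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+ */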
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is);
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit);
+
+/* Portal interrupt handler */
+static irqreturn_t portal_isr(__always_unused int irq, void *ptr)
+{
+	struct qman_portal *p = ptr;
+	/*
+	 * The CSCI/CCSCI source is cleared inside __poll_portal_slow(), because
+	 * it could race against a Query Congestion State command also given
+	 * as part of the handling of this interrupt source. We mustn't
+	 * clear it a second time in this top-level function.
+	 */
+	u32 clear = QM_DQAVAIL_MASK | (p->irq_sources &
+		~(QM_PIRQ_CSCI | QM_PIRQ_CCSCI));
+	u32 is = qm_isr_status_read(&p->p) & p->irq_sources;
+	/* DQRR-handling if it's interrupt-driven */
+	if (is & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+	/* Handling of anything else that's interrupt-driven */
+	clear |= __poll_portal_slow(p, is);
+	qm_isr_status_clear(&p->p, clear);
+	return IRQ_HANDLED;
+}
+
+/* This inner version is used privately by qman_create_affine_portal(), as well
+ * as by the exported qman_stop_dequeues().
+ */
+static inline void qman_stop_dequeues_ex(struct qman_portal *p)
+{
+	if (!(p->dqrr_disable_ref++))
+		qm_dqrr_set_maxfill(&p->p, 0);
+}
+
+static int drain_mr_fqrni(struct qm_portal *p)
+{
+	const struct qm_mr_entry *msg;
+loop:
+	msg = qm_mr_current(p);
+	if (!msg) {
+		/*
+		 * if MR was full and h/w had other FQRNI entries to produce, we
+		 * need to allow it time to produce those entries once the
+		 * existing entries are consumed. A worst-case situation
+		 * (fully-loaded system) means h/w sequencers may have to do 3-4
+		 * other things before servicing the portal's MR pump, each of
+		 * which (if slow) may take ~50 qman cycles (which is ~200
+		 * processor cycles). So rounding up and then multiplying this
+		 * worst-case estimate by a factor of 10, just to be
+		 * ultra-paranoid, goes as high as 10,000 cycles. NB, we consume
+		 * one entry at a time, so h/w has an opportunity to produce new
+		 * entries well before the ring has been fully consumed, so
+		 * we're being *really* paranoid here.
+		 */
+		u64 now, then = mfatb();
+
+		do {
+			now = mfatb();
+		} while ((then + 10000) > now);
+		msg = qm_mr_current(p);
+		if (!msg)
+			return 0;
+	}
+	if ((msg->verb & QM_MR_VERB_TYPE_MASK) != QM_MR_VERB_FQRNI) {
+		/* We aren't draining anything but FQRNIs */
+		pr_err("Found verb 0x%x in MR\n", msg->verb);
+		return -1;
+	}
+	qm_mr_next(p);
+	qm_mr_cci_consume(p, 1);
+	goto loop;
+}
+
+static inline int qm_eqcr_init(struct qm_portal *portal,
+			       enum qm_eqcr_pmode pmode,
+				unsigned int eq_stash_thresh,
+				int eq_stash_prio)
+{
+	/* This use of 'register', as well as all other occurrences, is because
+	 * it has been observed to generate much faster code with gcc than is
+	 * otherwise the case.
+	 */
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u32 cfg;
+	u8 pi;
+
+	eqcr->ring = portal->addr.ce + QM_CL_EQCR;
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	eqcr->cursor = eqcr->ring + pi;
+	eqcr->vbit = (qm_in(EQCR_PI_CINH) & QM_EQCR_SIZE) ?
+			QM_EQCR_VERB_VBIT : 0;
+	eqcr->available = QM_EQCR_SIZE - 1 -
+			qm_cyc_diff(QM_EQCR_SIZE, eqcr->ci, pi);
+	eqcr->ithresh = qm_in(EQCR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+	eqcr->pmode = pmode;
+#endif
+	cfg = (qm_in(CFG) & 0x00ffffff) |
+		(eq_stash_thresh << 28) | /* QCSP_CFG: EST */
+		(eq_stash_prio << 26)	| /* QCSP_CFG: EP */
+		((pmode & 0x3) << 24);	/* QCSP_CFG::EPM */
+	qm_out(CFG, cfg);
+	return 0;
+}
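+
+/*
+ * Worked example for the 'available' computation above, assuming the usual
+ * qm_cyc_diff(ringsize, first, last) = (last - first) & (ringsize - 1)
+ * definition from qman.h: with a ring of 8 entries, ci = 6 and pi = 2, the
+ * ring holds (2 - 6) & 7 = 4 unconsumed entries, so 'available' is
+ * 8 - 1 - 4 = 3 (one slot is always kept free to distinguish a full ring
+ * from an empty one).
+ */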
+
+static inline void qm_eqcr_finish(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 pi, ci;
+	u32 cfg;
+
+	/*
+	 * Disable EQCR CI stashing because the QMan only
+	 * presents the value it previously stashed to
+	 * maintain coherency.  Setting the stash threshold
+	 * to 1 then 0 ensures that QMan has resynchronized
+	 * its internal copy so that the portal is clean
+	 * when it is reinitialized in the future.
+	 */
+	cfg = (qm_in(CFG) & 0x0fffffff) |
+		(1 << 28); /* QCSP_CFG: EST */
+	qm_out(CFG, cfg);
+	cfg &= 0x0fffffff; /* stash threshold = 0 */
+	qm_out(CFG, cfg);
+
+	pi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);
+	ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+
+	/* Refresh EQCR CI cache value */
+	qm_cl_invalidate(EQCR_CI);
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (pi != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("losing uncommitted EQCR entries\n");
+	if (ci != eqcr->ci)
+		pr_crit("missing existing EQCR completions\n");
+	if (eqcr->ci != EQCR_PTR2IDX(eqcr->cursor))
+		pr_crit("EQCR destroyed unquiesced\n");
+}
+
+static inline int qm_dqrr_init(struct qm_portal *portal,
+			__maybe_unused const struct qm_portal_config *config,
+			enum qm_dqrr_dmode dmode,
+			__maybe_unused enum qm_dqrr_pmode pmode,
+			enum qm_dqrr_cmode cmode, u8 max_fill)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u32 cfg;
+
+	/* Make sure the DQRR will be idle when we enable */
+	qm_out(DQRR_SDQCR, 0);
+	qm_out(DQRR_VDQCR, 0);
+	qm_out(DQRR_PDQCR, 0);
+	dqrr->ring = portal->addr.ce + QM_CL_DQRR;
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->cursor = dqrr->ring + dqrr->ci;
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+	dqrr->vbit = (qm_in(DQRR_PI_CINH) & QM_DQRR_SIZE) ?
+			QM_DQRR_VERB_VBIT : 0;
+	dqrr->ithresh = qm_in(DQRR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	dqrr->dmode = dmode;
+	dqrr->pmode = pmode;
+	dqrr->cmode = cmode;
+#endif
+	/* Invalidate every ring entry before beginning */
+	for (cfg = 0; cfg < QM_DQRR_SIZE; cfg++)
+		dccivac(qm_cl(dqrr->ring, cfg));
+	cfg = (qm_in(CFG) & 0xff000f00) |
+		((max_fill & (QM_DQRR_SIZE - 1)) << 20) | /* DQRR_MF */
+		((dmode & 1) << 18) |			/* DP */
+		((cmode & 3) << 16) |			/* DCM */
+		0xa0 |					/* RE+SE */
+		(0 ? 0x40 : 0) |			/* Ignore RP */
+		(0 ? 0x10 : 0);				/* Ignore SP */
+	qm_out(CFG, cfg);
+	qm_dqrr_set_maxfill(portal, max_fill);
+	return 0;
+}
+
+static inline void qm_dqrr_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if ((dqrr->cmode != qm_dqrr_cdc) &&
+	    (dqrr->ci != DQRR_PTR2IDX(dqrr->cursor)))
+		pr_crit("Ignoring completed DQRR entries\n");
+#endif
+}
+
+static inline int qm_mr_init(struct qm_portal *portal,
+			     __maybe_unused enum qm_mr_pmode pmode,
+			     enum qm_mr_cmode cmode)
+{
+	register struct qm_mr *mr = &portal->mr;
+	u32 cfg;
+
+	mr->ring = portal->addr.ce + QM_CL_MR;
+	mr->pi = qm_in(MR_PI_CINH) & (QM_MR_SIZE - 1);
+	mr->ci = qm_in(MR_CI_CINH) & (QM_MR_SIZE - 1);
+	mr->cursor = mr->ring + mr->ci;
+	mr->fill = qm_cyc_diff(QM_MR_SIZE, mr->ci, mr->pi);
+	mr->vbit = (qm_in(MR_PI_CINH) & QM_MR_SIZE) ? QM_MR_VERB_VBIT : 0;
+	mr->ithresh = qm_in(MR_ITR);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mr->pmode = pmode;
+	mr->cmode = cmode;
+#endif
+	cfg = (qm_in(CFG) & 0xfffff0ff) |
+		((cmode & 1) << 8);		/* QCSP_CFG:MM */
+	qm_out(CFG, cfg);
+	return 0;
+}
+
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+	const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+	DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+		mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+		if (!mr->pi)
+			mr->vbit ^= QM_MR_VERB_VBIT;
+		mr->fill++;
+		res = MR_INC(res);
+	}
+	dcbit_ro(res);
+}
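+
+/*
+ * For reference, the producer-valid-bit ("pvb") convention used above:
+ * hardware toggles the VBIT in each entry's verb on every wrap of the ring,
+ * so software accepts an entry only while (verb & QM_MR_VERB_VBIT) matches
+ * the expected mr->vbit, and flips that expectation whenever its producer
+ * index wraps to 0. The DQRR uses the same scheme with QM_DQRR_VERB_VBIT.
+ */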
+
+static inline
+struct qman_portal *qman_create_portal(
+			struct qman_portal *portal,
+			      const struct qm_portal_config *c,
+			      const struct qman_cgrs *cgrs)
+{
+	struct qm_portal *p;
+	char buf[16];
+	int ret;
+	u32 isdr;
+
+	p = &portal->p;
+
+	portal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+	/*
+	 * prep the low-level portal struct with the mapped addresses from the
+	 * config; everything that follows depends on it, and "config" is
+	 * retained mostly for reference.
+	 */
+	p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
+	p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
+	/*
+	 * If CI-stashing is used, the current defaults use a threshold of 3,
+	 * and stash with higher-than-DQRR priority.
+	 */
+	if (qm_eqcr_init(p, qm_eqcr_pvb,
+			 portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {
+		pr_err("Qman EQCR initialisation failed\n");
+		goto fail_eqcr;
+	}
+	if (qm_dqrr_init(p, c, qm_dqrr_dpush, qm_dqrr_pvb,
+			 qm_dqrr_cdc, DQRR_MAXFILL)) {
+		pr_err("Qman DQRR initialisation failed\n");
+		goto fail_dqrr;
+	}
+	if (qm_mr_init(p, qm_mr_pvb, qm_mr_cci)) {
+		pr_err("Qman MR initialisation failed\n");
+		goto fail_mr;
+	}
+	if (qm_mc_init(p)) {
+		pr_err("Qman MC initialisation failed\n");
+		goto fail_mc;
+	}
+
+	/* static interrupt-gating controls */
+	qm_dqrr_set_ithresh(p, 0);
+	qm_mr_set_ithresh(p, 0);
+	qm_isr_set_iperiod(p, 0);
+	portal->cgrs = kmalloc(2 * sizeof(*cgrs), GFP_KERNEL);
+	if (!portal->cgrs)
+		goto fail_cgrs;
+	/* initial snapshot is no-depletion */
+	qman_cgrs_init(&portal->cgrs[1]);
+	if (cgrs)
+		portal->cgrs[0] = *cgrs;
+	else
+		/* if the given mask is NULL, assume all CGRs can be seen */
+		qman_cgrs_fill(&portal->cgrs[0]);
+	INIT_LIST_HEAD(&portal->cgr_cbs);
+	spin_lock_init(&portal->cgr_lock);
+	portal->bits = 0;
+	portal->slowpoll = 0;
+	portal->sdqcr = QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |
+			QM_SDQCR_DEDICATED_PRECEDENCE | QM_SDQCR_TYPE_PRIO_QOS |
+			QM_SDQCR_TOKEN_SET(0xab) | QM_SDQCR_CHANNELS_DEDICATED;
+	portal->dqrr_disable_ref = 0;
+	portal->cb_dc_ern = NULL;
+	sprintf(buf, "qportal-%d", c->channel);
+	dpa_rbtree_init(&portal->retire_table);
+	isdr = 0xffffffff;
+	qm_isr_disable_write(p, isdr);
+	portal->irq_sources = 0;
+	qm_isr_enable_write(p, portal->irq_sources);
+	qm_isr_status_clear(p, 0xffffffff);
+	snprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);
+	if (request_irq(c->irq, portal_isr, 0, portal->irqname,
+			portal)) {
+		pr_err("request_irq() failed\n");
+		goto fail_irq;
+	}
+
+	/* Need EQCR to be empty before continuing */
+	isdr &= ~QM_PIRQ_EQCI;
+	qm_isr_disable_write(p, isdr);
+	ret = qm_eqcr_get_fill(p);
+	if (ret) {
+		pr_err("Qman EQCR unclean\n");
+		goto fail_eqcr_empty;
+	}
+	isdr &= ~(QM_PIRQ_DQRI | QM_PIRQ_MRI);
+	qm_isr_disable_write(p, isdr);
+	if (qm_dqrr_current(p)) {
+		pr_err("Qman DQRR unclean\n");
+		qm_dqrr_cdc_consume_n(p, 0xffff);
+	}
+	if (qm_mr_current(p)) {
+		/* special handling, drain just in case it's a few FQRNIs */
+		if (drain_mr_fqrni(p))
+			goto fail_dqrr_mr_empty;
+	}
+	/* Success */
+	portal->config = c;
+	qm_isr_disable_write(p, 0);
+	qm_isr_uninhibit(p);
+	/* Write a sane SDQCR */
+	qm_dqrr_sdqcr_set(p, portal->sdqcr);
+	return portal;
+fail_dqrr_mr_empty:
+fail_eqcr_empty:
+	free_irq(c->irq, portal);
+fail_irq:
+	kfree(portal->cgrs);
+	spin_lock_destroy(&portal->cgr_lock);
+fail_cgrs:
+	qm_mc_finish(p);
+fail_mc:
+	qm_mr_finish(p);
+fail_mr:
+	qm_dqrr_finish(p);
+fail_dqrr:
+	qm_eqcr_finish(p);
+fail_eqcr:
+	return NULL;
+}
+
+struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
+					      const struct qman_cgrs *cgrs)
+{
+	struct qman_portal *res;
+	struct qman_portal *portal = get_affine_portal();
+	/* A criteria for calling this function (from qman_driver.c) is that
+	 * we're already affine to the cpu and won't schedule onto another cpu.
+	 */
+
+	res = qman_create_portal(portal, c, cgrs);
+	if (res) {
+		spin_lock(&affine_mask_lock);
+		CPU_SET(c->cpu, &affine_mask);
+		affine_channels[c->cpu] =
+			c->channel;
+		spin_unlock(&affine_mask_lock);
+	}
+	return res;
+}
+
+static inline
+void qman_destroy_portal(struct qman_portal *qm)
+{
+	const struct qm_portal_config *pcfg;
+
+	/* Stop dequeues on the portal */
+	qm_dqrr_sdqcr_set(&qm->p, 0);
+
+	/*
+	 * NB we do this to "quiesce" EQCR. If we add enqueue-completions or
+	 * something related to QM_PIRQ_EQCI, this may need fixing.
+	 * Also, due to the prefetching model used for CI updates in the enqueue
+	 * path, this update will only invalidate the CI cacheline *after*
+	 * working on it, so we need to call this twice to ensure a full update
+	 * irrespective of where the enqueue processing was at when the teardown
+	 * began.
+	 */
+	qm_eqcr_cce_update(&qm->p);
+	qm_eqcr_cce_update(&qm->p);
+	pcfg = qm->config;
+
+	free_irq(pcfg->irq, qm);
+
+	kfree(qm->cgrs);
+	qm_mc_finish(&qm->p);
+	qm_mr_finish(&qm->p);
+	qm_dqrr_finish(&qm->p);
+	qm_eqcr_finish(&qm->p);
+
+	qm->config = NULL;
+
+	spin_lock_destroy(&qm->cgr_lock);
+}
+
+const struct qm_portal_config *qman_destroy_affine_portal(void)
+{
+	/* We don't want to redirect if we're a slave, use "raw" */
+	struct qman_portal *qm = get_affine_portal();
+	const struct qm_portal_config *pcfg;
+	int cpu;
+
+	pcfg = qm->config;
+	cpu = pcfg->cpu;
+
+	qman_destroy_portal(qm);
+
+	spin_lock(&affine_mask_lock);
+	CPU_CLR(cpu, &affine_mask);
+	spin_unlock(&affine_mask_lock);
+	return pcfg;
+}
+
+int qman_get_portal_index(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->config->index;
+}
+
+/* Inline helper to reduce nesting in __poll_portal_slow() */
+static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
+				   const struct qm_mr_entry *msg, u8 verb)
+{
+	FQLOCK(fq);
+	switch (verb) {
+	case QM_MR_VERB_FQRL:
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_ORL));
+		fq_clear(fq, QMAN_FQ_STATE_ORL);
+		table_del_fq(p, fq);
+		break;
+	case QM_MR_VERB_FQRN:
+		DPAA_ASSERT((fq->state == qman_fq_state_parked) ||
+			    (fq->state == qman_fq_state_sched));
+		DPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_CHANGING));
+		fq_clear(fq, QMAN_FQ_STATE_CHANGING);
+		if (msg->fq.fqs & QM_MR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (msg->fq.fqs & QM_MR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		fq->state = qman_fq_state_retired;
+		break;
+	case QM_MR_VERB_FQPN:
+		DPAA_ASSERT(fq->state == qman_fq_state_sched);
+		DPAA_ASSERT(fq_isclear(fq, QMAN_FQ_STATE_CHANGING));
+		fq->state = qman_fq_state_parked;
+	}
+	FQUNLOCK(fq);
+}
+
+static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
+{
+	const struct qm_mr_entry *msg;
+	struct qm_mr_entry swapped_msg;
+
+	if (is & QM_PIRQ_CSCI) {
+		struct qman_cgrs rr, c;
+		struct qm_mc_result *mcr;
+		struct qman_cgr *cgr;
+
+		spin_lock(&p->cgr_lock);
+		/*
+		 * The CSCI bit must be cleared _before_ issuing the
+		 * Query Congestion State command, to ensure that a long
+		 * CGR State Change callback cannot miss an intervening
+		 * state change.
+		 */
+		qm_isr_status_clear(&p->p, QM_PIRQ_CSCI);
+		qm_mc_start(&p->p);
+		qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+		while (!(mcr = qm_mc_result(&p->p)))
+			cpu_relax();
+		/* mask out the ones I'm not interested in */
+		qman_cgrs_and(&rr, (const struct qman_cgrs *)
+			&mcr->querycongestion.state, &p->cgrs[0]);
+		/* check previous snapshot for delta, enter/exit congestion */
+		qman_cgrs_xor(&c, &rr, &p->cgrs[1]);
+		/* update snapshot */
+		qman_cgrs_cp(&p->cgrs[1], &rr);
+		/* Invoke callback */
+		list_for_each_entry(cgr, &p->cgr_cbs, node)
+			if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid))
+				cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));
+		spin_unlock(&p->cgr_lock);
+	}
+
+	if (is & QM_PIRQ_EQRI) {
+		qm_eqcr_cce_update(&p->p);
+		qm_eqcr_set_ithresh(&p->p, 0);
+		wake_up(&affine_queue);
+	}
+
+	if (is & QM_PIRQ_MRI) {
+		struct qman_fq *fq;
+		u8 verb, num = 0;
+mr_loop:
+		qm_mr_pvb_update(&p->p);
+		msg = qm_mr_current(&p->p);
+		if (!msg)
+			goto mr_done;
+		swapped_msg = *msg;
+		hw_fd_to_cpu(&swapped_msg.ern.fd);
+		verb = msg->verb & QM_MR_VERB_TYPE_MASK;
+		/* The message is a software ERN iff the 0x20 bit is clear */
+		if (verb & 0x20) {
+			switch (verb) {
+			case QM_MR_VERB_FQRNI:
+				/* nada, we drop FQRNIs on the floor */
+				break;
+			case QM_MR_VERB_FQRN:
+			case QM_MR_VERB_FQRL:
+				/* Lookup in the retirement table */
+				fq = table_find_fq(p,
+						   be32_to_cpu(msg->fq.fqid));
+				BUG_ON(!fq);
+				fq_state_change(p, fq, &swapped_msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_FQPN:
+				/* Parked */
+				fq = (void *)(uintptr_t)
+					be32_to_cpu(msg->fq.contextB);
+				fq_state_change(p, fq, msg, verb);
+				if (fq->cb.fqs)
+					fq->cb.fqs(p, fq, &swapped_msg);
+				break;
+			case QM_MR_VERB_DC_ERN:
+				/* DCP ERN */
+				if (p->cb_dc_ern)
+					p->cb_dc_ern(p, msg);
+				else if (cb_dc_ern)
+					cb_dc_ern(p, msg);
+				else {
+					static int warn_once;
+
+					if (!warn_once) {
+						pr_crit("Leaking DCP ERNs!\n");
+						warn_once = 1;
+					}
+				}
+				break;
+			default:
+				pr_crit("Invalid MR verb 0x%02x\n", verb);
+			}
+		} else {
+			/* It's a software ERN */
+			fq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);
+			fq->cb.ern(p, fq, &swapped_msg);
+		}
+		num++;
+		qm_mr_next(&p->p);
+		goto mr_loop;
+mr_done:
+		qm_mr_cci_consume(&p->p, num);
+	}
+	/*
+	 * QM_PIRQ_CSCI/CCSCI has already been cleared, as part of its specific
+	 * processing. If that interrupt source has meanwhile been re-asserted,
+	 * we mustn't clear it here (or in the top-level interrupt handler).
+	 */
+	return is & (QM_PIRQ_EQCI | QM_PIRQ_EQRI | QM_PIRQ_MRI);
+}
+
+/*
+ * remove some slowish-path stuff from the "fast path" and make sure it isn't
+ * inlined.
+ */
+static noinline void clear_vdqcr(struct qman_portal *p, struct qman_fq *fq)
+{
+	p->vdqcr_owned = NULL;
+	FQLOCK(fq);
+	fq_clear(fq, QMAN_FQ_STATE_VDQCR);
+	FQUNLOCK(fq);
+	wake_up(&affine_queue);
+}
+
+/*
+ * The only states that would conflict with other things if they ran at the
+ * same time on the same cpu are:
+ *
+ *   (i) setting/clearing vdqcr_owned, and
+ *  (ii) clearing the NE (Not Empty) flag.
+ *
+ * Both are safe because:
+ *
+ *   (i) this clearing can only occur after qman_set_vdq() has set the
+ *	 vdqcr_owned field (which it does before setting VDQCR), and
+ *	 qman_volatile_dequeue() blocks interrupts and preemption while this is
+ *	 done so that we can't interfere.
+ *  (ii) the NE flag is only cleared after qman_retire_fq() has set it, and as
+ *	 with (i) that API prevents us from interfering until it's safe.
+ *
+ * The good thing is that qman_set_vdq() and qman_retire_fq() run far
+ * less frequently (ie. per-FQ) than __poll_portal_fast() does, so the net
+ * advantage comes from this function not having to "lock" anything at all.
+ *
+ * Note also that the callbacks are invoked at points which are safe against the
+ * above potential conflicts, but that this function itself is not re-entrant
+ * (this is because the function tracks one end of each FIFO in the portal and
+ * we do *not* want to lock that). So the consequence is that it is safe for
+ * user callbacks to call into any QMan API.
+ */
+static inline unsigned int __poll_portal_fast(struct qman_portal *p,
+					      unsigned int poll_limit)
+{
+	const struct qm_dqrr_entry *dq;
+	struct qman_fq *fq;
+	enum qman_cb_dqrr_result res;
+	unsigned int limit = 0;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+	do {
+		qm_dqrr_pvb_update(&p->p);
+		dq = qm_dqrr_current(&p->p);
+		if (!dq)
+			break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+		/*
+		 * If running on an LE system the fields of the
+		 * dequeue entry must be swapped. Because the
+		 * QMan HW will ignore writes, the DQRR entry is
+		 * copied and the index stored within the copy.
+		 */
+		shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+		*shadow = *dq;
+		dq = shadow;
+		shadow->fqid = be32_to_cpu(shadow->fqid);
+		shadow->contextB = be32_to_cpu(shadow->contextB);
+		shadow->seqnum = be16_to_cpu(shadow->seqnum);
+		hw_fd_to_cpu(&shadow->fd);
+#endif
+
+		if (dq->stat & QM_DQRR_STAT_UNSCHEDULED) {
+			/*
+			 * VDQCR: don't trust context_b as the FQ may have
+			 * been configured for h/w consumption and we're
+			 * draining it post-retirement.
+			 */
+			fq = p->vdqcr_owned;
+			/*
+			 * We only set QMAN_FQ_STATE_NE when retiring, so we
+			 * only need to check for clearing it when doing
+			 * volatile dequeues.  It's one less thing to check
+			 * in the critical path (SDQCR).
+			 */
+			if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+				fq_clear(fq, QMAN_FQ_STATE_NE);
+			/*
+			 * This is duplicated from the SDQCR code, but we
+			 * have stuff to do before *and* after this callback,
+			 * and we don't want multiple if()s in the critical
+			 * path (SDQCR).
+			 */
+			res = fq->cb.dqrr(p, fq, dq);
+			if (res == qman_cb_dqrr_stop)
+				break;
+			/* Check for VDQCR completion */
+			if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+				clear_vdqcr(p, fq);
+		} else {
+			/* SDQCR: context_b points to the FQ */
+			fq = (void *)(uintptr_t)dq->contextB;
+			/* Now let the callback do its stuff */
+			res = fq->cb.dqrr(p, fq, dq);
+			/*
+			 * The callback can request that we exit without
+			 * consuming this entry or advancing.
+			 */
+			if (res == qman_cb_dqrr_stop)
+				break;
+		}
+		/* Interpret 'dq' from a driver perspective. */
+		/*
+		 * Parking isn't possible unless HELDACTIVE was set. NB,
+		 * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+		 * check for HELDACTIVE to cover both.
+		 */
+		DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+			    (res != qman_cb_dqrr_park));
+		/* just means "skip it, I'll consume it myself later on" */
+		if (res != qman_cb_dqrr_defer)
+			qm_dqrr_cdc_consume_1ptr(&p->p, dq,
+						 res == qman_cb_dqrr_park);
+		/* Move forward */
+		qm_dqrr_next(&p->p);
+		/*
+		 * Entry processed and consumed, increment our counter.  The
+		 * callback can request that we exit after consuming the
+		 * entry, and we also exit if we reach our processing limit,
+		 * so loop back only if neither of these conditions is met.
+		 */
+	} while (++limit < poll_limit && res != qman_cb_dqrr_consume_stop);
+
+	return limit;
+}
+
+u16 qman_affine_channel(int cpu)
+{
+	if (cpu < 0) {
+		struct qman_portal *portal = get_affine_portal();
+
+		cpu = portal->config->cpu;
+	}
+	BUG_ON(!CPU_ISSET(cpu, &affine_mask));
+	return affine_channels[cpu];
+}
+
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
+{
+	struct qman_portal *p = get_affine_portal();
+	const struct qm_dqrr_entry *dq;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	struct qm_dqrr_entry *shadow;
+#endif
+
+	qm_dqrr_pvb_update(&p->p);
+	dq = qm_dqrr_current(&p->p);
+	if (!dq)
+		return NULL;
+
+	if (!(dq->stat & QM_DQRR_STAT_FD_VALID)) {
+		/* Invalid DQRR entry - consume it and return NULL to
+		 * the caller, as no valid frame is available.
+		 */
+		qman_dqrr_consume(fq, (struct qm_dqrr_entry *)dq);
+		return NULL;
+	}
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+	*shadow = *dq;
+	dq = shadow;
+	shadow->fqid = be32_to_cpu(shadow->fqid);
+	shadow->contextB = be32_to_cpu(shadow->contextB);
+	shadow->seqnum = be16_to_cpu(shadow->seqnum);
+	hw_fd_to_cpu(&shadow->fd);
+#endif
+
+	if (dq->stat & QM_DQRR_STAT_FQ_EMPTY)
+		fq_clear(fq, QMAN_FQ_STATE_NE);
+
+	return (struct qm_dqrr_entry *)dq;
+}
+
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)
+		clear_vdqcr(p, fq);
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, 0);
+	qm_dqrr_next(&p->p);
+}
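+
+/*
+ * Illustrative pairing of the two calls above (hypothetical application
+ * code, not part of this driver):
+ *
+ *	struct qm_dqrr_entry *dq = qman_dequeue(fq);
+ *
+ *	if (dq) {
+ *		handle_fd(&dq->fd);	// hypothetical application handler
+ *		qman_dqrr_consume(fq, dq);
+ *	}
+ */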
+
+int qman_poll_dqrr(unsigned int limit)
+{
+	struct qman_portal *p = get_affine_portal();
+	int ret;
+
+	ret = __poll_portal_fast(p, limit);
+	return ret;
+}
+
+void qman_poll(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	if ((~p->irq_sources) & QM_PIRQ_SLOW) {
+		if (!(p->slowpoll--)) {
+			u32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;
+			u32 active = __poll_portal_slow(p, is);
+
+			if (active) {
+				qm_isr_status_clear(&p->p, active);
+				p->slowpoll = SLOW_POLL_BUSY;
+			} else
+				p->slowpoll = SLOW_POLL_IDLE;
+		}
+	}
+	if ((~p->irq_sources) & QM_PIRQ_DQRI)
+		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
+}
+
+void qman_stop_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qman_stop_dequeues_ex(p);
+}
+
+void qman_start_dequeues(void)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	DPAA_ASSERT(p->dqrr_disable_ref > 0);
+	if (!(--p->dqrr_disable_ref))
+		qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
+}
+
+void qman_static_dequeue_add(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr |= pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+void qman_static_dequeue_del(u32 pools)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	pools &= p->config->pools;
+	p->sdqcr &= ~pools;
+	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
+}
+
+u32 qman_static_dequeue_get(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	return p->sdqcr;
+}
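+
+/*
+ * Illustrative use of the SDQCR helpers above ('pools' would typically be
+ * derived from an allocated pool channel via the conversion helpers in
+ * fsl_qman.h):
+ *
+ *	qman_static_dequeue_add(pools);	// start servicing these pools
+ *	...
+ *	qman_static_dequeue_del(pools);	// stop servicing them
+ */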
+
+void qman_dca(struct qm_dqrr_entry *dq, int park_request)
+{
+	struct qman_portal *p = get_affine_portal();
+
+	qm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);
+}
+
+/* Frame queue API */
+static const char *mcr_result_str(u8 result)
+{
+	switch (result) {
+	case QM_MCR_RESULT_NULL:
+		return "QM_MCR_RESULT_NULL";
+	case QM_MCR_RESULT_OK:
+		return "QM_MCR_RESULT_OK";
+	case QM_MCR_RESULT_ERR_FQID:
+		return "QM_MCR_RESULT_ERR_FQID";
+	case QM_MCR_RESULT_ERR_FQSTATE:
+		return "QM_MCR_RESULT_ERR_FQSTATE";
+	case QM_MCR_RESULT_ERR_NOTEMPTY:
+		return "QM_MCR_RESULT_ERR_NOTEMPTY";
+	case QM_MCR_RESULT_PENDING:
+		return "QM_MCR_RESULT_PENDING";
+	case QM_MCR_RESULT_ERR_BADCOMMAND:
+		return "QM_MCR_RESULT_ERR_BADCOMMAND";
+	}
+	return "<unknown MCR result>";
+}
+
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
+{
+	struct qm_fqd fqd;
+	struct qm_mcr_queryfq_np np;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID) {
+		int ret = qman_alloc_fqid(&fqid);
+
+		if (ret)
+			return ret;
+	}
+	spin_lock_init(&fq->fqlock);
+	fq->fqid = fqid;
+	fq->flags = flags;
+	fq->state = qman_fq_state_oos;
+	fq->cgr_groupid = 0;
+
+	if (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))
+		return 0;
+	/* Everything else is AS_IS support */
+	p = get_affine_portal();
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(&fqd);
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ_NP);
+	if (mcr->result != QM_MCR_RESULT_OK) {
+		pr_err("QUERYFQ_NP failed: %s\n", mcr_result_str(mcr->result));
+		goto err;
+	}
+	np = mcr->queryfq_np;
+	/* Phew, have queryfq and queryfq_np results, stitch together
+	 * the FQ object from those.
+	 */
+	fq->cgr_groupid = fqd.cgid;
+	switch (np.state & QM_MCR_NP_STATE_MASK) {
+	case QM_MCR_NP_STATE_OOS:
+		break;
+	case QM_MCR_NP_STATE_RETIRED:
+		fq->state = qman_fq_state_retired;
+		if (np.frm_cnt)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		break;
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+		fq->state = qman_fq_state_sched;
+		if (np.state & QM_MCR_NP_STATE_R)
+			fq_set(fq, QMAN_FQ_STATE_CHANGING);
+		break;
+	case QM_MCR_NP_STATE_PARKED:
+		fq->state = qman_fq_state_parked;
+		break;
+	default:
+		DPAA_ASSERT(NULL == "invalid FQ state");
+	}
+	if (fqd.fq_ctrl & QM_FQCTRL_CGE)
+		fq->state |= QMAN_FQ_STATE_CGR_EN;
+	return 0;
+err:
+	if (flags & QMAN_FQ_FLAG_DYNAMIC_FQID)
+		qman_release_fqid(fqid);
+	return -EIO;
+}
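+
+/*
+ * Minimal creation sketch (illustrative only): let QMan allocate the FQID
+ * and leave the FQ out-of-service until qman_init_fq() is called:
+ *
+ *	struct qman_fq fq;
+ *
+ *	if (qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, &fq))
+ *		return -1;
+ *	// fq.fqid now holds the allocated FQID, fq.state is qman_fq_state_oos
+ */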
+
+void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
+{
+	/*
+	 * We don't need to lock the FQ as it is a pre-condition that the FQ be
+	 * quiesced. Instead, run some checks.
+	 */
+	switch (fq->state) {
+	case qman_fq_state_parked:
+		DPAA_ASSERT(flags & QMAN_FQ_DESTROY_PARKED);
+		/* Fallthrough */
+	case qman_fq_state_oos:
+		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
+			qman_release_fqid(fq->fqid);
+
+		return;
+	default:
+		break;
+	}
+	DPAA_ASSERT(NULL == "qman_destroy_fq() on unquiesced FQ!");
+}
+
+u32 qman_fq_fqid(struct qman_fq *fq)
+{
+	return fq->fqid;
+}
+
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags)
+{
+	if (state)
+		*state = fq->state;
+	if (flags)
+		*flags = fq->flags;
+}
+
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	u8 res, myverb = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		QM_MCC_VERB_INITFQ_SCHED : QM_MCC_VERB_INITFQ_PARKED;
+
+	if ((fq->state != qman_fq_state_oos) &&
+	    (fq->state != qman_fq_state_parked))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	if (opts && (opts->we_mask & QM_INITFQ_WE_OAC)) {
+		/* OAC can't be set at the same time as TDTHRESH */
+		if (opts->we_mask & QM_INITFQ_WE_TDTHRESH)
+			return -EINVAL;
+	}
+	/* Issue an INITFQ_[PARKED|SCHED] management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     ((fq->state != qman_fq_state_oos) &&
+				(fq->state != qman_fq_state_parked)))) {
+		FQUNLOCK(fq);
+		return -EBUSY;
+	}
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initfq = *opts;
+	mcc->initfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->initfq.count = 0;
+	/*
+	 * If the FQ does *not* have the TO_DCPORTAL flag, context_b is set as a
+	 * demux pointer. Otherwise, the caller-provided value is allowed to
+	 * stand, don't overwrite it.
+	 */
+	if (fq_isclear(fq, QMAN_FQ_FLAG_TO_DCPORTAL)) {
+		dma_addr_t phys_fq;
+
+		mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;
+		mcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;
+		/*
+		 * Set the physical address as well - NB, if the user wasn't
+		 * trying to set CONTEXTA, clear the stashing settings.
+		 */
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_CONTEXTA)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			memset(&mcc->initfq.fqd.context_a, 0,
+			       sizeof(mcc->initfq.fqd.context_a));
+		} else {
+			phys_fq = rte_mem_virt2phy(fq);
+			qm_fqd_stashing_set64(&mcc->initfq.fqd, phys_fq);
+		}
+	}
+	if (flags & QMAN_INITFQ_FLAG_LOCAL) {
+		mcc->initfq.fqd.dest.channel = p->config->channel;
+		if (!(mcc->initfq.we_mask & QM_INITFQ_WE_DESTWQ)) {
+			mcc->initfq.we_mask |= QM_INITFQ_WE_DESTWQ;
+			mcc->initfq.fqd.dest.wq = 4;
+		}
+	}
+	mcc->initfq.we_mask = cpu_to_be16(mcc->initfq.we_mask);
+	cpu_to_hw_fqd(&mcc->initfq.fqd);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		FQUNLOCK(fq);
+		return -EIO;
+	}
+	if (opts) {
+		if (opts->we_mask & QM_INITFQ_WE_FQCTRL) {
+			if (opts->fqd.fq_ctrl & QM_FQCTRL_CGE)
+				fq_set(fq, QMAN_FQ_STATE_CGR_EN);
+			else
+				fq_clear(fq, QMAN_FQ_STATE_CGR_EN);
+		}
+		if (opts->we_mask & QM_INITFQ_WE_CGID)
+			fq->cgr_groupid = opts->fqd.cgid;
+	}
+	fq->state = (flags & QMAN_INITFQ_FLAG_SCHED) ?
+		qman_fq_state_sched : qman_fq_state_parked;
+	FQUNLOCK(fq);
+	return 0;
+}
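+
+/*
+ * Illustrative qman_init_fq() call (field values are placeholders): set the
+ * destination work queue and enable congestion-group handling, scheduling
+ * the FQ immediately:
+ *
+ *	struct qm_mcc_initfq opts;
+ *
+ *	memset(&opts, 0, sizeof(opts));
+ *	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+ *	opts.fqd.dest.channel = channel;	// placeholder channel
+ *	opts.fqd.dest.wq = 3;
+ *	opts.fqd.fq_ctrl = QM_FQCTRL_CGE;
+ *	ret = qman_init_fq(&fq, QMAN_INITFQ_FLAG_SCHED, &opts);
+ */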
+
+int qman_schedule_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_parked)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue an ALTERFQ_SCHED management command */
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state != qman_fq_state_parked))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_sched;
+out:
+	FQUNLOCK(fq);
+
+	return ret;
+}
+
+int qman_retire_fq(struct qman_fq *fq, u32 *flags)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int rval;
+	u8 res;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_sched))
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_retired) ||
+				(fq->state == qman_fq_state_oos))) {
+		rval = -EBUSY;
+		goto out;
+	}
+	rval = table_push_fq(p, fq);
+	if (rval)
+		goto out;
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_RETIRE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_RETIRE);
+	res = mcr->result;
+	/*
+	 * "Elegant" would be to treat OK/PENDING the same way; set CHANGING,
+	 * and defer the flags until FQRNI or FQRN (respectively) show up. But
+	 * "Friendly" is to process OK immediately, and not set CHANGING. We do
+	 * friendly, otherwise the caller doesn't necessarily have a fully
+	 * "retired" FQ on return even if the retirement was immediate. However
+	 * this does mean some code duplication between here and
+	 * fq_state_change().
+	 */
+	if (likely(res == QM_MCR_RESULT_OK)) {
+		rval = 0;
+		/* Process 'fq' right away, we'll ignore FQRNI */
+		if (mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY)
+			fq_set(fq, QMAN_FQ_STATE_NE);
+		if (mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)
+			fq_set(fq, QMAN_FQ_STATE_ORL);
+		else
+			table_del_fq(p, fq);
+		if (flags)
+			*flags = fq->flags;
+		fq->state = qman_fq_state_retired;
+		if (fq->cb.fqs) {
+			/*
+			 * Another issue with supporting "immediate" retirement
+			 * is that we're forced to drop FQRNIs, because by the
+			 * time they're seen it may already be "too late" (the
+			 * fq may have been OOS'd and free()'d already). But if
+			 * the upper layer wants a callback whether it's
+			 * immediate or not, we have to fake a "MR" entry to
+			 * look like an FQRNI...
+			 */
+			struct qm_mr_entry msg;
+
+			msg.verb = QM_MR_VERB_FQRNI;
+			msg.fq.fqs = mcr->alterfq.fqs;
+			msg.fq.fqid = fq->fqid;
+			msg.fq.contextB = (u32)(uintptr_t)fq;
+			fq->cb.fqs(p, fq, &msg);
+		}
+	} else if (res == QM_MCR_RESULT_PENDING) {
+		rval = 1;
+		fq_set(fq, QMAN_FQ_STATE_CHANGING);
+	} else {
+		rval = -EIO;
+		table_del_fq(p, fq);
+	}
+out:
+	FQUNLOCK(fq);
+	return rval;
+}
+
+int qman_oos_fq(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+
+	if (fq->state != qman_fq_state_retired)
+		return -EINVAL;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_BLOCKOOS)) ||
+		     (fq->state != qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_OOS);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_OOS);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+	fq->state = qman_fq_state_oos;
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
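+
+/*
+ * Teardown sketch (illustrative only) tying the calls above together:
+ * retire, drain any remaining frames, take the FQ out of service, then
+ * destroy it:
+ *
+ *	ret = qman_retire_fq(&fq, &flags);	// 1 => retirement pending
+ *	// ... drain via volatile dequeue until QMAN_FQ_STATE_NE clears ...
+ *	qman_oos_fq(&fq);
+ *	qman_destroy_fq(&fq, 0);
+ */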
+
+int qman_fq_flow_control(struct qman_fq *fq, int xon)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p;
+
+	int ret = 0;
+	u8 res;
+	u8 myverb;
+
+	if ((fq->state == qman_fq_state_oos) ||
+	    (fq->state == qman_fq_state_retired) ||
+		(fq->state == qman_fq_state_parked))
+		return -EINVAL;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
+		return -EINVAL;
+#endif
+	/* Issue an ALTER_FQXON or ALTER_FQXOFF management command */
+	p = get_affine_portal();
+	FQLOCK(fq);
+	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
+		     (fq->state == qman_fq_state_parked) ||
+			(fq->state == qman_fq_state_oos) ||
+			(fq->state == qman_fq_state_retired))) {
+		ret = -EBUSY;
+		goto out;
+	}
+	mcc = qm_mc_start(&p->p);
+	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
+	mcc->alterfq.count = 0;
+	myverb = xon ? QM_MCC_VERB_ALTER_FQXON : QM_MCC_VERB_ALTER_FQXOFF;
+
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK) {
+		ret = -EIO;
+		goto out;
+	}
+out:
+	FQUNLOCK(fq);
+	return ret;
+}
+
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+	res = mcr->result;
+	if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	*fqd = mcr->queryfq.fqd;
+	hw_fqd_to_cpu(fqd);
+	return 0;
+}
+
+int qman_query_fq_has_pkts(struct qman_fq *fq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	int ret = 0;
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		ret = !!mcr->queryfq_np.frm_cnt;
+	return ret;
+}
+
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		*np = mcr->queryfq_np;
+		np->fqd_link = be24_to_cpu(np->fqd_link);
+		np->odp_seq = be16_to_cpu(np->odp_seq);
+		np->orp_nesn = be16_to_cpu(np->orp_nesn);
+		np->orp_ea_hseq  = be16_to_cpu(np->orp_ea_hseq);
+		np->orp_ea_tseq  = be16_to_cpu(np->orp_ea_tseq);
+		np->orp_ea_hptr = be24_to_cpu(np->orp_ea_hptr);
+		np->orp_ea_tptr = be24_to_cpu(np->orp_ea_tptr);
+		np->pfdr_hptr = be24_to_cpu(np->pfdr_hptr);
+		np->pfdr_tptr = be24_to_cpu(np->pfdr_tptr);
+		np->ics_surp = be16_to_cpu(np->ics_surp);
+		np->byte_cnt = be32_to_cpu(np->byte_cnt);
+		np->frm_cnt = be24_to_cpu(np->frm_cnt);
+		np->ra1_sfdr = be16_to_cpu(np->ra1_sfdr);
+		np->ra2_sfdr = be16_to_cpu(np->ra2_sfdr);
+		np->od1_sfdr = be16_to_cpu(np->od1_sfdr);
+		np->od2_sfdr = be16_to_cpu(np->od2_sfdr);
+		np->od3_sfdr = be16_to_cpu(np->od3_sfdr);
+	}
+	if (res == QM_MCR_RESULT_ERR_FQID)
+		return -ERANGE;
+	else if (res != QM_MCR_RESULT_OK)
+		return -EIO;
+	return 0;
+}
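+
+/*
+ * Example (illustrative only): reading the backlog of a frame queue from
+ * the byteswapped query results above:
+ *
+ *	struct qm_mcr_queryfq_np np;
+ *
+ *	if (!qman_query_fq_np(fq, &np))
+ *		pr_info("FQ backlog: %u frames, %u bytes\n",
+ *			np.frm_cnt, np.byte_cnt);
+ */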
+
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res, myverb;
+
+	myverb = (query_dedicated) ? QM_MCR_VERB_QUERYWQ_DEDICATED :
+				 QM_MCR_VERB_QUERYWQ;
+	mcc = qm_mc_start(&p->p);
+	mcc->querywq.channel.id = cpu_to_be16(wq->channel.id);
+	qm_mc_commit(&p->p, myverb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK) {
+		int i, array_len;
+
+		wq->channel.id = be16_to_cpu(mcr->querywq.channel.id);
+		array_len = ARRAY_SIZE(mcr->querywq.wq_len);
+		for (i = 0; i < array_len; i++)
+			wq->wq_len[i] = be32_to_cpu(mcr->querywq.wq_len[i]);
+	}
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERYWQ failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
+		       struct qm_mcr_cgrtestwrite *result)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->cgrtestwrite.cgid = cgr->cgrid;
+	mcc->cgrtestwrite.i_bcnt_hi = (u8)(i_bcnt >> 32);
+	mcc->cgrtestwrite.i_bcnt_lo = (u32)i_bcnt;
+	qm_mc_commit(&p->p, QM_MCC_VERB_CGRTESTWRITE);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_CGRTESTWRITE);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*result = mcr->cgrtestwrite;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("CGR TEST WRITE failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	return 0;
+}
+
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	mcc = qm_mc_start(&p->p);
+	mcc->querycgr.cgid = cgr->cgrid;
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCGR);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYCGR);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*cgrd = mcr->querycgr;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CGR failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	cgrd->cgr.wr_parm_g.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_g.word);
+	cgrd->cgr.wr_parm_y.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_y.word);
+	cgrd->cgr.wr_parm_r.word =
+		be32_to_cpu(cgrd->cgr.wr_parm_r.word);
+	cgrd->cgr.cscn_targ =  be32_to_cpu(cgrd->cgr.cscn_targ);
+	cgrd->cgr.__cs_thres = be16_to_cpu(cgrd->cgr.__cs_thres);
+	for (i = 0; i < ARRAY_SIZE(cgrd->cscn_targ_swp); i++)
+		cgrd->cscn_targ_swp[i] =
+			be32_to_cpu(cgrd->cscn_targ_swp[i]);
+	return 0;
+}
+
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion)
+{
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+	u8 res;
+	unsigned int i;
+
+	qm_mc_start(&p->p);
+	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			QM_MCC_VERB_QUERYCONGESTION);
+	res = mcr->result;
+	if (res == QM_MCR_RESULT_OK)
+		*congestion = mcr->querycongestion;
+	if (res != QM_MCR_RESULT_OK) {
+		pr_err("QUERY_CONGESTION failed: %s\n", mcr_result_str(res));
+		return -EIO;
+	}
+	for (i = 0; i < ARRAY_SIZE(congestion->state.state); i++)
+		congestion->state.state[i] =
+			be32_to_cpu(congestion->state.state[i]);
+	return 0;
+}
+
+int qman_set_vdq(struct qman_fq *fq, u16 num)
+{
+	struct qman_portal *p = get_affine_portal();
+	uint32_t vdqcr;
+	int ret = -EBUSY;
+
+	vdqcr = QM_VDQCR_EXACT;
+	vdqcr |= QM_VDQCR_NUMFRAMES_SET(num);
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired)) {
+		ret = -EINVAL;
+		goto out;
+	}
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+		ret = -EBUSY;
+		goto out;
+	}
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+			goto escape;
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (!ret)
+		qm_dqrr_vdqcr_set(&p->p, vdqcr);
+
+out:
+	return ret;
+}
+
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
+			  u32 vdqcr)
+{
+	struct qman_portal *p;
+	int ret = -EBUSY;
+
+	if ((fq->state != qman_fq_state_parked) &&
+	    (fq->state != qman_fq_state_retired))
+		return -EINVAL;
+	if (vdqcr & QM_VDQCR_FQID_MASK)
+		return -EINVAL;
+	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+		return -EBUSY;
+	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
+
+	p = get_affine_portal();
+
+	if (!p->vdqcr_owned) {
+		FQLOCK(fq);
+		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+			goto escape;
+		fq_set(fq, QMAN_FQ_STATE_VDQCR);
+		FQUNLOCK(fq);
+		p->vdqcr_owned = fq;
+		ret = 0;
+	}
+escape:
+	if (ret)
+		return ret;
+
+	/* VDQCR is set */
+	qm_dqrr_vdqcr_set(&p->p, vdqcr);
+	return 0;
+}
+
+static noinline void update_eqcr_ci(struct qman_portal *p, u8 avail)
+{
+	if (avail)
+		qm_eqcr_cce_prefetch(&p->p);
+	else
+		qm_eqcr_cce_update(&p->p);
+}
+
+int qman_eqcr_is_empty(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	u8 avail;
+
+	update_eqcr_ci(p, 0);
+	avail = qm_eqcr_get_fill(&p->p);
+	return (avail == 0);
+}
+
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine)
+{
+	if (affine) {
+		struct qman_portal *p = get_affine_portal();
+
+		p->cb_dc_ern = handler;
+	} else
+		cb_dc_ern = handler;
+}
+
+static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
+					struct qman_fq *fq,
+					const struct qm_fd *fd,
+					u32 flags)
+{
+	struct qm_eqcr_entry *eq;
+	u8 avail;
+
+	if (p->use_eqcr_ci_stashing) {
+		/*
+		 * The stashing case is easy: we only update if we need to,
+		 * in order to try to liberate ring entries.
+		 */
+		eq = qm_eqcr_start_stash(&p->p);
+	} else {
+		/*
+		 * The non-stashing case is harder: we need to prefetch
+		 * ahead of time.
+		 */
+		avail = qm_eqcr_get_avail(&p->p);
+		if (avail < 2)
+			update_eqcr_ci(p, avail);
+		eq = qm_eqcr_start_no_stash(&p->p);
+	}
+
+	if (unlikely(!eq))
+		return NULL;
+
+	if (flags & QMAN_ENQUEUE_FLAG_DCA)
+		eq->dca = QM_EQCR_DCA_ENABLE |
+			((flags & QMAN_ENQUEUE_FLAG_DCA_PARK) ?
+					QM_EQCR_DCA_PARK : 0) |
+			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
+	eq->fqid = cpu_to_be32(fq->fqid);
+	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+	eq->fd = *fd;
+	cpu_to_hw_fd(&eq->fd);
+	return eq;
+}
+
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_CMD_ENQUEUE |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+	/* Factor the above out, it's used by qman_enqueue_orp() too */
+	return 0;
+}
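+
+/*
+ * Illustrative usage (a sketch, not part of the driver): callers typically
+ * retry on -EBUSY, which indicates the EQCR ring is currently full.
+ *
+ *	while (qman_enqueue(fq, &fd, 0) == -EBUSY)
+ *		cpu_relax();
+ */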
+
+int qman_enqueue_multi(struct qman_fq *fq,
+		       const struct qm_fd *fd,
+		       int frames_to_send)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_portal *portal = &p->p;
+
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+
+	u8 i, diff, old_ci, sent = 0;
+
+	/* Update the available entries if no entry is free */
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return 0;
+	}
+
+	/* try to send as many frames as possible */
+	while (eqcr->available && frames_to_send--) {
+		eq->fqid = cpu_to_be32(fq->fqid);
+		eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
+		eq->fd.opaque_addr = fd->opaque_addr;
+		eq->fd.addr = cpu_to_be40(fd->addr);
+		eq->fd.status = cpu_to_be32(fd->status);
+		eq->fd.opaque = cpu_to_be32(fd->opaque);
+
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		eqcr->available--;
+		sent++;
+		fd++;
+	}
+	lwsync();
+
+	/* Set the verb byte of each sent entry first, so that all the
+	 * cachelines can then be flushed together below.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		eq->__dont_write_directly__verb =
+			QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
+		prev_eq = eq;
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+		if (unlikely((prev_eq + 1) != eq))
+			eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+	}
+
+	/* Flush all the lines, without any load/store operations between
+	 * the flushes.
+	 */
+	eq = eqcr->cursor;
+	for (i = 0; i < sent; i++) {
+		dcbf(eq);
+		eq = (void *)((unsigned long)(eq + 1) &
+			(~(unsigned long)(QM_EQCR_SIZE << 6)));
+	}
+	/* Update cursor for the next call */
+	eqcr->cursor = eq;
+	return sent;
+}
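+
+/*
+ * Illustrative usage (a sketch only): qman_enqueue_multi() may send fewer
+ * frames than requested when the EQCR fills, so a caller looping over a
+ * batch of 'n' frames might do:
+ *
+ *	int left = n;
+ *
+ *	while (left > 0) {
+ *		int sent = qman_enqueue_multi(fq, &fds[n - left], left);
+ *
+ *		if (!sent)
+ *			cpu_relax();
+ *		left -= sent;
+ *	}
+ */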
+
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum)
+{
+	struct qman_portal *p = get_affine_portal();
+	struct qm_eqcr_entry *eq;
+
+	eq = try_p_eq_start(p, fq, fd, flags);
+	if (!eq)
+		return -EBUSY;
+	/* Process ORP-specifics here */
+	if (flags & QMAN_ENQUEUE_FLAG_NLIS)
+		orp_seqnum |= QM_EQCR_SEQNUM_NLIS;
+	else {
+		orp_seqnum &= ~QM_EQCR_SEQNUM_NLIS;
+		if (flags & QMAN_ENQUEUE_FLAG_NESN)
+			orp_seqnum |= QM_EQCR_SEQNUM_NESN;
+		else
+			/* No need to check for QMAN_ENQUEUE_FLAG_HOLE */
+			orp_seqnum &= ~QM_EQCR_SEQNUM_NESN;
+	}
+	eq->seqnum = cpu_to_be16(orp_seqnum);
+	eq->orp = cpu_to_be32(orp->fqid);
+	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
+	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_ORP |
+		((flags & (QMAN_ENQUEUE_FLAG_HOLE | QMAN_ENQUEUE_FLAG_NESN)) ?
+				0 : QM_EQCR_VERB_CMD_ENQUEUE) |
+		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
+
+	return 0;
+}
+
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	struct qman_portal *p = get_affine_portal();
+
+	u8 res;
+	u8 verb = QM_MCC_VERB_MODIFYCGR;
+
+	mcc = qm_mc_start(&p->p);
+	if (opts)
+		mcc->initcgr = *opts;
+	mcc->initcgr.we_mask = cpu_to_be16(mcc->initcgr.we_mask);
+	mcc->initcgr.cgr.wr_parm_g.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_g.word);
+	mcc->initcgr.cgr.wr_parm_y.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_y.word);
+	mcc->initcgr.cgr.wr_parm_r.word =
+		cpu_to_be32(mcc->initcgr.cgr.wr_parm_r.word);
+	mcc->initcgr.cgr.cscn_targ =  cpu_to_be32(mcc->initcgr.cgr.cscn_targ);
+	mcc->initcgr.cgr.__cs_thres = cpu_to_be16(mcc->initcgr.cgr.__cs_thres);
+
+	mcc->initcgr.cgid = cgr->cgrid;
+	if (flags & QMAN_CGR_FLAG_USE_INIT)
+		verb = QM_MCC_VERB_INITCGR;
+	qm_mc_commit(&p->p, verb);
+	while (!(mcr = qm_mc_result(&p->p)))
+		cpu_relax();
+
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == verb);
+	res = mcr->result;
+	return (res == QM_MCR_RESULT_OK) ? 0 : -EIO;
+}
+
+#define TARG_MASK(n) (0x80000000 >> (n->config->channel - \
+					QM_CHANNEL_SWPORTAL0))
+#define TARG_DCP_MASK(n) (0x80000000 >> (10 + n))
+#define PORTAL_IDX(n) (n->config->channel - QM_CHANNEL_SWPORTAL0)
+
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret;
+	struct qman_portal *p;
+
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	p = get_affine_portal();
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	cgr->chan = p->config->channel;
+	spin_lock(&p->cgr_lock);
+
+	/* if no opts specified, just add it to the list */
+	if (!opts)
+		goto add_list;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		goto release_lock;
+	if (opts)
+		local_opts = *opts;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+			QM_CGR_TARG_UDP_CTRL_WRITE_BIT | PORTAL_IDX(p);
+	else
+		/* Overwrite TARG */
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+							TARG_MASK(p);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT, &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		goto release_lock;
+add_list:
+	list_add(&cgr->node, &p->cgr_cbs);
+
+	/* Check if the newly added object needs its callback invoked */
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret) {
+		/* we can't go back, so proceed and return success, but scream
+		 * and wail to the log file.
+		 */
+		pr_crit("CGR HW state partially modified\n");
+		ret = 0;
+		goto release_lock;
+	}
+	if (cgr->cb && cgr_state.cgr.cscn_en && qman_cgrs_get(&p->cgrs[1],
+							      cgr->cgrid))
+		cgr->cb(p, cgr, 1);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+	return ret;
+}
+
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts)
+{
+	struct qm_mcc_initcgr local_opts;
+	struct qm_mcr_querycgr cgr_state;
+	int ret;
+
+	if ((qman_ip_rev & 0xFF00) < QMAN_REV30) {
+		pr_warn("QMan version doesn't support CSCN => DCP portal\n");
+		return -EINVAL;
+	}
+	/* We have to check that the provided CGRID is within the limits of the
+	 * data-structures, for obvious reasons. However we'll let h/w take
+	 * care of determining whether it's within the limits of what exists on
+	 * the SoC.
+	 */
+	if (cgr->cgrid >= __CGR_NUM)
+		return -EINVAL;
+
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)
+		return ret;
+
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	if (opts)
+		local_opts = *opts;
+
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl =
+				QM_CGR_TARG_UDP_CTRL_WRITE_BIT |
+				QM_CGR_TARG_UDP_CTRL_DCP | dcp_portal;
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
+					TARG_DCP_MASK(dcp_portal);
+	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
+
+	/* send init if flags indicate so */
+	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
+		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT,
+				      &local_opts);
+	else
+		ret = qman_modify_cgr(cgr, 0, &local_opts);
+
+	return ret;
+}
+
+int qman_delete_cgr(struct qman_cgr *cgr)
+{
+	struct qm_mcr_querycgr cgr_state;
+	struct qm_mcc_initcgr local_opts;
+	int ret = 0;
+	struct qman_cgr *i;
+	struct qman_portal *p = get_affine_portal();
+
+	if (cgr->chan != p->config->channel) {
+		pr_crit("Attempting to delete cgr from different portal than"
+			" it was create: create 0x%x, delete 0x%x\n",
+			cgr->chan, p->config->channel);
+		ret = -EINVAL;
+		goto put_portal;
+	}
+	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+	spin_lock(&p->cgr_lock);
+	list_del(&cgr->node);
+	/*
+	 * If there are no other CGR objects for this CGRID in the list,
+	 * update CSCN_TARG accordingly
+	 */
+	list_for_each_entry(i, &p->cgr_cbs, node)
+		if ((i->cgrid == cgr->cgrid) && i->cb)
+			goto release_lock;
+	ret = qman_query_cgr(cgr, &cgr_state);
+	if (ret)  {
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+		goto release_lock;
+	}
+	/* Overwrite TARG */
+	local_opts.we_mask = QM_CGR_WE_CSCN_TARG;
+	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
+		local_opts.cgr.cscn_targ_upd_ctrl = PORTAL_IDX(p);
+	else
+		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ &
+							 ~(TARG_MASK(p));
+	ret = qman_modify_cgr(cgr, 0, &local_opts);
+	if (ret)
+		/* add back to the list */
+		list_add(&cgr->node, &p->cgr_cbs);
+release_lock:
+	spin_unlock(&p->cgr_lock);
+put_portal:
+	return ret;
+}
+
+int qman_shutdown_fq(u32 fqid)
+{
+	struct qman_portal *p;
+	struct qm_portal *low_p;
+	struct qm_mc_command *mcc;
+	struct qm_mc_result *mcr;
+	u8 state;
+	int orl_empty, fq_empty, drain = 0;
+	u32 result;
+	u32 channel, wq;
+	u16 dest_wq;
+
+	p = get_affine_portal();
+	low_p = &p->p;
+
+	/* Determine the state of the FQID */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ_NP);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+	state = mcr->queryfq_np.state & QM_MCR_NP_STATE_MASK;
+	if (state == QM_MCR_NP_STATE_OOS)
+		return 0; /* Already OOS, no need to do any more checks */
+
+	/* Query which channel the FQ is using */
+	mcc = qm_mc_start(low_p);
+	mcc->queryfq.fqid = cpu_to_be32(fqid);
+	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ);
+	while (!(mcr = qm_mc_result(low_p)))
+		cpu_relax();
+	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
+
+	/* Need to store these since the MCR gets reused */
+	dest_wq = be16_to_cpu(mcr->queryfq.fqd.dest_wq);
+	channel = dest_wq & 0x7;
+	wq = dest_wq >> 3;
+
+	switch (state) {
+	case QM_MCR_NP_STATE_TEN_SCHED:
+	case QM_MCR_NP_STATE_TRU_SCHED:
+	case QM_MCR_NP_STATE_ACTIVE:
+	case QM_MCR_NP_STATE_PARKED:
+		orl_empty = 0;
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_RETIRE);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_RETIRE);
+		result = mcr->result; /* Make a copy as we reuse MCR below */
+
+		if (result == QM_MCR_RESULT_PENDING) {
+			/* Need to wait for the FQRN in the message ring, which
+			 * will only occur once the FQ has been drained. In
+			 * order for the FQ to drain, the portal needs to be
+			 * set to dequeue from the channel the FQ is scheduled
+			 * on.
+			 */
+			const struct qm_mr_entry *msg;
+			const struct qm_dqrr_entry *dqrr = NULL;
+			int found_fqrn = 0;
+			__maybe_unused u16 dequeue_wq = 0;
+
+			/* Flag that we need to drain FQ */
+			drain = 1;
+
+			if (channel >= qm_channel_pool1 &&
+			    channel < (u16)(qm_channel_pool1 + 15)) {
+				/* Pool channel, enable the bit in the portal */
+				dequeue_wq = (channel -
+					      qm_channel_pool1 + 1) << 4 | wq;
+			} else if (channel < qm_channel_pool1) {
+				/* Dedicated channel */
+				dequeue_wq = wq;
+			} else {
+				pr_info("Cannot recover FQ 0x%x,"
+					" it is scheduled on channel 0x%x",
+					fqid, channel);
+				return -EBUSY;
+			}
+			/* Set the sdqcr to drain this channel */
+			if (channel < qm_channel_pool1)
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+					  QM_SDQCR_CHANNELS_DEDICATED);
+			else
+				qm_dqrr_sdqcr_set(low_p,
+						  QM_SDQCR_TYPE_ACTIVE |
+						  QM_SDQCR_CHANNELS_POOL_CONV
+						  (channel));
+			while (!found_fqrn) {
+				/* Keep draining DQRR while checking the MR */
+				qm_dqrr_pvb_update(low_p);
+				dqrr = qm_dqrr_current(low_p);
+				while (dqrr) {
+					qm_dqrr_cdc_consume_1ptr(
+						low_p, dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+				/* Process message ring too */
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+				while (msg) {
+					if ((msg->verb &
+					     QM_MR_VERB_TYPE_MASK)
+					    == QM_MR_VERB_FQRN)
+						found_fqrn = 1;
+					qm_mr_next(low_p);
+					qm_mr_cci_consume_to_current(low_p);
+					qm_mr_pvb_update(low_p);
+					msg = qm_mr_current(low_p);
+				}
+				cpu_relax();
+			}
+		}
+		if (result != QM_MCR_RESULT_OK &&
+		    result !=  QM_MCR_RESULT_PENDING) {
+			/* error */
+			pr_err("qman_retire_fq failed on FQ 0x%x,"
+			       " result=0x%x\n", fqid, result);
+			return -1;
+		}
+		if (!(mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)) {
+			/* ORL had no entries, no need to wait until the
+			 * ERNs come in.
+			 */
+			orl_empty = 1;
+		}
+		/* Retirement succeeded, check to see if FQ needs
+		 * to be drained.
+		 */
+		if (drain || mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY) {
+			/* FQ is Not Empty, drain using volatile DQ commands */
+			fq_empty = 0;
+			do {
+				const struct qm_dqrr_entry *dqrr = NULL;
+				u32 vdqcr = fqid | QM_VDQCR_NUMFRAMES_SET(3);
+
+				qm_dqrr_vdqcr_set(low_p, vdqcr);
+
+				/* Wait for a dequeue to occur */
+				while (dqrr == NULL) {
+					qm_dqrr_pvb_update(low_p);
+					dqrr = qm_dqrr_current(low_p);
+					if (!dqrr)
+						cpu_relax();
+				}
+				/* Process the dequeues, making sure to
+				 * empty the ring completely.
+				 */
+				while (dqrr) {
+					if (dqrr->fqid == fqid &&
+					    dqrr->stat & QM_DQRR_STAT_FQ_EMPTY)
+						fq_empty = 1;
+					qm_dqrr_cdc_consume_1ptr(low_p,
+								 dqrr, 0);
+					qm_dqrr_pvb_update(low_p);
+					qm_dqrr_next(low_p);
+					dqrr = qm_dqrr_current(low_p);
+				}
+			} while (fq_empty == 0);
+		}
+		qm_dqrr_sdqcr_set(low_p, 0);
+
+		/* Wait for the ORL to have been completely drained */
+		while (orl_empty == 0) {
+			const struct qm_mr_entry *msg;
+
+			qm_mr_pvb_update(low_p);
+			msg = qm_mr_current(low_p);
+			while (msg) {
+				if ((msg->verb & QM_MR_VERB_TYPE_MASK) ==
+				    QM_MR_VERB_FQRL)
+					orl_empty = 1;
+				qm_mr_next(low_p);
+				qm_mr_cci_consume_to_current(low_p);
+				qm_mr_pvb_update(low_p);
+				msg = qm_mr_current(low_p);
+			}
+			cpu_relax();
+		}
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result != QM_MCR_RESULT_OK) {
+			pr_err(
+			"OOS after drain failed on FQID 0x%x, result 0x%x\n",
+			       fqid, mcr->result);
+			return -1;
+		}
+		return 0;
+
+	case QM_MCR_NP_STATE_RETIRED:
+		/* Send OOS Command */
+		mcc = qm_mc_start(low_p);
+		mcc->alterfq.fqid = cpu_to_be32(fqid);
+		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
+		while (!(mcr = qm_mc_result(low_p)))
+			cpu_relax();
+		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
+			   QM_MCR_VERB_ALTER_OOS);
+		if (mcr->result) {
+			pr_err("OOS Failed on FQID 0x%x\n", fqid);
+			return -1;
+		}
+		return 0;
+
+	}
+	return -1;
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h
new file mode 100644
index 0000000..ee78d31
--- /dev/null
+++ b/drivers/bus/dpaa/base/qbman/qman.h
@@ -0,0 +1,888 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ *   BSD LICENSE
+ *
+ * Copyright 2008-2016 Freescale Semiconductor Inc.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *   GPL LICENSE SUMMARY
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qman_priv.h"
+
+/***************************/
+/* Portal register assists */
+/***************************/
+#define QM_REG_EQCR_PI_CINH	0x3000
+#define QM_REG_EQCR_CI_CINH	0x3040
+#define QM_REG_EQCR_ITR		0x3080
+#define QM_REG_DQRR_PI_CINH	0x3100
+#define QM_REG_DQRR_CI_CINH	0x3140
+#define QM_REG_DQRR_ITR		0x3180
+#define QM_REG_DQRR_DCAP	0x31C0
+#define QM_REG_DQRR_SDQCR	0x3200
+#define QM_REG_DQRR_VDQCR	0x3240
+#define QM_REG_DQRR_PDQCR	0x3280
+#define QM_REG_MR_PI_CINH	0x3300
+#define QM_REG_MR_CI_CINH	0x3340
+#define QM_REG_MR_ITR		0x3380
+#define QM_REG_CFG		0x3500
+#define QM_REG_ISR		0x3600
+#define QM_REG_IIR              0x36C0
+#define QM_REG_ITPR		0x3740
+
+/* Cache-enabled register offsets */
+#define QM_CL_EQCR		0x0000
+#define QM_CL_DQRR		0x1000
+#define QM_CL_MR		0x2000
+#define QM_CL_EQCR_PI_CENA	0x3000
+#define QM_CL_EQCR_CI_CENA	0x3040
+#define QM_CL_DQRR_PI_CENA	0x3100
+#define QM_CL_DQRR_CI_CENA	0x3140
+#define QM_CL_MR_PI_CENA	0x3300
+#define QM_CL_MR_CI_CENA	0x3340
+#define QM_CL_CR		0x3800
+#define QM_CL_RR0		0x3900
+#define QM_CL_RR1		0x3940
+
+/* BTW, the drivers (and h/w programming model) already obtain the required
+ * synchronisation for portal accesses via lwsync(), hwsync(), and
+ * data-dependencies. Use of barrier()s or other order-preserving primitives
+ * simply degrades performance. Hence the use of the __raw_*() interfaces,
+ * which simply ensure that the compiler treats the portal registers as
+ * volatile (ie. non-coherent).
+ */
+
+/* Cache-inhibited register access. */
+#define __qm_in(qm, o)		be32_to_cpu(__raw_readl((qm)->ci  + (o)))
+#define __qm_out(qm, o, val)	__raw_writel((cpu_to_be32(val)), \
+					     (qm)->ci + (o))
+#define qm_in(reg)		__qm_in(&portal->addr, QM_REG_##reg)
+#define qm_out(reg, val)	__qm_out(&portal->addr, QM_REG_##reg, val)
+
+/* Cache-enabled (index) register access */
+#define __qm_cl_touch_ro(qm, o) dcbt_ro((qm)->ce + (o))
+#define __qm_cl_touch_rw(qm, o) dcbt_rw((qm)->ce + (o))
+#define __qm_cl_in(qm, o)	be32_to_cpu(__raw_readl((qm)->ce + (o)))
+#define __qm_cl_out(qm, o, val) \
+	do { \
+		u32 *__tmpclout = (qm)->ce + (o); \
+		__raw_writel(cpu_to_be32(val), __tmpclout); \
+		dcbf(__tmpclout); \
+	} while (0)
+#define __qm_cl_invalidate(qm, o) dccivac((qm)->ce + (o))
+#define qm_cl_touch_ro(reg) __qm_cl_touch_ro(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_touch_rw(reg) __qm_cl_touch_rw(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_in(reg)	    __qm_cl_in(&portal->addr, QM_CL_##reg##_CENA)
+#define qm_cl_out(reg, val) __qm_cl_out(&portal->addr, QM_CL_##reg##_CENA, val)
+#define qm_cl_invalidate(reg)\
+	__qm_cl_invalidate(&portal->addr, QM_CL_##reg##_CENA)
+
+/* Cache-enabled ring access */
+#define qm_cl(base, idx)	((void *)base + ((idx) << 6))
+
+/* Cyclic helper for rings. FIXME: once we are able to do fine-grained perf
+ * analysis, look at using the "extra" bit in the ring index registers to avoid
+ * cyclic issues.
+ */
+static inline u8 qm_cyc_diff(u8 ringsize, u8 first, u8 last)
+{
+	/* 'first' is included, 'last' is excluded */
+	if (first <= last)
+		return last - first;
+	return ringsize + last - first;
+}
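+
+/*
+ * Worked example (illustration only): with ringsize = 8, first = 6 and
+ * last = 2, the index has wrapped, so the distance is 8 + 2 - 6 = 4 entries.
+ */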
+
+/* Portal modes.
+ *   Enum types:
+ *     pmode == production mode
+ *     cmode == consumption mode
+ *     dmode == h/w dequeue mode.
+ *   Enum values use 3 letter codes. The first letter matches the portal mode,
+ *   the remaining two letters indicate:
+ *     ci == cache-inhibited portal register
+ *     ce == cache-enabled portal register
+ *     vb == in-band valid-bit (cache-enabled)
+ *     dc == DCA (Discrete Consumption Acknowledgment), DQRR-only
+ *   As for "enum qm_dqrr_dmode", it should be self-explanatory.
+ */
+enum qm_eqcr_pmode {		/* matches QCSP_CFG::EPM */
+	qm_eqcr_pci = 0,	/* PI index, cache-inhibited */
+	qm_eqcr_pce = 1,	/* PI index, cache-enabled */
+	qm_eqcr_pvb = 2		/* valid-bit */
+};
+
+enum qm_dqrr_dmode {		/* matches QCSP_CFG::DP */
+	qm_dqrr_dpush = 0,	/* SDQCR  + VDQCR */
+	qm_dqrr_dpull = 1	/* PDQCR */
+};
+
+enum qm_dqrr_pmode {		/* s/w-only */
+	qm_dqrr_pci,		/* reads DQRR_PI_CINH */
+	qm_dqrr_pce,		/* reads DQRR_PI_CENA */
+	qm_dqrr_pvb		/* reads valid-bit */
+};
+
+enum qm_dqrr_cmode {		/* matches QCSP_CFG::DCM */
+	qm_dqrr_cci = 0,	/* CI index, cache-inhibited */
+	qm_dqrr_cce = 1,	/* CI index, cache-enabled */
+	qm_dqrr_cdc = 2		/* Discrete Consumption Acknowledgment */
+};
+
+enum qm_mr_pmode {		/* s/w-only */
+	qm_mr_pci,		/* reads MR_PI_CINH */
+	qm_mr_pce,		/* reads MR_PI_CENA */
+	qm_mr_pvb		/* reads valid-bit */
+};
+
+enum qm_mr_cmode {		/* matches QCSP_CFG::MM */
+	qm_mr_cci = 0,		/* CI index, cache-inhibited */
+	qm_mr_cce = 1		/* CI index, cache-enabled */
+};
+
+/* ------------------------- */
+/* --- Portal structures --- */
+
+#define QM_EQCR_SIZE		8
+#define QM_DQRR_SIZE		16
+#define QM_MR_SIZE		8
+
+struct qm_eqcr {
+	struct qm_eqcr_entry *ring, *cursor;
+	u8 ci, available, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	u32 busy;
+	enum qm_eqcr_pmode pmode;
+#endif
+};
+
+struct qm_dqrr {
+	const struct qm_dqrr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum qm_dqrr_dmode dmode;
+	enum qm_dqrr_pmode pmode;
+	enum qm_dqrr_cmode cmode;
+#endif
+};
+
+struct qm_mr {
+	const struct qm_mr_entry *ring, *cursor;
+	u8 pi, ci, fill, ithresh, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum qm_mr_pmode pmode;
+	enum qm_mr_cmode cmode;
+#endif
+};
+
+struct qm_mc {
+	struct qm_mc_command *cr;
+	struct qm_mc_result *rr;
+	u8 rridx, vbit;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	enum {
+		/* Can be _mc_start()ed */
+		qman_mc_idle,
+		/* Can be _mc_commit()ed or _mc_abort()ed */
+		qman_mc_user,
+		/* Can only be _mc_retry()ed */
+		qman_mc_hw
+	} state;
+#endif
+};
+
+#define QM_PORTAL_ALIGNMENT ____cacheline_aligned
+
+struct qm_addr {
+	void __iomem *ce;	/* cache-enabled */
+	void __iomem *ci;	/* cache-inhibited */
+};
+
+struct qm_portal {
+	struct qm_addr addr;
+	struct qm_eqcr eqcr;
+	struct qm_dqrr dqrr;
+	struct qm_mr mr;
+	struct qm_mc mc;
+} QM_PORTAL_ALIGNMENT;
+
+/* Bit-wise logic to wrap a ring pointer by clearing the "carry bit" */
+#define EQCR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_EQCR_SIZE << 6)))
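+
+/*
+ * For illustration: QM_EQCR_SIZE (8) entries of 64 bytes each span
+ * 8 << 6 = 0x200 bytes, so a cursor incremented off the end of a ring
+ * (which is 0x200-aligned) sets the "carry bit" (bit 9); masking that bit
+ * off wraps the pointer back to the ring base.
+ */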
+
+extern dma_addr_t rte_mem_virt2phy(const void *addr);
+
+/* Bit-wise logic to convert a ring pointer to a ring index */
+static inline u8 EQCR_PTR2IDX(struct qm_eqcr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_EQCR_SIZE - 1);
+}
+
+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */
+static inline void EQCR_INC(struct qm_eqcr *eqcr)
+{
+	/* NB: this is odd-looking, but experiments show that it generates fast
+	 * code with essentially no branching overheads. We increment to the
+	 * next EQCR pointer and handle overflow and 'vbit'.
+	 */
+	struct qm_eqcr_entry *partial = eqcr->cursor + 1;
+
+	eqcr->cursor = EQCR_CARRYCLEAR(partial);
+	if (partial != eqcr->cursor)
+		eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_no_stash(struct qm_portal
+								 *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available)
+		return NULL;
+
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 1;
+#endif
+
+	return eqcr->cursor;
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_start_stash(struct qm_portal
+								*portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci;
+
+	DPAA_ASSERT(!eqcr->busy);
+	if (!eqcr->available) {
+		old_ci = eqcr->ci;
+		eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+		diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+		eqcr->available += diff;
+		if (!diff)
+			return NULL;
+	}
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 1;
+#endif
+	return eqcr->cursor;
+}
+
+static inline void qm_eqcr_abort(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline struct qm_eqcr_entry *qm_eqcr_pend_and_next(
+					struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->busy);
+	DPAA_ASSERT(eqcr->pmode != qm_eqcr_pvb);
+	if (eqcr->available == 1)
+		return NULL;
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcr->cursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	return eqcr->cursor;
+}
+
+#define EQCR_COMMIT_CHECKS(eqcr) \
+do { \
+	DPAA_ASSERT(eqcr->busy); \
+	DPAA_ASSERT(eqcr->cursor->orp == (eqcr->cursor->orp & 0x00ffffff)); \
+	DPAA_ASSERT(eqcr->cursor->fqid == (eqcr->cursor->fqid & 0x00ffffff)); \
+} while (0)
+
+static inline void qm_eqcr_pci_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pci);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	hwsync();
+	qm_out(EQCR_PI_CINH, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	qm_cl_invalidate(EQCR_PI);
+	qm_cl_touch_rw(EQCR_PI);
+}
+
+static inline void qm_eqcr_pce_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);
+	eqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	EQCR_INC(eqcr);
+	eqcr->available--;
+	dcbf(eqcr->cursor);
+	lwsync();
+	qm_cl_out(EQCR_PI, EQCR_PTR2IDX(eqcr->cursor));
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline void qm_eqcr_pvb_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	struct qm_eqcr_entry *eqcursor;
+
+	EQCR_COMMIT_CHECKS(eqcr);
+	DPAA_ASSERT(eqcr->pmode == qm_eqcr_pvb);
+	lwsync();
+	eqcursor = eqcr->cursor;
+	eqcursor->__dont_write_directly__verb = myverb | eqcr->vbit;
+	dcbf(eqcursor);
+	EQCR_INC(eqcr);
+	eqcr->available--;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	eqcr->busy = 0;
+#endif
+}
+
+static inline u8 qm_eqcr_cci_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline void qm_eqcr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	qm_cl_touch_ro(EQCR_CI);
+}
+
+static inline u8 qm_eqcr_cce_update(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+	u8 diff, old_ci = eqcr->ci;
+
+	eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+	qm_cl_invalidate(EQCR_CI);
+	diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+	eqcr->available += diff;
+	return diff;
+}
+
+static inline u8 qm_eqcr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->ithresh;
+}
+
+static inline void qm_eqcr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	eqcr->ithresh = ithresh;
+	qm_out(EQCR_ITR, ithresh);
+}
+
+static inline u8 qm_eqcr_get_avail(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return eqcr->available;
+}
+
+static inline u8 qm_eqcr_get_fill(struct qm_portal *portal)
+{
+	register struct qm_eqcr *eqcr = &portal->eqcr;
+
+	return QM_EQCR_SIZE - 1 - eqcr->available;
+}
+
+#define DQRR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_DQRR_SIZE << 6)))
+
+static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);
+}
+
+static inline const struct qm_dqrr_entry *DQRR_INC(
+						const struct qm_dqrr_entry *e)
+{
+	return DQRR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_dqrr_set_maxfill(struct qm_portal *portal, u8 mf)
+{
+	qm_out(CFG, (qm_in(CFG) & 0xff0fffff) |
+		((mf & (QM_DQRR_SIZE - 1)) << 20));
+}
+
+static inline const struct qm_dqrr_entry *qm_dqrr_current(
+						struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	if (!dqrr->fill)
+		return NULL;
+	return dqrr->cursor;
+}
+
+static inline u8 qm_dqrr_cursor(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return DQRR_PTR2IDX(dqrr->cursor);
+}
+
+static inline u8 qm_dqrr_next(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->fill);
+	dqrr->cursor = DQRR_INC(dqrr->cursor);
+	return --dqrr->fill;
+}
+
+static inline u8 qm_dqrr_pci_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pci);
+	dqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	qm_cl_invalidate(DQRR_PI);
+	qm_cl_touch_ro(DQRR_PI);
+}
+
+static inline u8 qm_dqrr_pce_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 diff, old_pi = dqrr->pi;
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);
+	dqrr->pi = qm_cl_in(DQRR_PI) & (QM_DQRR_SIZE - 1);
+	diff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);
+	dqrr->fill += diff;
+	return diff;
+}
+
+static inline void qm_dqrr_pvb_update(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+	const struct qm_dqrr_entry *res = qm_cl(dqrr->ring, dqrr->pi);
+
+	DPAA_ASSERT(dqrr->pmode == qm_dqrr_pvb);
+	/* when accessing 'verb', use __raw_readb() to ensure that compiler
+	 * inlining doesn't try to optimise out "excess reads".
+	 */
+	if ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
+		dqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1);
+		if (!dqrr->pi)
+			dqrr->vbit ^= QM_DQRR_VERB_VBIT;
+		dqrr->fill++;
+	}
+}
+
+static inline void qm_dqrr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_out(DQRR_CI_CINH, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_rw(DQRR_CI);
+}
+
+static inline void qm_dqrr_cce_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cce_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);
+	dqrr->ci = DQRR_PTR2IDX(dqrr->cursor);
+	qm_cl_out(DQRR_CI, dqrr->ci);
+}
+
+static inline void qm_dqrr_cdc_consume_1(struct qm_portal *portal, u8 idx,
+					 int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |	/* S */
+		((park ? 1 : 0) << 6) |	/* PK */
+		idx);			/* DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_1ptr(struct qm_portal *portal,
+					    const struct qm_dqrr_entry *dq,
+					int park)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+	u8 idx = DQRR_PTR2IDX(dq);
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	DPAA_ASSERT(idx < QM_DQRR_SIZE);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* DQRR_DCAP::S */
+		((park ? 1 : 0) << 6) |		/* DQRR_DCAP::PK */
+		idx);				/* DQRR_DCAP::DCAP_CI */
+}
+
+static inline void qm_dqrr_cdc_consume_n(struct qm_portal *portal, u16 bitmask)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (1 << 8) |		/* DQRR_DCAP::S */
+		((u32)bitmask << 16));		/* DQRR_DCAP::DCAP_CI */
+	dqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+	dqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);
+}
+
+static inline u8 qm_dqrr_cdc_cci(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);
+}
+
+static inline void qm_dqrr_cdc_cce_prefetch(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	qm_cl_invalidate(DQRR_CI);
+	qm_cl_touch_ro(DQRR_CI);
+}
+
+static inline u8 qm_dqrr_cdc_cce(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);
+	return qm_cl_in(DQRR_CI) & (QM_DQRR_SIZE - 1);
+}
+
+static inline u8 qm_dqrr_get_ci(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	return dqrr->ci;
+}
+
+static inline void qm_dqrr_park(struct qm_portal *portal, u8 idx)
+{
+	__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		(idx & (QM_DQRR_SIZE - 1)));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_park_current(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	DPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);
+	qm_out(DQRR_DCAP, (0 << 8) |		/* S */
+		(1 << 6) |			/* PK */
+		DQRR_PTR2IDX(dqrr->cursor));	/* DCAP_CI */
+}
+
+static inline void qm_dqrr_sdqcr_set(struct qm_portal *portal, u32 sdqcr)
+{
+	qm_out(DQRR_SDQCR, sdqcr);
+}
+
+static inline u32 qm_dqrr_sdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_SDQCR);
+}
+
+static inline void qm_dqrr_vdqcr_set(struct qm_portal *portal, u32 vdqcr)
+{
+	qm_out(DQRR_VDQCR, vdqcr);
+}
+
+static inline u32 qm_dqrr_vdqcr_get(struct qm_portal *portal)
+{
+	return qm_in(DQRR_VDQCR);
+}
+
+static inline u8 qm_dqrr_get_ithresh(struct qm_portal *portal)
+{
+	register struct qm_dqrr *dqrr = &portal->dqrr;
+
+	return dqrr->ithresh;
+}
+
+static inline void qm_dqrr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(DQRR_ITR, ithresh);
+}
+
+static inline u8 qm_dqrr_get_maxfill(struct qm_portal *portal)
+{
+	return (qm_in(CFG) & 0x00f00000) >> 20;
+}
+
+/* -------------- */
+/* --- MR API --- */
+
+#define MR_CARRYCLEAR(p) \
+	(void *)((unsigned long)(p) & (~(unsigned long)(QM_MR_SIZE << 6)))
+
+static inline u8 MR_PTR2IDX(const struct qm_mr_entry *e)
+{
+	return ((uintptr_t)e >> 6) & (QM_MR_SIZE - 1);
+}
+
+static inline const struct qm_mr_entry *MR_INC(const struct qm_mr_entry *e)
+{
+	return MR_CARRYCLEAR(e + 1);
+}
+
+static inline void qm_mr_finish(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (mr->ci != MR_PTR2IDX(mr->cursor))
+		pr_crit("Ignoring completed MR entries\n");
+}
+
+static inline const struct qm_mr_entry *qm_mr_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	if (!mr->fill)
+		return NULL;
+	return mr->cursor;
+}
+
+static inline u8 qm_mr_next(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->fill);
+	mr->cursor = MR_INC(mr->cursor);
+	return --mr->fill;
+}
+
+static inline void qm_mr_cci_consume(struct qm_portal *portal, u8 num)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = (mr->ci + num) & (QM_MR_SIZE - 1);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_cci_consume_to_current(struct qm_portal *portal)
+{
+	register struct qm_mr *mr = &portal->mr;
+
+	DPAA_ASSERT(mr->cmode == qm_mr_cci);
+	mr->ci = MR_PTR2IDX(mr->cursor);
+	qm_out(MR_CI_CINH, mr->ci);
+}
+
+static inline void qm_mr_set_ithresh(struct qm_portal *portal, u8 ithresh)
+{
+	qm_out(MR_ITR, ithresh);
+}
+
+/* ------------------------------ */
+/* --- Management command API --- */
+static inline int qm_mc_init(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	mc->cr = portal->addr.ce + QM_CL_CR;
+	mc->rr = portal->addr.ce + QM_CL_RR0;
+	mc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &
+			QM_MCC_VERB_VBIT) ?  0 : 1;
+	mc->vbit = mc->rridx ? QM_MCC_VERB_VBIT : 0;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_idle;
+#endif
+	return 0;
+}
+
+static inline void qm_mc_finish(struct qm_portal *portal)
+{
+	__maybe_unused register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	if (mc->state != qman_mc_idle)
+		pr_crit("Losing incomplete MC command\n");
+#endif
+}
+
+static inline struct qm_mc_command *qm_mc_start(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+
+	DPAA_ASSERT(mc->state == qman_mc_idle);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_user;
+#endif
+	dcbz_64(mc->cr);
+	return mc->cr;
+}
+
+static inline void qm_mc_commit(struct qm_portal *portal, u8 myverb)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_user);
+	lwsync();
+	mc->cr->__dont_write_directly__verb = myverb | mc->vbit;
+	dcbf(mc->cr);
+	dcbit_ro(rr);
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_hw;
+#endif
+}
+
+static inline struct qm_mc_result *qm_mc_result(struct qm_portal *portal)
+{
+	register struct qm_mc *mc = &portal->mc;
+	struct qm_mc_result *rr = mc->rr + mc->rridx;
+
+	DPAA_ASSERT(mc->state == qman_mc_hw);
+	/* The inactive response register's verb byte always returns zero until
+	 * its command is submitted and completed. This includes the valid-bit,
+	 * in case you were wondering.
+	 */
+	if (!__raw_readb(&rr->verb)) {
+		dcbit_ro(rr);
+		return NULL;
+	}
+	mc->rridx ^= 1;
+	mc->vbit ^= QM_MCC_VERB_VBIT;
+#ifdef RTE_LIBRTE_DPAA_CHECKING
+	mc->state = qman_mc_idle;
+#endif
+	return rr;
+}
+
+/* Portal interrupt register API */
+static inline void qm_isr_set_iperiod(struct qm_portal *portal, u16 iperiod)
+{
+	qm_out(ITPR, iperiod);
+}
+
+static inline u32 __qm_isr_read(struct qm_portal *portal, enum qm_isr_reg n)
+{
+#if defined(RTE_ARCH_ARM64)
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 6));
+#else
+	return __qm_in(&portal->addr, QM_REG_ISR + (n << 2));
+#endif
+}
+
+static inline void __qm_isr_write(struct qm_portal *portal, enum qm_isr_reg n,
+				  u32 val)
+{
+#if defined(RTE_ARCH_ARM64)
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 6), val);
+#else
+	__qm_out(&portal->addr, QM_REG_ISR + (n << 2), val);
+#endif
+}
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 80dde20..a7faf17 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -66,6 +66,7 @@ static __thread struct dpaa_ioctl_portal_map map = {
 static int fsl_qman_portal_init(uint32_t index, int is_shared)
 {
 	cpu_set_t cpuset;
+	struct qman_portal *portal;
 	int loop, ret;
 	struct dpaa_ioctl_irq_map irq_map;
 
@@ -116,6 +117,14 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 	pcfg.node = NULL;
 	pcfg.irq = fd;
 
+	portal = qman_create_affine_portal(&pcfg, NULL);
+	if (!portal) {
+		pr_err("Qman portal initialisation failed (%d)\n",
+		       pcfg.cpu);
+		process_portal_unmap(&map.addr);
+		return -EBUSY;
+	}
+
 	irq_map.type = dpaa_portal_qman;
 	irq_map.portal_cinh = map.addr.cinh;
 	process_portal_irq_map(fd, &irq_map);
@@ -124,10 +133,13 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 
 static int fsl_qman_portal_finish(void)
 {
+	__maybe_unused const struct qm_portal_config *cfg;
 	int ret;
 
 	process_portal_irq_unmap(fd);
 
+	cfg = qman_destroy_affine_portal();
+	BUG_ON(cfg != &pcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index e9826c2..4ae2ea5 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -44,10 +44,6 @@
 #include "dpaa_sys.h"
 #include <fsl_qman.h>
 
-#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)
-#error "_ARM64 requires _FSL_QMAN_FQ_LOOKUP"
-#endif
-
 /* Congestion Groups */
 /*
  * This wrapper represents a bit-array for the state of the 256 QMan congestion
@@ -201,13 +197,6 @@ void qm_set_liodns(struct qm_portal_config *pcfg);
 int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
 		       struct qm_mcr_cgrtestwrite *result);
 
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
-/* If the fq object pointer is greater than the size of context_b field,
- * than a lookup table is required.
- */
-int qman_setup_fq_lookup_table(size_t num_entries);
-#endif
-
 /*   QMan s/w corenet portal, low-level i/face	 */
 
 /*
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 740ee25..7d9ad00 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -46,15 +46,6 @@ extern "C" {
 
 #include <dpaa_rbtree.h>
 
-/* FQ lookups (turn this on for 64bit user-space) */
-#if (__WORDSIZE == 64)
-#define CONFIG_FSL_QMAN_FQ_LOOKUP
-/* if FQ lookups are supported, this controls the number of initialised,
- * s/w-consumed FQs that can be supported at any one time.
- */
-#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)
-#endif
-
 /* Last updated for v00.800 of the BG */
 
 /* Hardware constants */
@@ -1254,9 +1245,6 @@ struct qman_fq {
 	enum qman_fq_state state;
 	int cgr_groupid;
 	struct rb_node node;
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
-	u32 key;
-#endif
 };
 
 /*
@@ -1275,6 +1263,761 @@ struct qman_cgr {
 	struct list_head node;
 };
 
+/* Flags to qman_create_fq() */
+#define QMAN_FQ_FLAG_NO_ENQUEUE      0x00000001 /* can't enqueue */
+#define QMAN_FQ_FLAG_NO_MODIFY       0x00000002 /* can only enqueue */
+#define QMAN_FQ_FLAG_TO_DCPORTAL     0x00000004 /* consumed by CAAM/PME/Fman */
+#define QMAN_FQ_FLAG_LOCKED          0x00000008 /* multi-core locking */
+#define QMAN_FQ_FLAG_AS_IS           0x00000010 /* query h/w state */
+#define QMAN_FQ_FLAG_DYNAMIC_FQID    0x00000020 /* (de)allocate fqid */
+
+/* Flags to qman_destroy_fq() */
+#define QMAN_FQ_DESTROY_PARKED       0x00000001 /* FQ can be parked or OOS */
+
+/* Flags from qman_fq_state() */
+#define QMAN_FQ_STATE_CHANGING       0x80000000 /* 'state' is changing */
+#define QMAN_FQ_STATE_NE             0x40000000 /* retired FQ isn't empty */
+#define QMAN_FQ_STATE_ORL            0x20000000 /* retired FQ has ORL */
+#define QMAN_FQ_STATE_BLOCKOOS       0xe0000000 /* if any are set, no OOS */
+#define QMAN_FQ_STATE_CGR_EN         0x10000000 /* CGR enabled */
+#define QMAN_FQ_STATE_VDQCR          0x08000000 /* being volatile dequeued */
+
+/* Flags to qman_init_fq() */
+#define QMAN_INITFQ_FLAG_SCHED       0x00000001 /* schedule rather than park */
+#define QMAN_INITFQ_FLAG_LOCAL       0x00000004 /* set dest portal */
+
+/* Flags to qman_enqueue(). NB, the strange numbering is to align with hardware,
+ * bit-wise. (NB: the PME API is sensitive to these precise numberings too, so
+ * any change here should be audited in PME.)
+ */
+#define QMAN_ENQUEUE_FLAG_WATCH_CGR  0x00080000 /* watch congestion state */
+#define QMAN_ENQUEUE_FLAG_DCA        0x00008000 /* perform enqueue-DCA */
+#define QMAN_ENQUEUE_FLAG_DCA_PARK   0x00004000 /* If DCA, requests park */
+#define QMAN_ENQUEUE_FLAG_DCA_PTR(p)		/* If DCA, p is DQRR entry */ \
+		(((u32)(p) << 2) & 0x00000f00)
+#define QMAN_ENQUEUE_FLAG_C_GREEN    0x00000000 /* choose one C_*** flag */
+#define QMAN_ENQUEUE_FLAG_C_YELLOW   0x00000008
+#define QMAN_ENQUEUE_FLAG_C_RED      0x00000010
+#define QMAN_ENQUEUE_FLAG_C_OVERRIDE 0x00000018
+/* For the ORP-specific qman_enqueue_orp() variant;
+ * - this flag indicates "Not Last In Sequence", ie. all but the final fragment
+ *   of a frame.
+ */
+#define QMAN_ENQUEUE_FLAG_NLIS       0x01000000
+/* - this flag performs no enqueue but fills in an ORP sequence number that
+ *   would otherwise block it (eg. if a frame has been dropped).
+ */
+#define QMAN_ENQUEUE_FLAG_HOLE       0x02000000
+/* - this flag performs no enqueue but advances NESN to the given sequence
+ *   number.
+ */
+#define QMAN_ENQUEUE_FLAG_NESN       0x04000000
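+
+/*
+ * ORP usage sketch (illustrative only): enqueue the fragments of a frame in
+ * order, marking all but the final fragment "Not Last In Sequence":
+ *
+ *	qman_enqueue_orp(fq, &frag0, QMAN_ENQUEUE_FLAG_NLIS, orp_fq, seq);
+ *	qman_enqueue_orp(fq, &frag1, QMAN_ENQUEUE_FLAG_NLIS, orp_fq, seq + 1);
+ *	qman_enqueue_orp(fq, &frag2, 0, orp_fq, seq + 2);
+ */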
+
+/* Flags to qman_modify_cgr() */
+#define QMAN_CGR_FLAG_USE_INIT       0x00000001
+#define QMAN_CGR_MODE_FRAME          0x00000001
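+
+/*
+ * Illustrative CGR setup (a sketch; 'my_cscn_cb' is a hypothetical callback
+ * and the option fields shown are examples, not a complete configuration):
+ *
+ *	struct qman_cgr cgr = { .cgrid = id, .cb = my_cscn_cb };
+ *	struct qm_mcc_initcgr opts = { .we_mask = QM_CGR_WE_CSCN_TARG };
+ *
+ *	err = qman_create_cgr(&cgr, QMAN_CGR_FLAG_USE_INIT, &opts);
+ */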
+
+/**
+ * qman_get_portal_index - get portal configuration index
+ */
+int qman_get_portal_index(void);
+
+/**
+ * qman_affine_channel - return the channel ID of a portal
+ * @cpu: the cpu whose affine portal is the subject of the query
+ *
+ * If @cpu is -1, the affine portal for the current CPU will be used. It is a
+ * bug to call this function for any value of @cpu (other than -1) that is not a
+ * member of the cpu mask.
+ */
+u16 qman_affine_channel(int cpu);
+
+/**
+ * qman_set_vdq - Issue a volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @num: Number of Frames requested for volatile dequeue
+ *
+ * This function will issue a volatile dequeue command to the QMAN.
+ */
+int qman_set_vdq(struct qman_fq *fq, u16 num);
+
+/**
+ * qman_dequeue - Get the DQRR entry after volatile dequeue command
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ *
+ * This function returns the next DQRR entry resulting from a volatile dequeue
+ * command. It returns NULL when no packet is currently available on the
+ * DQRR.
+ */
+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
+
+/**
+ * qman_dqrr_consume - Consume the DQRR entry after volatile dequeue
+ * @fq: Frame Queue on which the volatile dequeue command is issued
+ * @dq: DQRR entry to consume. This is the one which is provided by the
+ *    'qman_dequeue' command.
+ *
+ * This will consume the DQRR entry and make it available for the next
+ * volatile dequeue.
+ */
+void qman_dqrr_consume(struct qman_fq *fq,
+		       struct qm_dqrr_entry *dq);
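+
+/*
+ * Illustrative volatile dequeue sequence (a sketch only; a real caller may
+ * need to retry qman_dequeue(), since NULL can also mean the frames have not
+ * arrived on the DQRR yet):
+ *
+ *	struct qm_dqrr_entry *dq;
+ *
+ *	if (!qman_set_vdq(fq, num)) {
+ *		while ((dq = qman_dequeue(fq)))
+ *			qman_dqrr_consume(fq, dq);
+ *	}
+ */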
+
+/**
+ * qman_poll_dqrr - process DQRR (fast-path) entries
+ * @limit: the maximum number of DQRR entries to process
+ *
+ * Use of this function requires that DQRR processing not be interrupt-driven.
+ * Ie. the value returned by qman_irqsource_get() should not include
+ * QM_PIRQ_DQRI. If the current CPU is sharing a portal hosted on another CPU,
+ * this function will return -EINVAL, otherwise the return value is >=0 and
+ * represents the number of DQRR entries processed.
+ */
+int qman_poll_dqrr(unsigned int limit);
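+
+/*
+ * Illustrative poll loop (a sketch): process DQRR entries in bursts of up to
+ * 16; a return of 0 means no work was pending, and a negative return means
+ * the affine portal is hosted on another CPU.
+ *
+ *	int n;
+ *
+ *	while ((n = qman_poll_dqrr(16)) > 0)
+ *		;
+ */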
+
+/**
+ * qman_poll
+ *
+ * Dispatcher logic on a cpu can use this to trigger any maintenance of the
+ * affine portal. There are two classes of portal processing in question;
+ * fast-path (which involves demuxing dequeue ring (DQRR) entries and tracking
+ * enqueue ring (EQCR) consumption), and slow-path (which involves EQCR
+ * thresholds, congestion state changes, etc). This function does whatever
+ * processing is not triggered by interrupts.
+ *
+ * Note, if DQRR and some slow-path processing are poll-driven (rather than
+ * interrupt-driven) then this function uses a heuristic to determine how often
+ * to run slow-path processing - as slow-path processing introduces at least a
+ * minimum latency each time it is run, whereas fast-path (DQRR) processing is
+ * close to zero-cost if there is no work to be done.
+ */
+void qman_poll(void);
+
+/**
+ * qman_stop_dequeues - Stop h/w dequeuing to the s/w portal
+ *
+ * Disables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_stop_dequeues(void);
+
+/**
+ * qman_start_dequeues - (Re)start h/w dequeuing to the s/w portal
+ *
+ * Enables DQRR processing of the portal. This is reference-counted, so
+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
+ * truly re-enable dequeuing.
+ */
+void qman_start_dequeues(void);
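+
+/*
+ * The stop/start calls nest; for illustration, after the sequence below
+ * dequeuing remains disabled until one more qman_start_dequeues() call:
+ *
+ *	qman_stop_dequeues();
+ *	qman_stop_dequeues();
+ *	qman_start_dequeues();
+ */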
+
+/**
+ * qman_static_dequeue_add - Add pool channels to the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Adds a set of pool channels to the portal's static dequeue command register
+ * (SDQCR). The requested pools are limited to those the portal has dequeue
+ * access to.
+ */
+void qman_static_dequeue_add(u32 pools);
+
+/**
+ * qman_static_dequeue_del - Remove pool channels from the portal SDQCR
+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
+ *
+ * Removes a set of pool channels from the portal's static dequeue command
+ * register (SDQCR). The requested pools are limited to those the portal has
+ * dequeue access to.
+ */
+void qman_static_dequeue_del(u32 pools);
+
+/**
+ * qman_static_dequeue_get - return the portal's current SDQCR
+ *
+ * Returns the portal's current static dequeue command register (SDQCR). The
+ * entire register is returned, so if only the currently-enabled pool channels
+ * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
+ */
+u32 qman_static_dequeue_get(void);
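+
+/*
+ * For illustration (a sketch): enable dequeuing from pool channel 2, then
+ * read back the set of currently enabled pool channels.
+ *
+ *	u32 pools;
+ *
+ *	qman_static_dequeue_add(QM_SDQCR_CHANNELS_POOL(2));
+ *	pools = qman_static_dequeue_get() & QM_SDQCR_CHANNELS_POOL_MASK;
+ */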
+
+/**
+ * qman_dca - Perform a Discrete Consumption Acknowledgment
+ * @dq: the DQRR entry to be consumed
+ * @park_request: indicates whether the held-active @fq should be parked
+ *
+ * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had
+ * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this
+ * does not take a 'portal' argument but implies the core affine portal from the
+ * cpu that is currently executing the function. For reasons of locking, this
+ * function must be called from the same CPU as that which processed the DQRR
+ * entry in the first place.
+ */
+void qman_dca(struct qm_dqrr_entry *dq, int park_request);
+
+/**
+ * qman_eqcr_is_empty - Determine if portal's EQCR is empty
+ *
+ * For use in situations where a cpu-affine caller needs to determine when all
+ * enqueues for the local portal have been processed by Qman but can't use the
+ * QMAN_ENQUEUE_FLAG_WAIT_SYNC flag to do this from the final qman_enqueue().
+ * The function forces tracking of EQCR consumption (which normally doesn't
+ * happen until enqueue processing needs to find space to put new enqueue
+ * commands), and returns zero if the ring still has unprocessed entries,
+ * non-zero if it is empty.
+ */
+int qman_eqcr_is_empty(void);
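+
+/*
+ * Illustrative drain-wait (a sketch): busy-wait until all enqueues on the
+ * local portal have been consumed by QMan.
+ *
+ *	while (!qman_eqcr_is_empty())
+ *		;
+ */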
+
+/**
+ * qman_set_dc_ern - Set the handler for DCP enqueue rejection notifications
+ * @handler: callback for processing DCP ERNs
+ * @affine: whether this handler is specific to the locally affine portal
+ *
+ * If a hardware block's interface to Qman (ie. its direct-connect portal, or
+ * DCP) is configured not to receive enqueue rejections, then any enqueues
+ * through that DCP that are rejected will be sent to a given software portal.
+ * If @affine is non-zero, then this handler will only be used for DCP ERNs
+ * received on the portal affine to the current CPU. If multiple CPUs share a
+ * portal and they all call this function, they will be setting the handler for
+ * the same portal! If @affine is zero, then this handler will be global to all
+ * portals handled by this instance of the driver. Only those portals that do
+ * not have their own affine handler will use the global handler.
+ */
+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
+
+	/* FQ management */
+	/* ------------- */
+/**
+ * qman_create_fq - Allocates a FQ
+ * @fqid: the index of the FQD to encapsulate, must be "Out of Service"
+ * @flags: bit-mask of QMAN_FQ_FLAG_*** options
+ * @fq: memory for storing the 'fq', with callbacks filled in
+ *
+ * Creates a frame queue object for the given @fqid, unless the
+ * QMAN_FQ_FLAG_DYNAMIC_FQID flag is set in @flags, in which case a FQID is
+ * dynamically allocated (or the function fails if none are available). Once
+ * created, the caller should not touch the memory at 'fq' except as extended to
+ * adjacent memory for user-defined fields (see the definition of "struct
+ * qman_fq" for more info). NO_MODIFY is only intended for enqueuing to
+ * pre-existing frame-queues that aren't to be otherwise interfered with, it
+ * prevents all other modifications to the frame queue. The TO_DCPORTAL flag
+ * causes the driver to honour any contextB modifications requested in the
+ * qm_init_fq() API, as this indicates the frame queue will be consumed by a
+ * direct-connect portal (PME, CAAM, or Fman). When frame queues are consumed by
+ * software portals, the contextB field is controlled by the driver and can't be
+ * modified by the caller. If the AS_IS flag is specified, management commands
+ * will be used to query state for frame queue @fqid and construct a frame
+ * queue object based on that, rather than assuming/requiring that it be Out of
+ * Service.
+ */
+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
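
A sketch of the dynamic-FQID case described above (the wrapper function is
hypothetical, and passing 0 as the then-unused @fqid is an assumption of this
sketch):

        static int create_dynamic_fq(struct qman_fq *fq)
        {
                /* fq's callbacks must already be filled in, per the text
                 * above; the fqid argument is unused with DYNAMIC_FQID */
                return qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, fq);
        }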
+
+/**
+ * qman_destroy_fq - Deallocates a FQ
+ * @fq: the frame queue object to release
+ * @flags: bit-mask of QMAN_FQ_FREE_*** options
+ *
+ * The memory for this frame queue object ('fq' provided in qman_create_fq()) is
+ * not deallocated but the caller regains ownership, to do with as desired. The
+ * FQ must be in the 'out-of-service' state unless the QMAN_FQ_FREE_PARKED flag
+ * is specified, in which case it may also be in the 'parked' state.
+ */
+void qman_destroy_fq(struct qman_fq *fq, u32 flags);
+
+/**
+ * qman_fq_fqid - Queries the frame queue ID of a FQ object
+ * @fq: the frame queue object to query
+ */
+u32 qman_fq_fqid(struct qman_fq *fq);
+
+/**
+ * qman_fq_state - Queries the state of a FQ object
+ * @fq: the frame queue object to query
+ * @state: pointer to state enum to return the FQ scheduling state
+ * @flags: pointer to state flags to receive QMAN_FQ_STATE_*** bitmask
+ *
+ * Queries the state of the FQ object, without performing any h/w commands.
+ * This captures the state, as seen by the driver, at the time the function
+ * executes.
+ */
+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
+
+/**
+ * qman_init_fq - Initialises FQ fields, leaves the FQ "parked" or "scheduled"
+ * @fq: the frame queue object to modify, must be 'parked' or new.
+ * @flags: bit-mask of QMAN_INITFQ_FLAG_*** options
+ * @opts: the FQ-modification settings, as defined in the low-level API
+ *
+ * The @opts parameter comes from the low-level portal API. Select
+ * QMAN_INITFQ_FLAG_SCHED in @flags to cause the frame queue to be scheduled
+ * rather than parked. NB, @opts can be NULL.
+ *
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver:
+ * 1. the 'count' and 'fqid' fields are always ignored (this operation only
+ * affects one frame queue: @fq).
+ * 2. the QM_INITFQ_WE_CONTEXTB option of the 'we_mask' field and the associated
+ * 'fqd' structure's 'context_b' field are sometimes overwritten;
+ *   - if @fq was not created with QMAN_FQ_FLAG_TO_DCPORTAL, then context_b is
+ *     initialised to a value used by the driver for demux.
+ *   - if context_b is initialised for demux, so is context_a in case stashing
+ *     is requested (see item 4).
+ * (So caller control of context_b is only possible for TO_DCPORTAL frame queue
+ * objects.)
+ * 3. if @flags contains QMAN_INITFQ_FLAG_LOCAL, the 'fqd' structure's
+ * 'dest::channel' field will be overwritten to match the portal used to issue
+ * the command. If the WE_DESTWQ write-enable bit had already been set by the
+ * caller, the channel workqueue will be left as-is, otherwise the write-enable
+ * bit is set and the workqueue is set to a default of 4. If the "LOCAL" flag
+ * isn't set, the destination channel/workqueue fields and the write-enable bit
+ * are left as-is.
+ * 4. if the driver overwrites context_a/b for demux, then if
+ * QM_INITFQ_WE_CONTEXTA is set, the driver will only overwrite
+ * context_a.address fields and will leave the stashing fields provided by the
+ * user alone, otherwise it will zero out the context_a.stashing fields.
+ */
+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
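
Taken together with note 3, the common case of scheduling a FQ onto the
issuing portal's own channel needs no options at all; a minimal sketch:

        static int schedule_fq_locally(struct qman_fq *fq)
        {
                /* per note 3, the driver fills in the destination channel
                 * and a default workqueue of 4; @opts may be NULL */
                return qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED |
                                        QMAN_INITFQ_FLAG_LOCAL, NULL);
        }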
+
+/**
+ * qman_schedule_fq - Schedules a FQ
+ * @fq: the frame queue object to schedule, must be 'parked'
+ *
+ * Schedules the frame queue, which must be in the Parked state; depending on
+ * its fill-level it moves to the Tentatively-Scheduled or Truly-Scheduled
+ * state.
+ */
+int qman_schedule_fq(struct qman_fq *fq);
+
+/**
+ * qman_retire_fq - Retires a FQ
+ * @fq: the frame queue object to retire
+ * @flags: FQ flags (as per qman_fq_state) if retirement completes immediately
+ *
+ * Retires the frame queue. This returns zero if it succeeds immediately, +1 if
+ * the retirement was started asynchronously, otherwise it returns negative for
+ * failure. When this function returns zero, @flags is set to indicate whether
+ * the retired FQ is empty and/or whether it has any ORL fragments (to show up
+ * as ERNs). Otherwise the corresponding flags will be known when a subsequent
+ * FQRN message shows up on the portal's message ring.
+ *
+ * NB, if the retirement is asynchronous (the FQ was in the Truly Scheduled or
+ * Active state), the completion will be via the message ring as a FQRN - but
+ * the corresponding callback may occur before this function returns!! Ie. the
+ * caller should be prepared to accept the callback as the function is called,
+ * not only once it has returned.
+ */
+int qman_retire_fq(struct qman_fq *fq, u32 *flags);
+
+/**
+ * qman_oos_fq - Puts a FQ "out of service"
+ * @fq: the frame queue object to be put out-of-service, must be 'retired'
+ *
+ * The frame queue must be retired and empty, and if any order restoration
+ * list entries were released as ERNs at the time of retirement, they must all
+ * be consumed.
+ */
+int qman_oos_fq(struct qman_fq *fq);
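
A sketch of the full teardown sequence for the simple synchronous case
(hypothetical helper; when qman_retire_fq() returns +1 the caller must
instead wait for the FQRN before going out of service):

        static int teardown_fq(struct qman_fq *fq)
        {
                u32 flags;
                int ret = qman_retire_fq(fq, &flags);

                if (ret != 0)
                        return ret;     /* +1: completing via FQRN; <0: error */
                ret = qman_oos_fq(fq);  /* needs retired + empty, ORL drained */
                if (ret)
                        return ret;
                qman_destroy_fq(fq, 0); /* caller regains ownership of *fq */
                return 0;
        }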
+
+/**
+ * qman_fq_flow_control - Set the XON/XOFF state of a FQ
+ * @fq: the frame queue object to set to the XON/XOFF state; must not be in
+ * the 'oos', 'retired' or 'parked' state
+ * @xon: boolean to set fq in XON or XOFF state
+ *
+ * The frame queue should be in the Tentatively Scheduled or Truly Scheduled
+ * state, otherwise the IFSI interrupt will be asserted.
+ */
+int qman_fq_flow_control(struct qman_fq *fq, int xon);
+
+/**
+ * qman_query_fq - Queries FQD fields (via h/w query command)
+ * @fq: the frame queue object to be queried
+ * @fqd: storage for the queried FQD fields
+ */
+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd);
+
+/**
+ * qman_query_fq_has_pkts - Queries non-programmable FQD fields and returns 1
+ * if packets are present in the frame queue, or 0 if it is empty.
+ * @fq: the frame queue object to be queried
+ */
+int qman_query_fq_has_pkts(struct qman_fq *fq);
+
+/**
+ * qman_query_fq_np - Queries non-programmable FQD fields
+ * @fq: the frame queue object to be queried
+ * @np: storage for the queried FQD fields
+ */
+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
+
+/**
+ * qman_query_wq - Queries work queue lengths
+ * @query_dedicated: If non-zero, query the lengths of the WQs in the channel
+ *		dedicated to this software portal. Otherwise, query the lengths
+ *		of the WQs in the channel specified in @wq.
+ * @wq: storage for the queried WQ lengths. Also specifies the channel to
+ *	query if @query_dedicated is zero.
+ */
+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
+
+/**
+ * qman_volatile_dequeue - Issue a volatile dequeue command
+ * @fq: the frame queue object to dequeue from
+ * @flags: a bit-mask of QMAN_VOLATILE_FLAG_*** options
+ * @vdqcr: bit mask of QM_VDQCR_*** options, as per qm_dqrr_vdqcr_set()
+ *
+ * Attempts to lock access to the portal's VDQCR volatile dequeue functionality.
+ * The function will block and sleep if QMAN_VOLATILE_FLAG_WAIT is specified and
+ * the VDQCR is already in use, otherwise returns non-zero for failure. If
+ * QMAN_VOLATILE_FLAG_FINISH is specified, the function will only return once
+ * the VDQCR command has finished executing (ie. once the callback for the last
+ * DQRR entry resulting from the VDQCR command has been called). If not using
+ * the FINISH flag, completion can be determined either by detecting the
+ * presence of the QM_DQRR_STAT_UNSCHEDULED and QM_DQRR_STAT_DQCR_EXPIRED bits
+ * in the "stat" field of the "struct qm_dqrr_entry" passed to the FQ's dequeue
+ * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
+ * "flags" retrieved from qman_fq_state().
+ */
+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
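
For instance, a fully synchronous volatile dequeue can combine both flags
(hypothetical wrapper; the @vdqcr encoding comes from the low-level
QM_VDQCR_*** options and is not expanded here):

        static int vdq_sync(struct qman_fq *fq, u32 vdqcr)
        {
                /* blocks if VDQCR is busy, and returns only after the last
                 * resulting DQRR entry's callback has run */
                return qman_volatile_dequeue(fq, QMAN_VOLATILE_FLAG_WAIT |
                                                 QMAN_VOLATILE_FLAG_FINISH,
                                             vdqcr);
        }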
+
+/**
+ * qman_enqueue - Enqueue a frame to a frame queue
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ *
+ * Fills an entry in the EQCR of the core-affine portal to enqueue the frame
+ * described by @fd. The descriptor details are copied from @fd to the EQCR
+ * entry; the 'pid' field is ignored. The return value is non-zero on error,
+ * such as ring full
+ * (and FLAG_WAIT not specified), congestion avoidance (FLAG_WATCH_CGR
+ * specified), etc. If the ring is full and FLAG_WAIT is specified, this
+ * function will block. If FLAG_INTERRUPT is set, the EQCI bit of the portal
+ * interrupt will assert when Qman consumes the EQCR entry (subject to "status
+ * disable", "enable", and "inhibit" registers). If FLAG_DCA is set, Qman will
+ * perform an implied "discrete consumption acknowledgment" on the dequeue
+ * ring's (DQRR) entry, at the ring index specified by the FLAG_DCA_IDX(x)
+ * macro. (As an alternative to issuing explicit DCA actions on DQRR entries,
+ * this implicit DCA can delay the release of a "held active" frame queue
+ * corresponding to a DQRR entry until Qman consumes the EQCR entry - providing
+ * order-preservation semantics in packet-forwarding scenarios.) If FLAG_DCA is
+ * set, then FLAG_DCA_PARK can also be set to imply that the DQRR consumption
+ * acknowledgment should "park request" the "held active" frame queue. Ie.
+ * when the portal eventually releases that frame queue, it will be left in the
+ * Parked state rather than Tentatively Scheduled or Truly Scheduled. If the
+ * portal is watching congestion groups, the QMAN_ENQUEUE_FLAG_WATCH_CGR flag
+ * is requested, and the FQ is a member of a congestion group, then this
+ * function returns -EAGAIN if the congestion group is currently congested.
+ * Note, this does not eliminate ERNs, as the async interface means we can be
+ * sending enqueue commands to an un-congested FQ that becomes congested before
+ * the enqueue commands are processed, but it does minimise needless thrashing
+ * of an already busy hardware resource by throttling many of the to-be-dropped
+ * enqueues "at the source".
+ */
+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
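
A sketch of a congestion-aware, non-blocking enqueue based only on the
behaviour documented above (hypothetical helper; assumes <errno.h> for
EAGAIN):

        static int try_enqueue(struct qman_fq *fq, const struct qm_fd *fd)
        {
                int ret = qman_enqueue(fq, fd, QMAN_ENQUEUE_FLAG_WATCH_CGR);

                if (ret == -EAGAIN) {
                        /* the FQ's congestion group is congested and the
                         * frame was not enqueued; drop or retry later */
                }
                return ret;
        }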
+
+/**
+ * qman_enqueue_multi - Enqueue multiple frames to a frame queue
+ * @fq: the frame queue object to enqueue to
+ * @fd: pointer to an array of @frames_to_send frame descriptors
+ * @frames_to_send: the number of frames to enqueue
+ */
+int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd,
+		       int frames_to_send);
+
+typedef int (*qman_cb_precommit) (void *arg);
+
+/**
+ * qman_enqueue_orp - Enqueue a frame to a frame queue using an ORP
+ * @fq: the frame queue object to enqueue to
+ * @fd: a descriptor of the frame to be enqueued
+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
+ * @orp: the frame queue object used as an order restoration point.
+ * @orp_seqnum: the sequence number of this frame in the order restoration path
+ *
+ * Similar to qman_enqueue(), but with the addition of an Order Restoration
+ * Point (@orp) and corresponding sequence number (@orp_seqnum) for this
+ * enqueue operation to employ order restoration. Each frame queue object acts
+ * as an Order Definition Point (ODP) by providing each frame dequeued from it
+ * with an incrementing sequence number; this value is generally ignored unless
+ * that sequence of dequeued frames will need order restoration later. Each
+ * frame queue object also encapsulates an Order Restoration Point (ORP), which
+ * is a re-assembly context for re-ordering frames relative to their sequence
+ * numbers as they are enqueued. The ORP does not have to be within the frame
+ * queue that receives the enqueued frame, in fact it is usually the frame
+ * queue from which the frames were originally dequeued. For the purposes of
+ * order restoration, multiple frames (or "fragments") can be enqueued for a
+ * single sequence number by setting the QMAN_ENQUEUE_FLAG_NLIS flag for all
+ * enqueues except the final fragment of a given sequence number. Ordering
+ * between sequence numbers is guaranteed, even if fragments of different
+ * sequence numbers are interlaced with one another. Fragments of the same
+ * sequence number will retain the order in which they are enqueued. If no
+ * enqueue is to be performed, QMAN_ENQUEUE_FLAG_HOLE indicates that the given
+ * sequence number is to be "skipped" by the ORP logic (eg. if a frame has been
+ * dropped from a sequence), or QMAN_ENQUEUE_FLAG_NESN indicates that the given
+ * sequence number should become the ORP's "Next Expected Sequence Number".
+ *
+ * Side note: a frame queue object can be used purely as an ORP, without
+ * carrying any frames at all. Care should be taken not to deallocate a frame
+ * queue object that is being actively used as an ORP, as a future allocation
+ * of the frame queue object may start using the internal ORP before the
+ * previous use has finished.
+ */
+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
+		     struct qman_fq *orp, u16 orp_seqnum);
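
A sketch of the fragment rules above: two fragments carrying one sequence
number, with NLIS set on every fragment except the last (helper and frame
names are hypothetical):

        static int enqueue_two_fragments(struct qman_fq *fq,
                                         struct qman_fq *orp, u16 seq,
                                         const struct qm_fd *f0,
                                         const struct qm_fd *f1)
        {
                int ret = qman_enqueue_orp(fq, f0, QMAN_ENQUEUE_FLAG_NLIS,
                                           orp, seq);

                if (ret)
                        return ret;
                return qman_enqueue_orp(fq, f1, 0, orp, seq); /* final */
        }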
+
+/**
+ * qman_alloc_fqid_range - Allocate a contiguous range of FQIDs
+ * @result: is set by the API to the base FQID of the allocated range
+ * @count: the number of FQIDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count FQIDs
+ *
+ * Returns the number of frame queues allocated, or a negative error code. If
+ * @partial is non-zero, the allocation request may return a smaller range of
+ * FQs than requested (though alignment will be as requested). If @partial is
+ * zero, the return value will either be 'count' or negative.
+ */
+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_fqid(u32 *result)
+{
+	int ret = qman_alloc_fqid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_fqid_range - Release the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to deallocate
+ * @count: the number of FQIDs in the range
+ *
+ * This function can also be used to seed the allocator with ranges of FQIDs
+ * that it can subsequently allocate from.
+ */
+void qman_release_fqid_range(u32 fqid, unsigned int count);
+static inline void qman_release_fqid(u32 fqid)
+{
+	qman_release_fqid_range(fqid, 1);
+}
+
+void qman_seed_fqid_range(u32 fqid, unsigned int count);
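
A sketch tying the allocator calls together: seed with a platform-reserved
range, then carve out an aligned block (all numbers are purely illustrative):

        static int fqid_alloc_example(u32 *base)
        {
                int ret;

                qman_seed_fqid_range(0x400, 256); /* 0x400..0x4ff usable */
                ret = qman_alloc_fqid_range(base, 8, 8, 0);
                if (ret < 0)
                        return ret;     /* no aligned block of 8 available */
                /* ... use *base .. *base + 7 ... */
                qman_release_fqid_range(*base, 8);
                return 0;
        }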
+
+int qman_shutdown_fq(u32 fqid);
+
+/**
+ * qman_reserve_fqid_range - Reserve the specified range of frame queue IDs
+ * @fqid: the base FQID of the range to reserve
+ * @count: the number of FQIDs in the range
+ */
+int qman_reserve_fqid_range(u32 fqid, unsigned int count);
+static inline int qman_reserve_fqid(u32 fqid)
+{
+	return qman_reserve_fqid_range(fqid, 1);
+}
+
+/* Pool-channel management */
+/**
+ * qman_alloc_pool_range - Allocate a contiguous range of pool-channel IDs
+ * @result: is set by the API to the base pool-channel ID of the allocated range
+ * @count: the number of pool-channel IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count pool-channel IDs
+ *
+ * Returns the number of pool-channel IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_pool(u32 *result)
+{
+	int ret = qman_alloc_pool_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_pool_range - Release the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to deallocate
+ * @count: the number of pool-channel IDs in the range
+ */
+void qman_release_pool_range(u32 id, unsigned int count);
+static inline void qman_release_pool(u32 id)
+{
+	qman_release_pool_range(id, 1);
+}
+
+/**
+ * qman_reserve_pool_range - Reserve the specified range of pool-channel IDs
+ * @id: the base pool-channel ID of the range to reserve
+ * @count: the number of pool-channel IDs in the range
+ */
+int qman_reserve_pool_range(u32 id, unsigned int count);
+static inline int qman_reserve_pool(u32 id)
+{
+	return qman_reserve_pool_range(id, 1);
+}
+
+void qman_seed_pool_range(u32 id, unsigned int count);
+
+	/* CGR management */
+	/* -------------- */
+/**
+ * qman_create_cgr - Register a congestion group object
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: optional state of CGR settings
+ *
+ * Registers this object to receive congestion entry/exit callbacks on the
+ * portal affine to the cpu on which this API is executed. If @opts is
+ * NULL then only the callback (cgr->cb) function is registered. If @flags
+ * contains QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset
+ * any unspecified parameters) will be used rather than a modify hw command
+ * (which only modifies the specified parameters).
+ */
+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
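
A sketch of the USE_INIT variant described above, relying only on the
'cgrid' and 'cb' members that this header's comments mention (the caller is
assumed to have filled both in, along with the low-level thresholds in
@opts):

        static int register_cgr(struct qman_cgr *cgr,
                                struct qm_mcc_initcgr *opts)
        {
                /* init hw command: parameters absent from @opts are reset */
                return qman_create_cgr(cgr, QMAN_CGR_FLAG_USE_INIT, opts);
        }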
+
+/**
+ * qman_create_cgr_to_dcp - Register a congestion group object to DCP portal
+ * @cgr: the 'cgr' object, with fields filled in
+ * @flags: QMAN_CGR_FLAG_* values
+ * @dcp_portal: the DCP portal to which the cgr object is registered.
+ * @opts: optional state of CGR settings
+ *
+ * Similar to qman_create_cgr(), but registers the CGR object against the DCP
+ * portal @dcp_portal rather than the portal affine to the executing cpu.
+ */
+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
+			   struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_delete_cgr - Deregisters a congestion group object
+ * @cgr: the 'cgr' object to deregister
+ *
+ * "Unplugs" this CGR object from the portal affine to the cpu on which this API
+ * is executed. This must be excuted on the same affine portal on which it was
+ * created.
+ */
+int qman_delete_cgr(struct qman_cgr *cgr);
+
+/**
+ * qman_modify_cgr - Modify CGR fields
+ * @cgr: the 'cgr' object to modify
+ * @flags: QMAN_CGR_FLAG_* values
+ * @opts: the CGR-modification settings
+ *
+ * The @opts parameter comes from the low-level portal API, and can be NULL.
+ * Note that some fields and options within @opts may be ignored or overwritten
+ * by the driver, in particular the 'cgrid' field is ignored (this operation
+ * only affects the given CGR object). If @flags contains
+ * QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset any
+ * unspecified parameters) will be used rather than a modify hw command (which
+ * only modifies the specified parameters).
+ */
+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
+		    struct qm_mcc_initcgr *opts);
+
+/**
+ * qman_query_cgr - Queries CGR fields
+ * @cgr: the 'cgr' object to query
+ * @result: storage for the queried congestion group record
+ */
+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *result);
+
+/**
+ * qman_query_congestion - Queries the state of all congestion groups
+ * @congestion: storage for the queried state of all congestion groups
+ */
+int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
+
+/**
+ * qman_alloc_cgrid_range - Allocate a contiguous range of CGR IDs
+ * @result: is set by the API to the base CGR ID of the allocated range
+ * @count: the number of CGR IDs required
+ * @align: required alignment of the allocated range
+ * @partial: non-zero if the API can return fewer than @count CGR IDs
+ *
+ * Returns the number of CGR IDs allocated, or a negative error code.
+ * If @partial is non-zero, the allocation request may return a smaller range
+ * than requested (though alignment will be as requested). If @partial is zero,
+ * the return value will either be 'count' or negative.
+ */
+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
+static inline int qman_alloc_cgrid(u32 *result)
+{
+	int ret = qman_alloc_cgrid_range(result, 1, 0, 0);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+/**
+ * qman_release_cgrid_range - Release the specified range of CGR IDs
+ * @id: the base CGR ID of the range to deallocate
+ * @count: the number of CGR IDs in the range
+ */
+void qman_release_cgrid_range(u32 id, unsigned int count);
+static inline void qman_release_cgrid(u32 id)
+{
+	qman_release_cgrid_range(id, 1);
+}
+
+/**
+ * qman_reserve_cgrid_range - Reserve the specified range of CGR IDs
+ * @id: the base CGR ID of the range to reserve
+ * @count: the number of CGR IDs in the range
+ */
+int qman_reserve_cgrid_range(u32 id, unsigned int count);
+static inline int qman_reserve_cgrid(u32 id)
+{
+	return qman_reserve_cgrid_range(id, 1);
+}
+
+void qman_seed_cgrid_range(u32 id, unsigned int count);
+
+	/* Helpers */
+	/* ------- */
+/**
+ * qman_poll_fq_for_init - Check if an FQ has been initialised from OOS
+ * @fq: the frame queue object whose FQID will be initialised by other s/w
+ *
+ * In many situations, a FQID is provided for communication between s/w
+ * entities, and whilst the consumer is responsible for initialising and
+ * scheduling the FQ, the producer(s) generally create a wrapper FQ object
+ * and only call qman_enqueue() (no FQ initialisation, scheduling, etc). Ie:
+ *     qman_create_fq(..., QMAN_FQ_FLAG_NO_MODIFY, ...);
+ * However, data cannot be enqueued to the FQ until it is initialised out of
+ * the OOS state - this function polls for that condition. It is particularly
+ * useful for users of IPC functions - each endpoint's Rx FQ is the other
+ * endpoint's Tx FQ, so each side can initialise and schedule their Rx FQ object
+ * and then use this API on the (NO_MODIFY) Tx FQ object in order to
+ * synchronise. The function returns zero for success, +1 if the FQ is still in
+ * the OOS state, or negative if there was an error.
+ */
+static inline int qman_poll_fq_for_init(struct qman_fq *fq)
+{
+	struct qm_mcr_queryfq_np np;
+	int err;
+
+	err = qman_query_fq_np(fq, &np);
+	if (err)
+		return err;
+	if ((np.state & QM_MCR_NP_STATE_MASK) == QM_MCR_NP_STATE_OOS)
+		return 1;
+	return 0;
+}
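
A producer-side usage sketch (hypothetical helper; the retry bound and
return convention are illustrative, and a real caller would sleep between
polls):

        static int wait_fq_initialised(struct qman_fq *tx_fq)
        {
                int i, err;

                for (i = 0; i < 1000; i++) {
                        err = qman_poll_fq_for_init(tx_fq);
                        if (err <= 0)
                                return err; /* 0: ready, <0: query error */
                        /* err == 1: still OOS, keep polling */
                }
                return -1;      /* gave up */
        }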
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define cpu_to_hw_sg(x) (x)
+#define hw_sg_to_cpu(x) (x)
+#else
+#define cpu_to_hw_sg(x)  __cpu_to_hw_sg(x)
+#define hw_sg_to_cpu(x)  __hw_sg_to_cpu(x)
+
+static inline void __cpu_to_hw_sg(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = cpu_to_be64(sgentry->opaque);
+	sgentry->val = cpu_to_be32(sgentry->val);
+	sgentry->val_off = cpu_to_be16(sgentry->val_off);
+}
+
+static inline void __hw_sg_to_cpu(struct qm_sg_entry *sgentry)
+{
+	sgentry->opaque = be64_to_cpu(sgentry->opaque);
+	sgentry->val = be32_to_cpu(sgentry->val);
+	sgentry->val_off = be16_to_cpu(sgentry->val_off);
+}
+#endif
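
A usage sketch for the macros above: scatter/gather entries are big-endian
in hardware, so on little-endian cpus each entry is converted before Qman
sees it (and back after dequeue), while on big-endian cpus the macros
compile away. The helper and table variable are hypothetical:

        static void prepare_sg_table_for_hw(struct qm_sg_entry *sgt, int n)
        {
                int i;

                for (i = 0; i < n; i++)
                        cpu_to_hw_sg(&sgt[i]); /* safe to hand to Qman now */
        }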
 
 #ifdef __cplusplus
 }
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index b0d953f..a4897b0 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -42,6 +42,7 @@
 #define __FSL_USD_H
 
 #include <compat.h>
+#include <fsl_qman.h>
 
 #ifdef __cplusplus
 extern "C" {
-- 
2.7.4

