* [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver
@ 2016-11-11  8:19 Madalin Bucur
  2016-11-11  8:19 ` [PATCH net-next v7 01/10] devres: add devm_alloc_percpu() Madalin Bucur
                   ` (9 more replies)
  0 siblings, 10 replies; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:19 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

This patch series adds the Ethernet driver for the Freescale
QorIQ Data Path Acceleration Architecture (DPAA).

This version includes changes based on the feedback received on
previous versions from Eric Dumazet, Bob Cochran, Joe Perches,
Paul Bolle, Joakim Tjernlund, Scott Wood and David Miller - thank you.

Together with the driver, a managed version of alloc_percpu()
is provided that simplifies the release of per-CPU memory.

The Freescale DPAA architecture consists of a series of hardware
blocks that support Ethernet connectivity. The Ethernet driver
depends upon the following drivers that are already present in the
Linux kernel:
 - Peripheral Access Memory Unit (PAMU)
    drivers/iommu/fsl_*
 - Frame Manager (FMan) added in v4.4
    drivers/net/ethernet/freescale/fman
 - Queue Manager (QMan), Buffer Manager (BMan) added in v4.9-rc1
    drivers/soc/fsl/qbman

dpaa_eth interfaces mapping to FMan MACs:

  dpaa_eth       /eth0\     ...       /ethN\
  driver        |      |             |      |
  -------------   ----   -----------   ----   -------------
       -Ports  / Tx  Rx \    ...    / Tx  Rx \
  FMan        |          |         |          |
       -MACs  |   MAC0   |         |   MACN   |
             /   dtsec0   \  ...  /   dtsecN   \ (or tgec)
            /              \     /              \(or memac)
  ---------  --------------  ---  --------------  ---------
      FMan, FMan Port, FMan SP, FMan MURAM drivers
  ---------------------------------------------------------
      FMan HW blocks: MURAM, MACs, Ports, SP
  ---------------------------------------------------------

dpaa_eth relation to QMan, FMan:
              ________________________________
  dpaa_eth   /            eth0                \
  driver    /                                  \
  ---------   -^-   -^-   -^-   ---    ---------
  QMan driver / \   / \   / \  \   /  | BMan    |
             |Rx | |Rx | |Tx | |Tx |  | driver  |
  ---------  |Dfl| |Err| |Cnf| |FQs|  |         |
  QMan HW    |FQ | |FQ | |FQ | |   |  |         |
             /   \ /   \ /   \  \ /   |         |
  ---------   ---   ---   ---   -v-    ---------
            |        FMan QMI         |         |
            | FMan HW       FMan BMI  | BMan HW |
              -----------------------   --------

where the acronyms used above (and in the code) are:
DPAA = Data Path Acceleration Architecture
FMan = DPAA Frame Manager
QMan = DPAA Queue Manager
BMan = DPAA Buffer Manager
QMI = QMan interface in FMan
BMI = BMan interface in FMan
FMan SP = FMan Storage Profiles
MURAM = Multi-user RAM in FMan
FQ = QMan Frame Queue
Rx Dfl FQ = default reception FQ
Rx Err FQ = Rx error frames FQ
Tx Cnf FQ = Tx confirmation FQ
Tx FQs = transmission frame queues
dtsec = datapath three speed Ethernet controller (10/100/1000 Mbps)
tgec = ten gigabit Ethernet controller (10 Gbps)
memac = multirate Ethernet MAC (10/100/1000/10000 Mbps)

Changes from v6:
 - fixed an issue on an error path in dpaa_set_mac_address()
 - removed NDO operation definitions that were not needed
 - sorted the local variable declarations
 - cleaned up a few checkpatch checks
 - removed friendly network interface naming code

Changes from v5:
 - adapt to the latest Q/BMan drivers API
 - use build_skb() on Rx path instead of buffer pool refill path
 - proper support for multiple buffer pools
 - align function, variable names, code cleanup
 - driver file structure cleanup

Changes from v4:
 - addressed feedback from Scott Wood and Joe Perches
 - fixed spelling
 - fixed leak of uninitialized stack to userspace
 - fixed prints
 - replace raw_cpu_ptr() with this_cpu_ptr()
 - remove _s from the end of structure names
 - remove underscores at start of functions, goto labels
 - remove likely in error paths
 - use container_of() instead of open casts
 - remove priv from the driver name
 - move return type on same line with function name
 - drop DPA_READ_SKB_PTR/DPA_WRITE_SKB_PTR

Changes from v3:
 - removed bogus delay and comment in .ndo_stop implementation
 - addressed minor issues reported by David Miller

Changes from v2:
 - removed debugfs, moved exports to ethtool statistics
 - removed congestion groups Kconfig params

Changes from v1:
 - bpool level Kconfig options removed
 - print format using pr_fmt, cleaned up prints
 - __hot/__cold removed
 - gratuitous unlikely() removed
 - code style aligned, consistent spacing for declarations
 - comment formatting

The changes are also available in the public git repository at
git://git.freescale.com/ppc/upstream/linux.git on the branch dpaa_eth-next.

Madalin Bucur (10):
  devres: add devm_alloc_percpu()
  dpaa_eth: add support for DPAA Ethernet
  dpaa_eth: add option to use one buffer pool set
  dpaa_eth: add ethtool functionality
  dpaa_eth: add ethtool statistics
  dpaa_eth: add sysfs exports
  dpaa_eth: add trace points
  arch/powerpc: Enable FSL_PAMU
  arch/powerpc: Enable FSL_FMAN
  arch/powerpc: Enable dpaa_eth

 Documentation/driver-model/devres.txt              |    4 +
 arch/powerpc/configs/dpaa.config                   |    3 +
 drivers/base/devres.c                              |   66 +
 drivers/net/ethernet/freescale/Kconfig             |    2 +
 drivers/net/ethernet/freescale/Makefile            |    1 +
 drivers/net/ethernet/freescale/dpaa/Kconfig        |   19 +
 drivers/net/ethernet/freescale/dpaa/Makefile       |   12 +
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c     | 2776 ++++++++++++++++++++
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h     |  185 ++
 .../net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c   |  165 ++
 .../net/ethernet/freescale/dpaa/dpaa_eth_trace.h   |  141 +
 drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c |  417 +++
 include/linux/device.h                             |   19 +
 13 files changed, 3810 insertions(+)
 create mode 100644 drivers/net/ethernet/freescale/dpaa/Kconfig
 create mode 100644 drivers/net/ethernet/freescale/dpaa/Makefile
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c

-- 
2.1.0


* [PATCH net-next v7 01/10] devres: add devm_alloc_percpu()
  2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
@ 2016-11-11  8:19 ` Madalin Bucur
  2016-11-11  8:19 ` [PATCH net-next v7 02/10] dpaa_eth: add support for DPAA Ethernet Madalin Bucur
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:19 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Introduce managed counterparts for alloc_percpu() and free_percpu().
Add devm_alloc_percpu() and devm_free_percpu() to the list of managed
interfaces in the devres documentation.
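
As a rough usage sketch (not part of the patch itself), a hypothetical
driver could allocate its per-CPU data in probe() and let devres free
it on driver detach; the foo_* names below are made up for illustration:

#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/types.h>

struct foo_percpu_stats {
	u64 rx_packets;
	u64 tx_packets;
};

static int foo_probe(struct platform_device *pdev)
{
	struct foo_percpu_stats __percpu *stats;

	/* Freed automatically on driver detach, like any other devm_* resource */
	stats = devm_alloc_percpu(&pdev->dev, struct foo_percpu_stats);
	if (!stats)
		return -ENOMEM;

	platform_set_drvdata(pdev, stats);
	return 0;
}

devm_free_percpu() remains available for the rare case where the
per-CPU memory must be released before driver detach.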

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 Documentation/driver-model/devres.txt |  4 +++
 drivers/base/devres.c                 | 66 +++++++++++++++++++++++++++++++++++
 include/linux/device.h                | 19 ++++++++++
 3 files changed, 89 insertions(+)

diff --git a/Documentation/driver-model/devres.txt b/Documentation/driver-model/devres.txt
index 1670708..ca9d1eb 100644
--- a/Documentation/driver-model/devres.txt
+++ b/Documentation/driver-model/devres.txt
@@ -332,6 +332,10 @@ MEM
 MFD
  devm_mfd_add_devices()
 
+PER-CPU MEM
+  devm_alloc_percpu()
+  devm_free_percpu()
+
 PCI
   pcim_enable_device()	: after success, all PCI ops become managed
   pcim_pin_device()	: keep PCI device enabled after release
diff --git a/drivers/base/devres.c b/drivers/base/devres.c
index 8fc654f..71d5770 100644
--- a/drivers/base/devres.c
+++ b/drivers/base/devres.c
@@ -10,6 +10,7 @@
 #include <linux/device.h>
 #include <linux/module.h>
 #include <linux/slab.h>
+#include <linux/percpu.h>
 
 #include "base.h"
 
@@ -985,3 +986,68 @@ void devm_free_pages(struct device *dev, unsigned long addr)
 			       &devres));
 }
 EXPORT_SYMBOL_GPL(devm_free_pages);
+
+static void devm_percpu_release(struct device *dev, void *pdata)
+{
+	void __percpu *p;
+
+	p = *(void __percpu **)pdata;
+	free_percpu(p);
+}
+
+static int devm_percpu_match(struct device *dev, void *data, void *p)
+{
+	struct devres *devr = container_of(data, struct devres, data);
+
+	return *(void **)devr->data == p;
+}
+
+/**
+ * __devm_alloc_percpu - Resource-managed alloc_percpu
+ * @dev: Device to allocate per-cpu memory for
+ * @size: Size of per-cpu memory to allocate
+ * @align: Alignment of per-cpu memory to allocate
+ *
+ * Managed alloc_percpu. Per-cpu memory allocated with this function is
+ * automatically freed on driver detach.
+ *
+ * RETURNS:
+ * Pointer to allocated memory on success, NULL on failure.
+ */
+void __percpu *__devm_alloc_percpu(struct device *dev, size_t size,
+		size_t align)
+{
+	void *p;
+	void __percpu *pcpu;
+
+	pcpu = __alloc_percpu(size, align);
+	if (!pcpu)
+		return NULL;
+
+	p = devres_alloc(devm_percpu_release, sizeof(void *), GFP_KERNEL);
+	if (!p) {
+		free_percpu(pcpu);
+		return NULL;
+	}
+
+	*(void __percpu **)p = pcpu;
+
+	devres_add(dev, p);
+
+	return pcpu;
+}
+EXPORT_SYMBOL_GPL(__devm_alloc_percpu);
+
+/**
+ * devm_free_percpu - Resource-managed free_percpu
+ * @dev: Device this memory belongs to
+ * @pdata: Per-cpu memory to free
+ *
+ * Free memory allocated with devm_alloc_percpu().
+ */
+void devm_free_percpu(struct device *dev, void __percpu *pdata)
+{
+	WARN_ON(devres_destroy(dev, devm_percpu_release, devm_percpu_match,
+			       (void *)pdata));
+}
+EXPORT_SYMBOL_GPL(devm_free_percpu);
diff --git a/include/linux/device.h b/include/linux/device.h
index bc41e87..a00105c 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -698,6 +698,25 @@ static inline int devm_add_action_or_reset(struct device *dev,
 	return ret;
 }
 
+/**
+ * devm_alloc_percpu - Resource-managed alloc_percpu
+ * @dev: Device to allocate per-cpu memory for
+ * @type: Type to allocate per-cpu memory for
+ *
+ * Managed alloc_percpu. Per-cpu memory allocated with this function is
+ * automatically freed on driver detach.
+ *
+ * RETURNS:
+ * Pointer to allocated memory on success, NULL on failure.
+ */
+#define devm_alloc_percpu(dev, type)      \
+	((typeof(type) __percpu *)__devm_alloc_percpu((dev), sizeof(type), \
+						      __alignof__(type)))
+
+void __percpu *__devm_alloc_percpu(struct device *dev, size_t size,
+				   size_t align);
+void devm_free_percpu(struct device *dev, void __percpu *pdata);
+
 struct device_dma_parameters {
 	/*
 	 * a low level driver may set these to teach IOMMU code about
-- 
2.1.0


* [PATCH net-next v7 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
  2016-11-11  8:19 ` [PATCH net-next v7 01/10] devres: add devm_alloc_percpu() Madalin Bucur
@ 2016-11-11  8:19 ` Madalin Bucur
  2016-11-11  8:20 ` [PATCH net-next v7 03/10] dpaa_eth: add option to use one buffer pool set Madalin Bucur
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:19 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

This introduces the Freescale Data Path Acceleration Architecture
(DPAA) Ethernet driver (dpaa_eth) that builds upon the DPAA QMan,
BMan, PAMU and FMan drivers to deliver Ethernet connectivity on
the Freescale DPAA QorIQ platforms.

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/Kconfig         |    2 +
 drivers/net/ethernet/freescale/Makefile        |    1 +
 drivers/net/ethernet/freescale/dpaa/Kconfig    |   10 +
 drivers/net/ethernet/freescale/dpaa/Makefile   |   11 +
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 2682 ++++++++++++++++++++++++
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h |  144 ++
 6 files changed, 2850 insertions(+)
 create mode 100644 drivers/net/ethernet/freescale/dpaa/Kconfig
 create mode 100644 drivers/net/ethernet/freescale/dpaa/Makefile
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h

diff --git a/drivers/net/ethernet/freescale/Kconfig b/drivers/net/ethernet/freescale/Kconfig
index d1ca45f..aa3f615 100644
--- a/drivers/net/ethernet/freescale/Kconfig
+++ b/drivers/net/ethernet/freescale/Kconfig
@@ -93,4 +93,6 @@ config GIANFAR
 	  and MPC86xx family of chips, the eTSEC on LS1021A and the FEC
 	  on the 8540.
 
+source "drivers/net/ethernet/freescale/dpaa/Kconfig"
+
 endif # NET_VENDOR_FREESCALE
diff --git a/drivers/net/ethernet/freescale/Makefile b/drivers/net/ethernet/freescale/Makefile
index cbe21dc..4a13115 100644
--- a/drivers/net/ethernet/freescale/Makefile
+++ b/drivers/net/ethernet/freescale/Makefile
@@ -22,3 +22,4 @@ obj-$(CONFIG_UCC_GETH) += ucc_geth_driver.o
 ucc_geth_driver-objs := ucc_geth.o ucc_geth_ethtool.o
 
 obj-$(CONFIG_FSL_FMAN) += fman/
+obj-$(CONFIG_FSL_DPAA_ETH) += dpaa/
diff --git a/drivers/net/ethernet/freescale/dpaa/Kconfig b/drivers/net/ethernet/freescale/dpaa/Kconfig
new file mode 100644
index 0000000..f3a3454
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/Kconfig
@@ -0,0 +1,10 @@
+menuconfig FSL_DPAA_ETH
+	tristate "DPAA Ethernet"
+	depends on FSL_SOC && FSL_DPAA && FSL_FMAN
+	select PHYLIB
+	select FSL_FMAN_MAC
+	---help---
+	  Data Path Acceleration Architecture Ethernet driver,
+	  supporting the Freescale QorIQ chips.
+	  Depends on Freescale Buffer Manager and Queue Manager
+	  driver and Frame Manager Driver.
diff --git a/drivers/net/ethernet/freescale/dpaa/Makefile b/drivers/net/ethernet/freescale/dpaa/Makefile
new file mode 100644
index 0000000..fc76029
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/Makefile
@@ -0,0 +1,11 @@
+#
+# Makefile for the Freescale DPAA Ethernet controllers
+#
+
+# Include FMan headers
+FMAN        = $(srctree)/drivers/net/ethernet/freescale/fman
+ccflags-y += -I$(FMAN)
+
+obj-$(CONFIG_FSL_DPAA_ETH) += fsl_dpa.o
+
+fsl_dpa-objs += dpaa_eth.o
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
new file mode 100644
index 0000000..8b9b0720f
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -0,0 +1,2682 @@
+/* Copyright 2008 - 2016 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/of_platform.h>
+#include <linux/of_mdio.h>
+#include <linux/of_net.h>
+#include <linux/io.h>
+#include <linux/if_arp.h>
+#include <linux/if_vlan.h>
+#include <linux/icmp.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/udp.h>
+#include <linux/tcp.h>
+#include <linux/net.h>
+#include <linux/skbuff.h>
+#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
+#include <linux/highmem.h>
+#include <linux/percpu.h>
+#include <linux/dma-mapping.h>
+#include <linux/sort.h>
+#include <soc/fsl/bman.h>
+#include <soc/fsl/qman.h>
+
+#include "fman.h"
+#include "fman_port.h"
+#include "mac.h"
+#include "dpaa_eth.h"
+
+static int debug = -1;
+module_param(debug, int, 0444);
+MODULE_PARM_DESC(debug, "Module/Driver verbosity level (0=none,...,16=all)");
+
+static u16 tx_timeout = 1000;
+module_param(tx_timeout, ushort, 0444);
+MODULE_PARM_DESC(tx_timeout, "The Tx timeout in ms");
+
+#define FM_FD_STAT_RX_ERRORS						\
+	(FM_FD_ERR_DMA | FM_FD_ERR_PHYSICAL	| \
+	 FM_FD_ERR_SIZE | FM_FD_ERR_CLS_DISCARD | \
+	 FM_FD_ERR_EXTRACTION | FM_FD_ERR_NO_SCHEME	| \
+	 FM_FD_ERR_PRS_TIMEOUT | FM_FD_ERR_PRS_ILL_INSTRUCT | \
+	 FM_FD_ERR_PRS_HDR_ERR)
+
+#define FM_FD_STAT_TX_ERRORS \
+	(FM_FD_ERR_UNSUPPORTED_FORMAT | \
+	 FM_FD_ERR_LENGTH | FM_FD_ERR_DMA)
+
+#define DPAA_MSG_DEFAULT (NETIF_MSG_DRV | NETIF_MSG_PROBE | \
+			  NETIF_MSG_LINK | NETIF_MSG_IFUP | \
+			  NETIF_MSG_IFDOWN)
+
+#define DPAA_INGRESS_CS_THRESHOLD 0x10000000
+/* Ingress congestion threshold on FMan ports
+ * The size in bytes of the ingress tail-drop threshold on FMan ports.
+ * Traffic piling up above this value will be rejected by QMan and discarded
+ * by FMan.
+ */
+
+/* Size in bytes of the FQ taildrop threshold */
+#define DPAA_FQ_TD 0x200000
+
+#define DPAA_CS_THRESHOLD_1G 0x06000000
+/* Egress congestion threshold on 1G ports, range 0x1000 .. 0x10000000
+ * The size in bytes of the egress Congestion State notification threshold on
+ * 1G ports. The 1G dTSECs can quite easily be flooded by cores doing Tx in a
+ * tight loop (e.g. by sending UDP datagrams at "while(1) speed"),
+ * and the larger the frame size, the more acute the problem.
+ * So we have to find a balance between these factors:
+ * - avoiding the device staying congested for a prolonged time (risking
+ *   the netdev watchdog to fire - see also the tx_timeout module param);
+ * - affecting performance of protocols such as TCP, which otherwise
+ *   behave well under the congestion notification mechanism;
+ * - preventing the Tx cores from tightly-looping (as if the congestion
+ *   threshold was too low to be effective);
+ * - running out of memory if the CS threshold is set too high.
+ */
+
+#define DPAA_CS_THRESHOLD_10G 0x10000000
+/* The size in bytes of the egress Congestion State notification threshold on
+ * 10G ports, range 0x1000 .. 0x10000000
+ */
+
+/* Largest value that the FQD's OAL field can hold */
+#define FSL_QMAN_MAX_OAL	127
+
+/* Default alignment for start of data in an Rx FD */
+#define DPAA_FD_DATA_ALIGNMENT  16
+
+/* Values for the L3R field of the FM Parse Results
+ */
+/* L3 Type field: First IP Present IPv4 */
+#define FM_L3_PARSE_RESULT_IPV4	0x8000
+/* L3 Type field: First IP Present IPv6 */
+#define FM_L3_PARSE_RESULT_IPV6	0x4000
+/* Values for the L4R field of the FM Parse Results */
+/* L4 Type field: UDP */
+#define FM_L4_PARSE_RESULT_UDP	0x40
+/* L4 Type field: TCP */
+#define FM_L4_PARSE_RESULT_TCP	0x20
+
+#define DPAA_SGT_MAX_ENTRIES 16 /* maximum number of entries in SG Table */
+#define DPAA_BUFF_RELEASE_MAX 8 /* maximum number of buffers released at once */
+
+#define FSL_DPAA_BPID_INV		0xff
+#define FSL_DPAA_ETH_MAX_BUF_COUNT	128
+#define FSL_DPAA_ETH_REFILL_THRESHOLD	80
+
+#define DPAA_TX_PRIV_DATA_SIZE	16
+#define DPAA_PARSE_RESULTS_SIZE sizeof(struct fman_prs_result)
+#define DPAA_TIME_STAMP_SIZE 8
+#define DPAA_HASH_RESULTS_SIZE 8
+#define DPAA_RX_PRIV_DATA_SIZE	(u16)(DPAA_TX_PRIV_DATA_SIZE + \
+					dpaa_rx_extra_headroom)
+
+#define DPAA_ETH_RX_QUEUES	128
+
+#define DPAA_ENQUEUE_RETRIES	100000
+
+enum port_type {RX, TX};
+
+struct fm_port_fqs {
+	struct dpaa_fq *tx_defq;
+	struct dpaa_fq *tx_errq;
+	struct dpaa_fq *rx_defq;
+	struct dpaa_fq *rx_errq;
+};
+
+/* All the dpa bps in use at any moment */
+static struct dpaa_bp *dpaa_bp_array[BM_MAX_NUM_OF_POOLS];
+
+/* The raw buffer size must be cacheline aligned */
+#define DPAA_BP_RAW_SIZE 4096
+/* When using more than one buffer pool, the raw sizes are as follows:
+ * 1 bp: 4KB
+ * 2 bp: 2KB, 4KB
+ * 3 bp: 1KB, 2KB, 4KB
+ * 4 bp: 1KB, 2KB, 4KB, 8KB
+ */
+static inline size_t bpool_buffer_raw_size(u8 index, u8 cnt)
+{
+	size_t res = DPAA_BP_RAW_SIZE / 4;
+	u8 i;
+
+	for (i = (cnt < 3) ? cnt : 3; i < 3 + index; i++)
+		res *= 2;
+	return res;
+}
+
+/* FMan-DMA requires 16-byte alignment for Rx buffers, but SKB_DATA_ALIGN is
+ * even stronger (SMP_CACHE_BYTES-aligned), so we just get away with that,
+ * via SKB_WITH_OVERHEAD(). We can't rely on netdev_alloc_frag() giving us
+ * half-page-aligned buffers, so we reserve some more space for start-of-buffer
+ * alignment.
+ */
+#define dpaa_bp_size(raw_size) SKB_WITH_OVERHEAD((raw_size) - SMP_CACHE_BYTES)
+
+static int dpaa_max_frm;
+
+static int dpaa_rx_extra_headroom;
+
+#define dpaa_get_max_mtu()	\
+	(dpaa_max_frm - (VLAN_ETH_HLEN + ETH_FCS_LEN))
+
+static int dpaa_netdev_init(struct net_device *net_dev,
+			    const struct net_device_ops *dpaa_ops,
+			    u16 tx_timeout)
+{
+	struct dpaa_priv *priv = netdev_priv(net_dev);
+	struct device *dev = net_dev->dev.parent;
+	struct dpaa_percpu_priv *percpu_priv;
+	const u8 *mac_addr;
+	int i, err;
+
+	/* Although we access another CPU's private data here
+	 * we do it at initialization so it is safe
+	 */
+	for_each_possible_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+		percpu_priv->net_dev = net_dev;
+	}
+
+	net_dev->netdev_ops = dpaa_ops;
+	mac_addr = priv->mac_dev->addr;
+
+	net_dev->mem_start = priv->mac_dev->res->start;
+	net_dev->mem_end = priv->mac_dev->res->end;
+
+	net_dev->min_mtu = ETH_MIN_MTU;
+	net_dev->max_mtu = dpaa_get_max_mtu();
+
+	net_dev->hw_features |= (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+				 NETIF_F_LLTX);
+
+	net_dev->hw_features |= NETIF_F_SG | NETIF_F_HIGHDMA;
+	/* The kernel enables GSO automatically, if we declare NETIF_F_SG.
+	 * For conformity, we'll still declare GSO explicitly.
+	 */
+	net_dev->features |= NETIF_F_GSO;
+
+	net_dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+	/* we do not want shared skbs on TX */
+	net_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+
+	net_dev->features |= net_dev->hw_features;
+	net_dev->vlan_features = net_dev->features;
+
+	memcpy(net_dev->perm_addr, mac_addr, net_dev->addr_len);
+	memcpy(net_dev->dev_addr, mac_addr, net_dev->addr_len);
+
+	net_dev->needed_headroom = priv->tx_headroom;
+	net_dev->watchdog_timeo = msecs_to_jiffies(tx_timeout);
+
+	/* start without the RUNNING flag, phylib controls it later */
+	netif_carrier_off(net_dev);
+
+	err = register_netdev(net_dev);
+	if (err < 0) {
+		dev_err(dev, "register_netdev() = %d\n", err);
+		return err;
+	}
+
+	return 0;
+}
+
+static int dpaa_stop(struct net_device *net_dev)
+{
+	struct mac_device *mac_dev;
+	struct dpaa_priv *priv;
+	int i, err, error;
+
+	priv = netdev_priv(net_dev);
+	mac_dev = priv->mac_dev;
+
+	netif_tx_stop_all_queues(net_dev);
+	/* Allow the Fman (Tx) port to process in-flight frames before we
+	 * try switching it off.
+	 */
+	usleep_range(5000, 10000);
+
+	err = mac_dev->stop(mac_dev);
+	if (err < 0)
+		netif_err(priv, ifdown, net_dev, "mac_dev->stop() = %d\n",
+			  err);
+
+	for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++) {
+		error = fman_port_disable(mac_dev->port[i]);
+		if (error)
+			err = error;
+	}
+
+	if (net_dev->phydev)
+		phy_disconnect(net_dev->phydev);
+	net_dev->phydev = NULL;
+
+	return err;
+}
+
+static void dpaa_tx_timeout(struct net_device *net_dev)
+{
+	struct dpaa_percpu_priv *percpu_priv;
+	const struct dpaa_priv	*priv;
+
+	priv = netdev_priv(net_dev);
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+
+	netif_crit(priv, timer, net_dev, "Transmit timeout latency: %u ms\n",
+		   jiffies_to_msecs(jiffies - dev_trans_start(net_dev)));
+
+	percpu_priv->stats.tx_errors++;
+}
+
+/* Calculates the statistics for the given device by adding the statistics
+ * collected by each CPU.
+ */
+static struct rtnl_link_stats64 *dpaa_get_stats64(struct net_device *net_dev,
+						  struct rtnl_link_stats64 *s)
+{
+	int numstats = sizeof(struct rtnl_link_stats64) / sizeof(u64);
+	struct dpaa_priv *priv = netdev_priv(net_dev);
+	struct dpaa_percpu_priv *percpu_priv;
+	u64 *netstats = (u64 *)s;
+	u64 *cpustats;
+	int i, j;
+
+	for_each_possible_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+
+		cpustats = (u64 *)&percpu_priv->stats;
+
+		/* add stats from all CPUs */
+		for (j = 0; j < numstats; j++)
+			netstats[j] += cpustats[j];
+	}
+
+	return s;
+}
+
+static struct mac_device *dpaa_mac_dev_get(struct platform_device *pdev)
+{
+	struct platform_device *of_dev;
+	struct dpaa_eth_data *eth_data;
+	struct device *dpaa_dev, *dev;
+	struct device_node *mac_node;
+	struct mac_device *mac_dev;
+
+	dpaa_dev = &pdev->dev;
+	eth_data = dpaa_dev->platform_data;
+	if (!eth_data)
+		return ERR_PTR(-ENODEV);
+
+	mac_node = eth_data->mac_node;
+
+	of_dev = of_find_device_by_node(mac_node);
+	if (!of_dev) {
+		dev_err(dpaa_dev, "of_find_device_by_node(%s) failed\n",
+			mac_node->full_name);
+		of_node_put(mac_node);
+		return ERR_PTR(-EINVAL);
+	}
+	of_node_put(mac_node);
+
+	dev = &of_dev->dev;
+
+	mac_dev = dev_get_drvdata(dev);
+	if (!mac_dev) {
+		dev_err(dpaa_dev, "dev_get_drvdata(%s) failed\n",
+			dev_name(dev));
+		return ERR_PTR(-EINVAL);
+	}
+
+	return mac_dev;
+}
+
+static int dpaa_set_mac_address(struct net_device *net_dev, void *addr)
+{
+	const struct dpaa_priv *priv;
+	struct mac_device *mac_dev;
+	struct sockaddr old_addr;
+	int err;
+
+	priv = netdev_priv(net_dev);
+
+	memcpy(old_addr.sa_data, net_dev->dev_addr,  ETH_ALEN);
+
+	err = eth_mac_addr(net_dev, addr);
+	if (err < 0) {
+		netif_err(priv, drv, net_dev, "eth_mac_addr() = %d\n", err);
+		return err;
+	}
+
+	mac_dev = priv->mac_dev;
+
+	err = mac_dev->change_addr(mac_dev->fman_mac,
+				   (enet_addr_t *)net_dev->dev_addr);
+	if (err < 0) {
+		netif_err(priv, drv, net_dev, "mac_dev->change_addr() = %d\n",
+			  err);
+		/* reverting to previous address */
+		eth_mac_addr(net_dev, &old_addr);
+
+		return err;
+	}
+
+	return 0;
+}
+
+static void dpaa_set_rx_mode(struct net_device *net_dev)
+{
+	const struct dpaa_priv	*priv;
+	int err;
+
+	priv = netdev_priv(net_dev);
+
+	if (!!(net_dev->flags & IFF_PROMISC) != priv->mac_dev->promisc) {
+		priv->mac_dev->promisc = !priv->mac_dev->promisc;
+		err = priv->mac_dev->set_promisc(priv->mac_dev->fman_mac,
+						 priv->mac_dev->promisc);
+		if (err < 0)
+			netif_err(priv, drv, net_dev,
+				  "mac_dev->set_promisc() = %d\n",
+				  err);
+	}
+
+	err = priv->mac_dev->set_multi(net_dev, priv->mac_dev);
+	if (err < 0)
+		netif_err(priv, drv, net_dev, "mac_dev->set_multi() = %d\n",
+			  err);
+}
+
+static struct dpaa_bp *dpaa_bpid2pool(int bpid)
+{
+	if (WARN_ON(bpid < 0 || bpid >= BM_MAX_NUM_OF_POOLS))
+		return NULL;
+
+	return dpaa_bp_array[bpid];
+}
+
+/* checks if this bpool is already allocated */
+static bool dpaa_bpid2pool_use(int bpid)
+{
+	if (dpaa_bpid2pool(bpid)) {
+		atomic_inc(&dpaa_bp_array[bpid]->refs);
+		return true;
+	}
+
+	return false;
+}
+
+/* called only once per bpid by dpaa_bp_alloc_pool() */
+static void dpaa_bpid2pool_map(int bpid, struct dpaa_bp *dpaa_bp)
+{
+	dpaa_bp_array[bpid] = dpaa_bp;
+	atomic_set(&dpaa_bp->refs, 1);
+}
+
+static int dpaa_bp_alloc_pool(struct dpaa_bp *dpaa_bp)
+{
+	int err;
+
+	if (dpaa_bp->size == 0 || dpaa_bp->config_count == 0) {
+		pr_err("%s: Buffer pool is not properly initialized! Missing size or initial number of buffers\n",
+		       __func__);
+		return -EINVAL;
+	}
+
+	/* If the pool is already specified, we only create one per bpid */
+	if (dpaa_bp->bpid != FSL_DPAA_BPID_INV &&
+	    dpaa_bpid2pool_use(dpaa_bp->bpid))
+		return 0;
+
+	if (dpaa_bp->bpid == FSL_DPAA_BPID_INV) {
+		dpaa_bp->pool = bman_new_pool();
+		if (!dpaa_bp->pool) {
+			pr_err("%s: bman_new_pool() failed\n",
+			       __func__);
+			return -ENODEV;
+		}
+
+		dpaa_bp->bpid = (u8)bman_get_bpid(dpaa_bp->pool);
+	}
+
+	if (dpaa_bp->seed_cb) {
+		err = dpaa_bp->seed_cb(dpaa_bp);
+		if (err)
+			goto pool_seed_failed;
+	}
+
+	dpaa_bpid2pool_map(dpaa_bp->bpid, dpaa_bp);
+
+	return 0;
+
+pool_seed_failed:
+	pr_err("%s: pool seeding failed\n", __func__);
+	bman_free_pool(dpaa_bp->pool);
+
+	return err;
+}
+
+/* remove and free all the buffers from the given buffer pool */
+static void dpaa_bp_drain(struct dpaa_bp *bp)
+{
+	u8 num = 8;
+	int ret;
+
+	do {
+		struct bm_buffer bmb[8];
+		int i;
+
+		ret = bman_acquire(bp->pool, bmb, num);
+		if (ret < 0) {
+			if (num == 8) {
+				/* we have less than 8 buffers left;
+				 * drain them one by one
+				 */
+				num = 1;
+				ret = 1;
+				continue;
+			} else {
+				/* Pool is fully drained */
+				break;
+			}
+		}
+
+		if (bp->free_buf_cb)
+			for (i = 0; i < num; i++)
+				bp->free_buf_cb(bp, &bmb[i]);
+	} while (ret > 0);
+}
+
+static void dpaa_bp_free(struct dpaa_bp *dpaa_bp)
+{
+	struct dpaa_bp *bp = dpaa_bpid2pool(dpaa_bp->bpid);
+
+	/* the mapping between bpid and dpaa_bp is done very late in the
+	 * allocation procedure; if something failed before the mapping, the bp
+	 * was not configured, therefore we don't need the below instructions
+	 */
+	if (!bp)
+		return;
+
+	if (!atomic_dec_and_test(&bp->refs))
+		return;
+
+	if (bp->free_buf_cb)
+		dpaa_bp_drain(bp);
+
+	dpaa_bp_array[bp->bpid] = NULL;
+	bman_free_pool(bp->pool);
+}
+
+static void dpaa_bps_free(struct dpaa_priv *priv)
+{
+	int i;
+
+	for (i = 0; i < DPAA_BPS_NUM; i++)
+		dpaa_bp_free(priv->dpaa_bps[i]);
+}
+
+/* Use multiple WQs for FQ assignment:
+ *	- Tx Confirmation queues go to WQ1.
+ *	- Rx Error and Tx Error queues go to WQ2 (giving them a better chance
+ *	  to be scheduled, in case there are many more FQs in WQ3).
+ *	- Rx Default and Tx queues go to WQ3 (no differentiation between
+ *	  Rx and Tx traffic).
+ * This ensures that Tx-confirmed buffers are timely released. In particular,
+ * it avoids congestion on the Tx Confirm FQs, which can pile up PFDRs if they
+ * are greatly outnumbered by other FQs in the system, while
+ * dequeue scheduling is round-robin.
+ */
+static inline void dpaa_assign_wq(struct dpaa_fq *fq)
+{
+	switch (fq->fq_type) {
+	case FQ_TYPE_TX_CONFIRM:
+	case FQ_TYPE_TX_CONF_MQ:
+		fq->wq = 1;
+		break;
+	case FQ_TYPE_RX_ERROR:
+	case FQ_TYPE_TX_ERROR:
+		fq->wq = 2;
+		break;
+	case FQ_TYPE_RX_DEFAULT:
+	case FQ_TYPE_TX:
+		fq->wq = 3;
+		break;
+	default:
+		WARN(1, "Invalid FQ type %d for FQID %d!\n",
+		     fq->fq_type, fq->fqid);
+	}
+}
+
+static struct dpaa_fq *dpaa_fq_alloc(struct device *dev,
+				     u32 start, u32 count,
+				     struct list_head *list,
+				     enum dpaa_fq_type fq_type)
+{
+	struct dpaa_fq *dpaa_fq;
+	int i;
+
+	dpaa_fq = devm_kzalloc(dev, sizeof(*dpaa_fq) * count,
+			       GFP_KERNEL);
+	if (!dpaa_fq)
+		return NULL;
+
+	for (i = 0; i < count; i++) {
+		dpaa_fq[i].fq_type = fq_type;
+		dpaa_fq[i].fqid = start ? start + i : 0;
+		list_add_tail(&dpaa_fq[i].list, list);
+	}
+
+	for (i = 0; i < count; i++)
+		dpaa_assign_wq(dpaa_fq + i);
+
+	return dpaa_fq;
+}
+
+static int dpaa_alloc_all_fqs(struct device *dev, struct list_head *list,
+			      struct fm_port_fqs *port_fqs)
+{
+	struct dpaa_fq *dpaa_fq;
+
+	dpaa_fq = dpaa_fq_alloc(dev, 0, 1, list, FQ_TYPE_RX_ERROR);
+	if (!dpaa_fq)
+		goto fq_alloc_failed;
+
+	port_fqs->rx_errq = &dpaa_fq[0];
+
+	dpaa_fq = dpaa_fq_alloc(dev, 0, 1, list, FQ_TYPE_RX_DEFAULT);
+	if (!dpaa_fq)
+		goto fq_alloc_failed;
+
+	port_fqs->rx_defq = &dpaa_fq[0];
+
+	if (!dpaa_fq_alloc(dev, 0, DPAA_ETH_TXQ_NUM, list, FQ_TYPE_TX_CONF_MQ))
+		goto fq_alloc_failed;
+
+	dpaa_fq = dpaa_fq_alloc(dev, 0, 1, list, FQ_TYPE_TX_ERROR);
+	if (!dpaa_fq)
+		goto fq_alloc_failed;
+
+	port_fqs->tx_errq = &dpaa_fq[0];
+
+	dpaa_fq = dpaa_fq_alloc(dev, 0, 1, list, FQ_TYPE_TX_CONFIRM);
+	if (!dpaa_fq)
+		goto fq_alloc_failed;
+
+	port_fqs->tx_defq = &dpaa_fq[0];
+
+	if (!dpaa_fq_alloc(dev, 0, DPAA_ETH_TXQ_NUM, list, FQ_TYPE_TX))
+		goto fq_alloc_failed;
+
+	return 0;
+
+fq_alloc_failed:
+	dev_err(dev, "dpaa_fq_alloc() failed\n");
+	return -ENOMEM;
+}
+
+static u32 rx_pool_channel;
+static DEFINE_SPINLOCK(rx_pool_channel_init);
+
+static int dpaa_get_channel(void)
+{
+	spin_lock(&rx_pool_channel_init);
+	if (!rx_pool_channel) {
+		u32 pool;
+		int ret;
+
+		ret = qman_alloc_pool(&pool);
+
+		if (!ret)
+			rx_pool_channel = pool;
+	}
+	spin_unlock(&rx_pool_channel_init);
+	if (!rx_pool_channel)
+		return -ENOMEM;
+	return rx_pool_channel;
+}
+
+static void dpaa_release_channel(void)
+{
+	qman_release_pool(rx_pool_channel);
+}
+
+static void dpaa_eth_add_channel(u16 channel)
+{
+	u32 pool = QM_SDQCR_CHANNELS_POOL_CONV(channel);
+	const cpumask_t *cpus = qman_affine_cpus();
+	struct qman_portal *portal;
+	int cpu;
+
+	for_each_cpu(cpu, cpus) {
+		portal = qman_get_affine_portal(cpu);
+		qman_p_static_dequeue_add(portal, pool);
+	}
+}
+
+/* Congestion group state change notification callback.
+ * Stops the device's egress queues while they are congested and
+ * wakes them upon exiting congested state.
+ * Also updates some CGR-related stats.
+ */
+static void dpaa_eth_cgscn(struct qman_portal *qm, struct qman_cgr *cgr,
+			   int congested)
+{
+	struct dpaa_priv *priv = (struct dpaa_priv *)container_of(cgr,
+		struct dpaa_priv, cgr_data.cgr);
+
+	if (congested)
+		netif_tx_stop_all_queues(priv->net_dev);
+	else
+		netif_tx_wake_all_queues(priv->net_dev);
+}
+
+static int dpaa_eth_cgr_init(struct dpaa_priv *priv)
+{
+	struct qm_mcc_initcgr initcgr;
+	u32 cs_th;
+	int err;
+
+	err = qman_alloc_cgrid(&priv->cgr_data.cgr.cgrid);
+	if (err < 0) {
+		if (netif_msg_drv(priv))
+			pr_err("%s: Error %d allocating CGR ID\n",
+			       __func__, err);
+		goto out_error;
+	}
+	priv->cgr_data.cgr.cb = dpaa_eth_cgscn;
+
+	/* Enable Congestion State Change Notifications and CS taildrop */
+	initcgr.we_mask = QM_CGR_WE_CSCN_EN | QM_CGR_WE_CS_THRES;
+	initcgr.cgr.cscn_en = QM_CGR_EN;
+
+	/* Set different thresholds based on the MAC speed.
+	 * This may turn suboptimal if the MAC is reconfigured at a speed
+	 * lower than its max, e.g. if a dTSEC later negotiates a 100Mbps link.
+	 * In such cases, we ought to reconfigure the threshold, too.
+	 */
+	if (priv->mac_dev->if_support & SUPPORTED_10000baseT_Full)
+		cs_th = DPAA_CS_THRESHOLD_10G;
+	else
+		cs_th = DPAA_CS_THRESHOLD_1G;
+	qm_cgr_cs_thres_set64(&initcgr.cgr.cs_thres, cs_th, 1);
+
+	initcgr.we_mask |= QM_CGR_WE_CSTD_EN;
+	initcgr.cgr.cstd_en = QM_CGR_EN;
+
+	err = qman_create_cgr(&priv->cgr_data.cgr, QMAN_CGR_FLAG_USE_INIT,
+			      &initcgr);
+	if (err < 0) {
+		if (netif_msg_drv(priv))
+			pr_err("%s: Error %d creating CGR with ID %d\n",
+			       __func__, err, priv->cgr_data.cgr.cgrid);
+		qman_release_cgrid(priv->cgr_data.cgr.cgrid);
+		goto out_error;
+	}
+	if (netif_msg_drv(priv))
+		pr_debug("Created CGR %d for netdev with hwaddr %pM on QMan channel %d\n",
+			 priv->cgr_data.cgr.cgrid, priv->mac_dev->addr,
+			 priv->cgr_data.cgr.chan);
+
+out_error:
+	return err;
+}
+
+static inline void dpaa_setup_ingress(const struct dpaa_priv *priv,
+				      struct dpaa_fq *fq,
+				      const struct qman_fq *template)
+{
+	fq->fq_base = *template;
+	fq->net_dev = priv->net_dev;
+
+	fq->flags = QMAN_FQ_FLAG_NO_ENQUEUE;
+	fq->channel = priv->channel;
+}
+
+static inline void dpaa_setup_egress(const struct dpaa_priv *priv,
+				     struct dpaa_fq *fq,
+				     struct fman_port *port,
+				     const struct qman_fq *template)
+{
+	fq->fq_base = *template;
+	fq->net_dev = priv->net_dev;
+
+	if (port) {
+		fq->flags = QMAN_FQ_FLAG_TO_DCPORTAL;
+		fq->channel = (u16)fman_port_get_qman_channel_id(port);
+	} else {
+		fq->flags = QMAN_FQ_FLAG_NO_MODIFY;
+	}
+}
+
+static void dpaa_fq_setup(struct dpaa_priv *priv,
+			  const struct dpaa_fq_cbs *fq_cbs,
+			  struct fman_port *tx_port)
+{
+	int egress_cnt = 0, conf_cnt = 0, num_portals = 0, cpu;
+	const cpumask_t *affine_cpus = qman_affine_cpus();
+	u16 portals[NR_CPUS];
+	struct dpaa_fq *fq;
+
+	for_each_cpu(cpu, affine_cpus)
+		portals[num_portals++] = qman_affine_channel(cpu);
+	if (num_portals == 0)
+		dev_err(priv->net_dev->dev.parent,
+			"No Qman software (affine) channels found");
+
+	/* Initialize each FQ in the list */
+	list_for_each_entry(fq, &priv->dpaa_fq_list, list) {
+		switch (fq->fq_type) {
+		case FQ_TYPE_RX_DEFAULT:
+			dpaa_setup_ingress(priv, fq, &fq_cbs->rx_defq);
+			break;
+		case FQ_TYPE_RX_ERROR:
+			dpaa_setup_ingress(priv, fq, &fq_cbs->rx_errq);
+			break;
+		case FQ_TYPE_TX:
+			dpaa_setup_egress(priv, fq, tx_port,
+					  &fq_cbs->egress_ern);
+			/* If we have more Tx queues than the number of cores,
+			 * just ignore the extra ones.
+			 */
+			if (egress_cnt < DPAA_ETH_TXQ_NUM)
+				priv->egress_fqs[egress_cnt++] = &fq->fq_base;
+			break;
+		case FQ_TYPE_TX_CONF_MQ:
+			priv->conf_fqs[conf_cnt++] = &fq->fq_base;
+			/* fall through */
+		case FQ_TYPE_TX_CONFIRM:
+			dpaa_setup_ingress(priv, fq, &fq_cbs->tx_defq);
+			break;
+		case FQ_TYPE_TX_ERROR:
+			dpaa_setup_ingress(priv, fq, &fq_cbs->tx_errq);
+			break;
+		default:
+			dev_warn(priv->net_dev->dev.parent,
+				 "Unknown FQ type detected!\n");
+			break;
+		}
+	}
+
+	 /* Make sure all CPUs receive a corresponding Tx queue. */
+	while (egress_cnt < DPAA_ETH_TXQ_NUM) {
+		list_for_each_entry(fq, &priv->dpaa_fq_list, list) {
+			if (fq->fq_type != FQ_TYPE_TX)
+				continue;
+			priv->egress_fqs[egress_cnt++] = &fq->fq_base;
+			if (egress_cnt == DPAA_ETH_TXQ_NUM)
+				break;
+		}
+	}
+}
+
+static inline int dpaa_tx_fq_to_id(const struct dpaa_priv *priv,
+				   struct qman_fq *tx_fq)
+{
+	int i;
+
+	for (i = 0; i < DPAA_ETH_TXQ_NUM; i++)
+		if (priv->egress_fqs[i] == tx_fq)
+			return i;
+
+	return -EINVAL;
+}
+
+static int dpaa_fq_init(struct dpaa_fq *dpaa_fq, bool td_enable)
+{
+	const struct dpaa_priv	*priv;
+	struct qman_fq *confq = NULL;
+	struct qm_mcc_initfq initfq;
+	struct device *dev;
+	struct qman_fq *fq;
+	int queue_id;
+	int err;
+
+	priv = netdev_priv(dpaa_fq->net_dev);
+	dev = dpaa_fq->net_dev->dev.parent;
+
+	if (dpaa_fq->fqid == 0)
+		dpaa_fq->flags |= QMAN_FQ_FLAG_DYNAMIC_FQID;
+
+	dpaa_fq->init = !(dpaa_fq->flags & QMAN_FQ_FLAG_NO_MODIFY);
+
+	err = qman_create_fq(dpaa_fq->fqid, dpaa_fq->flags, &dpaa_fq->fq_base);
+	if (err) {
+		dev_err(dev, "qman_create_fq() failed\n");
+		return err;
+	}
+	fq = &dpaa_fq->fq_base;
+
+	if (dpaa_fq->init) {
+		memset(&initfq, 0, sizeof(initfq));
+
+		initfq.we_mask = QM_INITFQ_WE_FQCTRL;
+		/* Note: we may get to keep an empty FQ in cache */
+		initfq.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
+
+		/* Try to reduce the number of portal interrupts for
+		 * Tx Confirmation FQs.
+		 */
+		if (dpaa_fq->fq_type == FQ_TYPE_TX_CONFIRM)
+			initfq.fqd.fq_ctrl |= QM_FQCTRL_HOLDACTIVE;
+
+		/* FQ placement */
+		initfq.we_mask |= QM_INITFQ_WE_DESTWQ;
+
+		qm_fqd_set_destwq(&initfq.fqd, dpaa_fq->channel, dpaa_fq->wq);
+
+		/* Put all egress queues in a congestion group of their own.
+		 * Sensu stricto, the Tx confirmation queues are Rx FQs,
+		 * rather than Tx - but they nonetheless account for the
+		 * memory footprint on behalf of egress traffic. We therefore
+		 * place them in the netdev's CGR, along with the Tx FQs.
+		 */
+		if (dpaa_fq->fq_type == FQ_TYPE_TX ||
+		    dpaa_fq->fq_type == FQ_TYPE_TX_CONFIRM ||
+		    dpaa_fq->fq_type == FQ_TYPE_TX_CONF_MQ) {
+			initfq.we_mask |= QM_INITFQ_WE_CGID;
+			initfq.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+			initfq.fqd.cgid = (u8)priv->cgr_data.cgr.cgrid;
+			/* Set a fixed overhead accounting, in an attempt to
+			 * reduce the impact of fixed-size skb shells and the
+			 * driver's needed headroom on system memory. This is
+			 * especially the case when the egress traffic is
+			 * composed of small datagrams.
+			 * Unfortunately, QMan's OAL value is capped to an
+			 * insufficient value, but even that is better than
+			 * no overhead accounting at all.
+			 */
+			initfq.we_mask |= QM_INITFQ_WE_OAC;
+			qm_fqd_set_oac(&initfq.fqd, QM_OAC_CG);
+			qm_fqd_set_oal(&initfq.fqd,
+				       min(sizeof(struct sk_buff) +
+				       priv->tx_headroom,
+				       (size_t)FSL_QMAN_MAX_OAL));
+		}
+
+		if (td_enable) {
+			initfq.we_mask |= QM_INITFQ_WE_TDTHRESH;
+			qm_fqd_set_taildrop(&initfq.fqd, DPAA_FQ_TD, 1);
+			initfq.fqd.fq_ctrl = QM_FQCTRL_TDE;
+		}
+
+		if (dpaa_fq->fq_type == FQ_TYPE_TX) {
+			queue_id = dpaa_tx_fq_to_id(priv, &dpaa_fq->fq_base);
+			if (queue_id >= 0)
+				confq = priv->conf_fqs[queue_id];
+			if (confq) {
+				initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			/* ContextA: OVOM=1(use contextA2 bits instead of ICAD)
+			 *	     A2V=1 (contextA A2 field is valid)
+			 *	     A0V=1 (contextA A0 field is valid)
+			 *	     B0V=1 (contextB field is valid)
+			 * ContextA A2: EBD=1 (deallocate buffers inside FMan)
+			 * ContextB B0(ASPID): 0 (absolute Virtual Storage ID)
+			 */
+				initfq.fqd.context_a.hi = 0x1e000000;
+				initfq.fqd.context_a.lo = 0x80000000;
+			}
+		}
+
+		/* Put all the ingress queues in our "ingress CGR". */
+		if (priv->use_ingress_cgr &&
+		    (dpaa_fq->fq_type == FQ_TYPE_RX_DEFAULT ||
+		     dpaa_fq->fq_type == FQ_TYPE_RX_ERROR)) {
+			initfq.we_mask |= QM_INITFQ_WE_CGID;
+			initfq.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+			initfq.fqd.cgid = (u8)priv->ingress_cgr.cgrid;
+			/* Set a fixed overhead accounting, just like for the
+			 * egress CGR.
+			 */
+			initfq.we_mask |= QM_INITFQ_WE_OAC;
+			qm_fqd_set_oac(&initfq.fqd, QM_OAC_CG);
+			qm_fqd_set_oal(&initfq.fqd,
+				       min(sizeof(struct sk_buff) +
+				       priv->tx_headroom,
+				       (size_t)FSL_QMAN_MAX_OAL));
+		}
+
+		/* Initialization common to all ingress queues */
+		if (dpaa_fq->flags & QMAN_FQ_FLAG_NO_ENQUEUE) {
+			initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			initfq.fqd.fq_ctrl |=
+				QM_FQCTRL_HOLDACTIVE;
+			initfq.fqd.context_a.stashing.exclusive =
+				QM_STASHING_EXCL_DATA | QM_STASHING_EXCL_CTX |
+				QM_STASHING_EXCL_ANNOTATION;
+			qm_fqd_set_stashing(&initfq.fqd, 1, 2,
+					    DIV_ROUND_UP(sizeof(struct qman_fq),
+							 64));
+		}
+
+		err = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &initfq);
+		if (err < 0) {
+			dev_err(dev, "qman_init_fq(%u) = %d\n",
+				qman_fq_fqid(fq), err);
+			qman_destroy_fq(fq);
+			return err;
+		}
+	}
+
+	dpaa_fq->fqid = qman_fq_fqid(fq);
+
+	return 0;
+}
+
+static int dpaa_fq_free_entry(struct device *dev, struct qman_fq *fq)
+{
+	const struct dpaa_priv  *priv;
+	struct dpaa_fq *dpaa_fq;
+	int err, error;
+
+	err = 0;
+
+	dpaa_fq = container_of(fq, struct dpaa_fq, fq_base);
+	priv = netdev_priv(dpaa_fq->net_dev);
+
+	if (dpaa_fq->init) {
+		err = qman_retire_fq(fq, NULL);
+		if (err < 0 && netif_msg_drv(priv))
+			dev_err(dev, "qman_retire_fq(%u) = %d\n",
+				qman_fq_fqid(fq), err);
+
+		error = qman_oos_fq(fq);
+		if (error < 0 && netif_msg_drv(priv)) {
+			dev_err(dev, "qman_oos_fq(%u) = %d\n",
+				qman_fq_fqid(fq), error);
+			if (err >= 0)
+				err = error;
+		}
+	}
+
+	qman_destroy_fq(fq);
+	list_del(&dpaa_fq->list);
+
+	return err;
+}
+
+static int dpaa_fq_free(struct device *dev, struct list_head *list)
+{
+	struct dpaa_fq *dpaa_fq, *tmp;
+	int err, error;
+
+	err = 0;
+	list_for_each_entry_safe(dpaa_fq, tmp, list, list) {
+		error = dpaa_fq_free_entry(dev, (struct qman_fq *)dpaa_fq);
+		if (error < 0 && err >= 0)
+			err = error;
+	}
+
+	return err;
+}
+
+static void dpaa_eth_init_tx_port(struct fman_port *port, struct dpaa_fq *errq,
+				  struct dpaa_fq *defq,
+				  struct dpaa_buffer_layout *buf_layout)
+{
+	struct fman_buffer_prefix_content buf_prefix_content;
+	struct fman_port_params params;
+	int err;
+
+	memset(&params, 0, sizeof(params));
+	memset(&buf_prefix_content, 0, sizeof(buf_prefix_content));
+
+	buf_prefix_content.priv_data_size = buf_layout->priv_data_size;
+	buf_prefix_content.pass_prs_result = true;
+	buf_prefix_content.pass_hash_result = true;
+	buf_prefix_content.pass_time_stamp = false;
+	buf_prefix_content.data_align = DPAA_FD_DATA_ALIGNMENT;
+
+	params.specific_params.non_rx_params.err_fqid = errq->fqid;
+	params.specific_params.non_rx_params.dflt_fqid = defq->fqid;
+
+	err = fman_port_config(port, &params);
+	if (err)
+		pr_err("%s: fman_port_config failed\n", __func__);
+
+	err = fman_port_cfg_buf_prefix_content(port, &buf_prefix_content);
+	if (err)
+		pr_err("%s: fman_port_cfg_buf_prefix_content failed\n",
+		       __func__);
+
+	err = fman_port_init(port);
+	if (err)
+		pr_err("%s: fm_port_init failed\n", __func__);
+}
+
+static void dpaa_eth_init_rx_port(struct fman_port *port, struct dpaa_bp **bps,
+				  size_t count, struct dpaa_fq *errq,
+				  struct dpaa_fq *defq,
+				  struct dpaa_buffer_layout *buf_layout)
+{
+	struct fman_buffer_prefix_content buf_prefix_content;
+	struct fman_port_rx_params *rx_p;
+	struct fman_port_params params;
+	int i, err;
+
+	memset(&params, 0, sizeof(params));
+	memset(&buf_prefix_content, 0, sizeof(buf_prefix_content));
+
+	buf_prefix_content.priv_data_size = buf_layout->priv_data_size;
+	buf_prefix_content.pass_prs_result = true;
+	buf_prefix_content.pass_hash_result = true;
+	buf_prefix_content.pass_time_stamp = false;
+	buf_prefix_content.data_align = DPAA_FD_DATA_ALIGNMENT;
+
+	rx_p = &params.specific_params.rx_params;
+	rx_p->err_fqid = errq->fqid;
+	rx_p->dflt_fqid = defq->fqid;
+
+	count = min(ARRAY_SIZE(rx_p->ext_buf_pools.ext_buf_pool), count);
+	rx_p->ext_buf_pools.num_of_pools_used = (u8)count;
+	for (i = 0; i < count; i++) {
+		rx_p->ext_buf_pools.ext_buf_pool[i].id =  bps[i]->bpid;
+		rx_p->ext_buf_pools.ext_buf_pool[i].size = (u16)bps[i]->size;
+	}
+
+	err = fman_port_config(port, &params);
+	if (err)
+		pr_err("%s: fman_port_config failed\n", __func__);
+
+	err = fman_port_cfg_buf_prefix_content(port, &buf_prefix_content);
+	if (err)
+		pr_err("%s: fman_port_cfg_buf_prefix_content failed\n",
+		       __func__);
+
+	err = fman_port_init(port);
+	if (err)
+		pr_err("%s: fm_port_init failed\n", __func__);
+}
+
+static void dpaa_eth_init_ports(struct mac_device *mac_dev,
+				struct dpaa_bp **bps, size_t count,
+				struct fm_port_fqs *port_fqs,
+				struct dpaa_buffer_layout *buf_layout,
+				struct device *dev)
+{
+	struct fman_port *rxport = mac_dev->port[RX];
+	struct fman_port *txport = mac_dev->port[TX];
+
+	dpaa_eth_init_tx_port(txport, port_fqs->tx_errq,
+			      port_fqs->tx_defq, &buf_layout[TX]);
+	dpaa_eth_init_rx_port(rxport, bps, count, port_fqs->rx_errq,
+			      port_fqs->rx_defq, &buf_layout[RX]);
+}
+
+static int dpaa_bman_release(const struct dpaa_bp *dpaa_bp,
+			     struct bm_buffer *bmb, int cnt)
+{
+	int err;
+
+	err = bman_release(dpaa_bp->pool, bmb, cnt);
+	/* Should never occur, address anyway to avoid leaking the buffers */
+	if (unlikely(WARN_ON(err)) && dpaa_bp->free_buf_cb)
+		while (cnt-- > 0)
+			dpaa_bp->free_buf_cb(dpaa_bp, &bmb[cnt]);
+
+	return cnt;
+}
+
+static void dpaa_release_sgt_members(struct qm_sg_entry *sgt)
+{
+	struct bm_buffer bmb[DPAA_BUFF_RELEASE_MAX];
+	struct dpaa_bp *dpaa_bp;
+	int i = 0, j;
+
+	memset(bmb, 0, sizeof(bmb));
+
+	do {
+		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
+		if (!dpaa_bp)
+			return;
+
+		j = 0;
+		do {
+			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
+
+			bm_buffer_set64(&bmb[j], qm_sg_entry_get64(&sgt[i]));
+
+			j++; i++;
+		} while (j < ARRAY_SIZE(bmb) &&
+				!qm_sg_entry_is_final(&sgt[i - 1]) &&
+				sgt[i - 1].bpid == sgt[i].bpid);
+
+		dpaa_bman_release(dpaa_bp, bmb, j);
+	} while (!qm_sg_entry_is_final(&sgt[i - 1]));
+}
+
+static void dpaa_fd_release(const struct net_device *net_dev,
+			    const struct qm_fd *fd)
+{
+	struct qm_sg_entry *sgt;
+	struct dpaa_bp *dpaa_bp;
+	struct bm_buffer bmb;
+	dma_addr_t addr;
+	void *vaddr;
+
+	bmb.data = 0;
+	bm_buffer_set64(&bmb, qm_fd_addr(fd));
+
+	dpaa_bp = dpaa_bpid2pool(fd->bpid);
+	if (!dpaa_bp)
+		return;
+
+	if (qm_fd_get_format(fd) == qm_fd_sg) {
+		vaddr = phys_to_virt(qm_fd_addr(fd));
+		sgt = vaddr + qm_fd_get_offset(fd);
+
+		dma_unmap_single(dpaa_bp->dev, qm_fd_addr(fd), dpaa_bp->size,
+				 DMA_FROM_DEVICE);
+
+		dpaa_release_sgt_members(sgt);
+
+		addr = dma_map_single(dpaa_bp->dev, vaddr, dpaa_bp->size,
+				      DMA_FROM_DEVICE);
+		if (dma_mapping_error(dpaa_bp->dev, addr)) {
+			dev_err(dpaa_bp->dev, "DMA mapping failed");
+			return;
+		}
+		bm_buffer_set64(&bmb, addr);
+	}
+
+	dpaa_bman_release(dpaa_bp, &bmb, 1);
+}
+
+/* Turn on HW checksum computation for this outgoing frame.
+ * If the current protocol is not something we support in this regard
+ * (or if the stack has already computed the SW checksum), we do nothing.
+ *
+ * Returns 0 if all goes well (or HW csum doesn't apply), and a negative value
+ * otherwise.
+ *
+ * Note that this function may modify the fd->cmd field and the skb data buffer
+ * (the Parse Results area).
+ */
+static int dpaa_enable_tx_csum(struct dpaa_priv *priv,
+			       struct sk_buff *skb,
+			       struct qm_fd *fd,
+			       char *parse_results)
+{
+	struct fman_prs_result *parse_result;
+	u16 ethertype = ntohs(skb->protocol);
+	struct ipv6hdr *ipv6h = NULL;
+	struct iphdr *iph;
+	int retval = 0;
+	u8 l4_proto;
+
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		return 0;
+
+	/* Note: L3 csum seems to be already computed in sw, but we can't choose
+	 * L4 alone from the FM configuration anyway.
+	 */
+
+	/* Fill in some fields of the Parse Results array, so the FMan
+	 * can find them as if they came from the FMan Parser.
+	 */
+	parse_result = (struct fman_prs_result *)parse_results;
+
+	/* If we're dealing with VLAN, get the real Ethernet type */
+	if (ethertype == ETH_P_8021Q) {
+		/* We can't always assume the MAC header is set correctly
+		 * by the stack, so reset to beginning of skb->data
+		 */
+		skb_reset_mac_header(skb);
+		ethertype = ntohs(vlan_eth_hdr(skb)->h_vlan_encapsulated_proto);
+	}
+
+	/* Fill in the relevant L3 parse result fields
+	 * and read the L4 protocol type
+	 */
+	switch (ethertype) {
+	case ETH_P_IP:
+		parse_result->l3r = cpu_to_be16(FM_L3_PARSE_RESULT_IPV4);
+		iph = ip_hdr(skb);
+		WARN_ON(!iph);
+		l4_proto = iph->protocol;
+		break;
+	case ETH_P_IPV6:
+		parse_result->l3r = cpu_to_be16(FM_L3_PARSE_RESULT_IPV6);
+		ipv6h = ipv6_hdr(skb);
+		WARN_ON(!ipv6h);
+		l4_proto = ipv6h->nexthdr;
+		break;
+	default:
+		/* We shouldn't even be here */
+		if (net_ratelimit())
+			netif_alert(priv, tx_err, priv->net_dev,
+				    "Can't compute HW csum for L3 proto 0x%x\n",
+				    ntohs(skb->protocol));
+		retval = -EIO;
+		goto return_error;
+	}
+
+	/* Fill in the relevant L4 parse result fields */
+	switch (l4_proto) {
+	case IPPROTO_UDP:
+		parse_result->l4r = FM_L4_PARSE_RESULT_UDP;
+		break;
+	case IPPROTO_TCP:
+		parse_result->l4r = FM_L4_PARSE_RESULT_TCP;
+		break;
+	default:
+		if (net_ratelimit())
+			netif_alert(priv, tx_err, priv->net_dev,
+				    "Can't compute HW csum for L4 proto 0x%x\n",
+				    l4_proto);
+		retval = -EIO;
+		goto return_error;
+	}
+
+	/* At index 0 is IPOffset_1 as defined in the Parse Results */
+	parse_result->ip_off[0] = (u8)skb_network_offset(skb);
+	parse_result->l4_off = (u8)skb_transport_offset(skb);
+
+	/* Enable L3 (and L4, if TCP or UDP) HW checksum. */
+	fd->cmd |= FM_FD_CMD_RPD | FM_FD_CMD_DTC;
+
+	/* On P1023 and similar platforms fd->cmd interpretation could
+	 * be disabled by setting CONTEXT_A bit ICMD; currently this bit
+	 * is not set so we do not need to check; in the future, if/when
+	 * using context_a we need to check this bit
+	 */
+
+return_error:
+	return retval;
+}
+
+static int dpaa_bp_add_8_bufs(const struct dpaa_bp *dpaa_bp)
+{
+	struct device *dev = dpaa_bp->dev;
+	struct bm_buffer bmb[8];
+	dma_addr_t addr;
+	void *new_buf;
+	u8 i;
+
+	for (i = 0; i < 8; i++) {
+		new_buf = netdev_alloc_frag(dpaa_bp->raw_size);
+		if (unlikely(!new_buf)) {
+			dev_err(dev, "netdev_alloc_frag() failed, size %zu\n",
+				dpaa_bp->raw_size);
+			goto release_previous_buffs;
+		}
+		new_buf = PTR_ALIGN(new_buf, SMP_CACHE_BYTES);
+
+		addr = dma_map_single(dev, new_buf,
+				      dpaa_bp->size, DMA_FROM_DEVICE);
+		if (unlikely(dma_mapping_error(dev, addr))) {
+			dev_err(dpaa_bp->dev, "DMA map failed");
+			goto release_previous_buffs;
+		}
+
+		bmb[i].data = 0;
+		bm_buffer_set64(&bmb[i], addr);
+	}
+
+release_bufs:
+	return dpaa_bman_release(dpaa_bp, bmb, i);
+
+release_previous_buffs:
+	WARN_ONCE(1, "dpaa_eth: failed to add buffers on Rx\n");
+
+	bm_buffer_set64(&bmb[i], 0);
+	/* Avoid releasing a completely null buffer; bman_release() requires
+	 * at least one buffer.
+	 */
+	if (likely(i))
+		goto release_bufs;
+
+	return 0;
+}
+
+static int dpaa_bp_seed(struct dpaa_bp *dpaa_bp)
+{
+	int i;
+
+	/* Give each CPU an allotment of "config_count" buffers */
+	for_each_possible_cpu(i) {
+		int *count_ptr = per_cpu_ptr(dpaa_bp->percpu_count, i);
+		int j;
+
+		/* Although we access another CPU's counters here
+		 * we do it at boot time so it is safe
+		 */
+		for (j = 0; j < dpaa_bp->config_count; j += 8)
+			*count_ptr += dpaa_bp_add_8_bufs(dpaa_bp);
+	}
+	return 0;
+}
+
+/* Add buffers/(pages) for Rx processing whenever bpool count falls below
+ * REFILL_THRESHOLD.
+ */
+static int dpaa_eth_refill_bpool(struct dpaa_bp *dpaa_bp, int *countptr)
+{
+	int count = *countptr;
+	int new_bufs;
+
+	if (unlikely(count < FSL_DPAA_ETH_REFILL_THRESHOLD)) {
+		do {
+			new_bufs = dpaa_bp_add_8_bufs(dpaa_bp);
+			if (unlikely(!new_bufs)) {
+				/* Avoid looping forever if we've temporarily
+				 * run out of memory. We'll try again at the
+				 * next NAPI cycle.
+				 */
+				break;
+			}
+			count += new_bufs;
+		} while (count < FSL_DPAA_ETH_MAX_BUF_COUNT);
+
+		*countptr = count;
+		if (unlikely(count < FSL_DPAA_ETH_MAX_BUF_COUNT))
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int dpaa_eth_refill_bpools(struct dpaa_priv *priv)
+{
+	struct dpaa_bp *dpaa_bp;
+	int *countptr;
+	int res, i;
+
+	for (i = 0; i < DPAA_BPS_NUM; i++) {
+		dpaa_bp = priv->dpaa_bps[i];
+		if (!dpaa_bp)
+			return -EINVAL;
+		countptr = this_cpu_ptr(dpaa_bp->percpu_count);
+		res  = dpaa_eth_refill_bpool(dpaa_bp, countptr);
+		if (res)
+			return res;
+	}
+	return 0;
+}
+
+/* Cleanup function for outgoing frame descriptors that were built on Tx path,
+ * either contiguous frames or scatter/gather ones.
+ * Skb freeing is not handled here.
+ *
+ * This function may be called on error paths in the Tx function, so guard
+ * against cases when not all fd relevant fields were filled in.
+ *
+ * Return the skb backpointer, since for S/G frames the buffer containing it
+ * gets freed here.
+ */
+static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
+					  const struct qm_fd *fd)
+{
+	const enum dma_data_direction dma_dir = DMA_TO_DEVICE;
+	struct device *dev = priv->net_dev->dev.parent;
+	dma_addr_t addr = qm_fd_addr(fd);
+	const struct qm_sg_entry *sgt;
+	struct sk_buff **skbh, *skb;
+	int nr_frags, i;
+
+	skbh = (struct sk_buff **)phys_to_virt(addr);
+	skb = *skbh;
+
+	if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) {
+		nr_frags = skb_shinfo(skb)->nr_frags;
+		dma_unmap_single(dev, addr, qm_fd_get_offset(fd) +
+				 sizeof(struct qm_sg_entry) * (1 + nr_frags),
+				 dma_dir);
+
+		/* The sgt buffer has been allocated with netdev_alloc_frag(),
+		 * it's from lowmem.
+		 */
+		sgt = phys_to_virt(addr + qm_fd_get_offset(fd));
+
+		/* sgt[0] is from lowmem, was dma_map_single()-ed */
+		dma_unmap_single(dev, qm_sg_addr(&sgt[0]),
+				 qm_sg_entry_get_len(&sgt[0]), dma_dir);
+
+		/* remaining pages were mapped with skb_frag_dma_map() */
+		for (i = 1; i < nr_frags; i++) {
+			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
+
+			dma_unmap_page(dev, qm_sg_addr(&sgt[i]),
+				       qm_sg_entry_get_len(&sgt[i]), dma_dir);
+		}
+
+		/* Free the page frag that we allocated on Tx */
+		skb_free_frag(phys_to_virt(addr));
+	} else {
+		dma_unmap_single(dev, addr,
+				 skb_tail_pointer(skb) - (u8 *)skbh, dma_dir);
+	}
+
+	return skb;
+}
+
+/* Build a linear skb around the received buffer.
+ * We are guaranteed there is enough room at the end of the data buffer to
+ * accommodate the shared info area of the skb.
+ */
+static struct sk_buff *contig_fd_to_skb(const struct dpaa_priv *priv,
+					const struct qm_fd *fd)
+{
+	ssize_t fd_off = qm_fd_get_offset(fd);
+	dma_addr_t addr = qm_fd_addr(fd);
+	struct dpaa_bp *dpaa_bp;
+	struct sk_buff *skb;
+	void *vaddr;
+
+	vaddr = phys_to_virt(addr);
+	WARN_ON(!IS_ALIGNED((unsigned long)vaddr, SMP_CACHE_BYTES));
+
+	dpaa_bp = dpaa_bpid2pool(fd->bpid);
+	if (!dpaa_bp)
+		goto free_buffer;
+
+	skb = build_skb(vaddr, dpaa_bp->size +
+			SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
+	if (unlikely(!skb)) {
+		WARN_ONCE(1, "Build skb failure on Rx\n");
+		goto free_buffer;
+	}
+	WARN_ON(fd_off != priv->rx_headroom);
+	skb_reserve(skb, fd_off);
+	skb_put(skb, qm_fd_get_length(fd));
+
+	skb->ip_summed = CHECKSUM_NONE;
+
+	return skb;
+
+free_buffer:
+	skb_free_frag(vaddr);
+	return NULL;
+}
+
+/* Build an skb with the data of the first S/G entry in the linear portion and
+ * the rest of the frame as skb fragments.
+ *
+ * The page fragment holding the S/G Table is recycled here.
+ */
+static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
+				    const struct qm_fd *fd)
+{
+	ssize_t fd_off = qm_fd_get_offset(fd);
+	dma_addr_t addr = qm_fd_addr(fd);
+	const struct qm_sg_entry *sgt;
+	struct page *page, *head_page;
+	struct dpaa_bp *dpaa_bp;
+	void *vaddr, *sg_vaddr;
+	int frag_off, frag_len;
+	struct sk_buff *skb;
+	dma_addr_t sg_addr;
+	int page_offset;
+	unsigned int sz;
+	int *count_ptr;
+	int i;
+
+	vaddr = phys_to_virt(addr);
+	WARN_ON(!IS_ALIGNED((unsigned long)vaddr, SMP_CACHE_BYTES));
+
+	/* Iterate through the SGT entries and add data buffers to the skb */
+	sgt = vaddr + fd_off;
+	for (i = 0; i < DPAA_SGT_MAX_ENTRIES; i++) {
+		/* Extension bit is not supported */
+		WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
+
+		sg_addr = qm_sg_addr(&sgt[i]);
+		sg_vaddr = phys_to_virt(sg_addr);
+		WARN_ON(!IS_ALIGNED((unsigned long)sg_vaddr,
+				    SMP_CACHE_BYTES));
+
+		/* We may use multiple Rx pools */
+		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
+		if (!dpaa_bp)
+			goto free_buffers;
+
+		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
+		dma_unmap_single(dpaa_bp->dev, sg_addr, dpaa_bp->size,
+				 DMA_FROM_DEVICE);
+		if (i == 0) {
+			sz = dpaa_bp->size +
+				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+			skb = build_skb(sg_vaddr, sz);
+			if (WARN_ON(unlikely(!skb)))
+				goto free_buffers;
+
+			skb->ip_summed = CHECKSUM_NONE;
+
+			/* Make sure forwarded skbs will have enough space
+			 * on Tx, if extra headers are added.
+			 */
+			WARN_ON(fd_off != priv->rx_headroom);
+			skb_reserve(skb, fd_off);
+			skb_put(skb, qm_sg_entry_get_len(&sgt[i]));
+		} else {
+			/* Not the first S/G entry; all data from buffer will
+			 * be added in an skb fragment; fragment index is offset
+			 * by one since first S/G entry was incorporated in the
+			 * linear part of the skb.
+			 *
+			 * Caution: 'page' may be a tail page.
+			 */
+			page = virt_to_page(sg_vaddr);
+			head_page = virt_to_head_page(sg_vaddr);
+
+			/* Compute offset in (possibly tail) page */
+			page_offset = ((unsigned long)sg_vaddr &
+					(PAGE_SIZE - 1)) +
+				(page_address(page) - page_address(head_page));
+			/* page_offset only refers to the beginning of sgt[i];
+			 * but the buffer itself may have an internal offset.
+			 */
+			frag_off = qm_sg_entry_get_off(&sgt[i]) + page_offset;
+			frag_len = qm_sg_entry_get_len(&sgt[i]);
+			/* skb_add_rx_frag() does no checking on the page; if
+			 * we pass it a tail page, we'll end up with
+			 * bad page accounting and eventually with segfaults.
+			 */
+			skb_add_rx_frag(skb, i - 1, head_page, frag_off,
+					frag_len, dpaa_bp->size);
+		}
+		/* Update the pool count for the current {cpu x bpool} */
+		(*count_ptr)--;
+
+		if (qm_sg_entry_is_final(&sgt[i]))
+			break;
+	}
+	WARN_ONCE(i == DPAA_SGT_MAX_ENTRIES, "No final bit on SGT\n");
+
+	/* free the SG table buffer */
+	skb_free_frag(vaddr);
+
+	return skb;
+
+free_buffers:
+	/* compensate sw bpool counter changes */
+	for (i--; i > 0; i--) {
+		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
+		if (dpaa_bp) {
+			count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
+			(*count_ptr)++;
+		}
+	}
+	/* free all the SG entries */
+	for (i = 0; i < DPAA_SGT_MAX_ENTRIES; i++) {
+		sg_addr = qm_sg_addr(&sgt[i]);
+		sg_vaddr = phys_to_virt(sg_addr);
+		skb_free_frag(sg_vaddr);
+		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
+		if (dpaa_bp) {
+			count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
+			(*count_ptr)--;
+		}
+
+		if (qm_sg_entry_is_final(&sgt[i]))
+			break;
+	}
+	/* free the SGT fragment */
+	skb_free_frag(vaddr);
+
+	return NULL;
+}
+
+static int skb_to_contig_fd(struct dpaa_priv *priv,
+			    struct sk_buff *skb, struct qm_fd *fd,
+			    int *offset)
+{
+	struct net_device *net_dev = priv->net_dev;
+	struct device *dev = net_dev->dev.parent;
+	enum dma_data_direction dma_dir;
+	unsigned char *buffer_start;
+	struct sk_buff **skbh;
+	dma_addr_t addr;
+	int err;
+
+	/* We are guaranteed to have at least tx_headroom bytes
+	 * available, so just use that for offset.
+	 */
+	fd->bpid = FSL_DPAA_BPID_INV;
+	buffer_start = skb->data - priv->tx_headroom;
+	dma_dir = DMA_TO_DEVICE;
+
+	skbh = (struct sk_buff **)buffer_start;
+	*skbh = skb;
+
+	/* Enable L3/L4 hardware checksum computation.
+	 *
+	 * We must do this before dma_map_single(DMA_TO_DEVICE), because we may
+	 * need to write into the skb.
+	 */
+	err = dpaa_enable_tx_csum(priv, skb, fd,
+				  ((char *)skbh) + DPAA_TX_PRIV_DATA_SIZE);
+	if (unlikely(err < 0)) {
+		if (net_ratelimit())
+			netif_err(priv, tx_err, net_dev, "HW csum error: %d\n",
+				  err);
+		return err;
+	}
+
+	/* Fill in the rest of the FD fields */
+	qm_fd_set_contig(fd, priv->tx_headroom, skb->len);
+	fd->cmd |= FM_FD_CMD_FCO;
+
+	/* Map the entire buffer size that may be seen by FMan, but no more */
+	addr = dma_map_single(dev, skbh,
+			      skb_tail_pointer(skb) - buffer_start, dma_dir);
+	if (unlikely(dma_mapping_error(dev, addr))) {
+		if (net_ratelimit())
+			netif_err(priv, tx_err, net_dev, "dma_map_single() failed\n");
+		return -EINVAL;
+	}
+	qm_fd_addr_set64(fd, addr);
+
+	return 0;
+}
+
+static int skb_to_sg_fd(struct dpaa_priv *priv,
+			struct sk_buff *skb, struct qm_fd *fd)
+{
+	const enum dma_data_direction dma_dir = DMA_TO_DEVICE;
+	const int nr_frags = skb_shinfo(skb)->nr_frags;
+	struct net_device *net_dev = priv->net_dev;
+	struct device *dev = net_dev->dev.parent;
+	struct qm_sg_entry *sgt;
+	struct sk_buff **skbh;
+	int i, j, err, sz;
+	void *buffer_start;
+	skb_frag_t *frag;
+	dma_addr_t addr;
+	size_t frag_len;
+	void *sgt_buf;
+
+	/* get a page frag to store the SGTable */
+	sz = SKB_DATA_ALIGN(priv->tx_headroom +
+		sizeof(struct qm_sg_entry) * (1 + nr_frags));
+	sgt_buf = netdev_alloc_frag(sz);
+	if (unlikely(!sgt_buf)) {
+		netdev_err(net_dev, "netdev_alloc_frag() failed for size %d\n",
+			   sz);
+		return -ENOMEM;
+	}
+
+	/* Enable L3/L4 hardware checksum computation.
+	 *
+	 * We must do this before dma_map_single(DMA_TO_DEVICE), because we may
+	 * need to write into the skb.
+	 */
+	err = dpaa_enable_tx_csum(priv, skb, fd,
+				  sgt_buf + DPAA_TX_PRIV_DATA_SIZE);
+	if (unlikely(err < 0)) {
+		if (net_ratelimit())
+			netif_err(priv, tx_err, net_dev, "HW csum error: %d\n",
+				  err);
+		goto csum_failed;
+	}
+
+	sgt = (struct qm_sg_entry *)(sgt_buf + priv->tx_headroom);
+	qm_sg_entry_set_len(&sgt[0], skb_headlen(skb));
+	sgt[0].bpid = FSL_DPAA_BPID_INV;
+	sgt[0].offset = 0;
+	addr = dma_map_single(dev, skb->data,
+			      skb_headlen(skb), dma_dir);
+	if (unlikely(dma_mapping_error(dev, addr))) {
+		dev_err(dev, "DMA mapping failed");
+		err = -EINVAL;
+		goto sg0_map_failed;
+	}
+	qm_sg_entry_set64(&sgt[0], addr);
+
+	/* populate the rest of the SGT entries */
+	frag = &skb_shinfo(skb)->frags[0];
+	for (i = 1; i <= nr_frags; i++, frag++) {
+		WARN_ON(!skb_frag_page(frag));
+		/* read the current fragment's length before mapping it */
+		frag_len = frag->size;
+		addr = skb_frag_dma_map(dev, frag, 0,
+					frag_len, dma_dir);
+		if (unlikely(dma_mapping_error(dev, addr))) {
+			dev_err(dev, "DMA mapping failed");
+			err = -EINVAL;
+			goto sg_map_failed;
+		}
+
+		qm_sg_entry_set_len(&sgt[i], frag_len);
+		sgt[i].bpid = FSL_DPAA_BPID_INV;
+		sgt[i].offset = 0;
+
+		/* keep the offset in the address */
+		qm_sg_entry_set64(&sgt[i], addr);
+	}
+	qm_sg_entry_set_f(&sgt[i - 1], frag_len);
+
+	qm_fd_set_sg(fd, priv->tx_headroom, skb->len);
+
+	/* DMA map the SGT page */
+	buffer_start = (void *)sgt - priv->tx_headroom;
+	skbh = (struct sk_buff **)buffer_start;
+	*skbh = skb;
+
+	addr = dma_map_single(dev, buffer_start, priv->tx_headroom +
+			      sizeof(struct qm_sg_entry) * (1 + nr_frags),
+			      dma_dir);
+	if (unlikely(dma_mapping_error(dev, addr))) {
+		dev_err(dev, "DMA mapping failed");
+		err = -EINVAL;
+		goto sgt_map_failed;
+	}
+
+	fd->bpid = FSL_DPAA_BPID_INV;
+	fd->cmd |= FM_FD_CMD_FCO;
+	qm_fd_addr_set64(fd, addr);
+
+	return 0;
+
+sgt_map_failed:
+sg_map_failed:
+	for (j = 0; j < i; j++)
+		dma_unmap_page(dev, qm_sg_addr(&sgt[j]),
+			       qm_sg_entry_get_len(&sgt[j]), dma_dir);
+sg0_map_failed:
+csum_failed:
+	skb_free_frag(sgt_buf);
+
+	return err;
+}
+
+static inline int dpaa_xmit(struct dpaa_priv *priv,
+			    struct rtnl_link_stats64 *percpu_stats,
+			    int queue,
+			    struct qm_fd *fd)
+{
+	struct qman_fq *egress_fq;
+	int err, i;
+
+	egress_fq = priv->egress_fqs[queue];
+	if (fd->bpid == FSL_DPAA_BPID_INV)
+		fd->cmd |= qman_fq_fqid(priv->conf_fqs[queue]);
+
+	for (i = 0; i < DPAA_ENQUEUE_RETRIES; i++) {
+		err = qman_enqueue(egress_fq, fd);
+		if (err != -EBUSY)
+			break;
+	}
+
+	if (unlikely(err < 0)) {
+		percpu_stats->tx_errors++;
+		percpu_stats->tx_fifo_errors++;
+		return err;
+	}
+
+	percpu_stats->tx_packets++;
+	percpu_stats->tx_bytes += qm_fd_get_length(fd);
+
+	return 0;
+}
+
+static int dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
+{
+	const int queue_mapping = skb_get_queue_mapping(skb);
+	bool nonlinear = skb_is_nonlinear(skb);
+	struct rtnl_link_stats64 *percpu_stats;
+	struct dpaa_percpu_priv *percpu_priv;
+	struct dpaa_priv *priv;
+	struct qm_fd fd;
+	int offset = 0;
+	int err = 0;
+
+	priv = netdev_priv(net_dev);
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+	percpu_stats = &percpu_priv->stats;
+
+	qm_fd_clear_fd(&fd);
+
+	if (!nonlinear) {
+		/* We're going to store the skb backpointer at the beginning
+		 * of the data buffer, so we need a privately owned skb.
+		 *
+		 * We've made sure the skb is not shared in dev->priv_flags;
+		 * we still need to verify that the skb head is not cloned.
+		 */
+		if (skb_cow_head(skb, priv->tx_headroom))
+			goto enomem;
+
+		WARN_ON(skb_is_nonlinear(skb));
+	}
+
+	/* MAX_SKB_FRAGS is equal to or larger than our DPAA_SGT_MAX_ENTRIES;
+	 * make sure we don't feed FMan with more fragments than it supports.
+	 */
+	if (nonlinear &&
+	    likely(skb_shinfo(skb)->nr_frags < DPAA_SGT_MAX_ENTRIES)) {
+		/* Just create a S/G fd based on the skb */
+		err = skb_to_sg_fd(priv, skb, &fd);
+	} else {
+		/* If the egress skb contains more fragments than we support
+		 * we have no choice but to linearize it ourselves.
+		 */
+		if (unlikely(nonlinear) && __skb_linearize(skb))
+			goto enomem;
+
+		/* Finally, create a contig FD from this skb */
+		err = skb_to_contig_fd(priv, skb, &fd, &offset);
+	}
+	if (unlikely(err < 0))
+		goto skb_to_fd_failed;
+
+	if (likely(dpaa_xmit(priv, percpu_stats, queue_mapping, &fd) == 0))
+		return NETDEV_TX_OK;
+
+	dpaa_cleanup_tx_fd(priv, &fd);
+skb_to_fd_failed:
+enomem:
+	percpu_stats->tx_errors++;
+	dev_kfree_skb(skb);
+	return NETDEV_TX_OK;
+}
+
+static void dpaa_rx_error(struct net_device *net_dev,
+			  const struct dpaa_priv *priv,
+			  struct dpaa_percpu_priv *percpu_priv,
+			  const struct qm_fd *fd,
+			  u32 fqid)
+{
+	if (net_ratelimit())
+		netif_err(priv, hw, net_dev, "Err FD status = 0x%08x\n",
+			  fd->status & FM_FD_STAT_RX_ERRORS);
+
+	percpu_priv->stats.rx_errors++;
+
+	dpaa_fd_release(net_dev, fd);
+}
+
+static void dpaa_tx_error(struct net_device *net_dev,
+			  const struct dpaa_priv *priv,
+			  struct dpaa_percpu_priv *percpu_priv,
+			  const struct qm_fd *fd,
+			  u32 fqid)
+{
+	struct sk_buff *skb;
+
+	if (net_ratelimit())
+		netif_warn(priv, hw, net_dev, "FD status = 0x%08x\n",
+			   fd->status & FM_FD_STAT_TX_ERRORS);
+
+	percpu_priv->stats.tx_errors++;
+
+	skb = dpaa_cleanup_tx_fd(priv, fd);
+	dev_kfree_skb(skb);
+}
+
+static int dpaa_eth_poll(struct napi_struct *napi, int budget)
+{
+	struct dpaa_napi_portal *np =
+			container_of(napi, struct dpaa_napi_portal, napi);
+
+	int cleaned = qman_p_poll_dqrr(np->p, budget);
+
+	if (cleaned < budget) {
+		napi_complete(napi);
+		qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
+
+	} else if (np->down) {
+		qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
+	}
+
+	return cleaned;
+}
+
+static void dpaa_tx_conf(struct net_device *net_dev,
+			 const struct dpaa_priv *priv,
+			 struct dpaa_percpu_priv *percpu_priv,
+			 const struct qm_fd *fd,
+			 u32 fqid)
+{
+	struct sk_buff	*skb;
+
+	if (unlikely((fd->status & FM_FD_STAT_TX_ERRORS) != 0)) {
+		if (net_ratelimit())
+			netif_warn(priv, hw, net_dev, "FD status = 0x%08x\n",
+				   fd->status & FM_FD_STAT_TX_ERRORS);
+
+		percpu_priv->stats.tx_errors++;
+	}
+
+	skb = dpaa_cleanup_tx_fd(priv, fd);
+
+	consume_skb(skb);
+}
+
+static inline int dpaa_eth_napi_schedule(struct dpaa_percpu_priv *percpu_priv,
+					 struct qman_portal *portal)
+{
+	if (unlikely(in_irq() || !in_serving_softirq())) {
+		/* Disable QMan IRQ and invoke NAPI */
+		qman_p_irqsource_remove(portal, QM_PIRQ_DQRI);
+
+		percpu_priv->np.p = portal;
+		napi_schedule(&percpu_priv->np.napi);
+		return 1;
+	}
+	return 0;
+}
+
+static enum qman_cb_dqrr_result rx_error_dqrr(struct qman_portal *portal,
+					      struct qman_fq *fq,
+					      const struct qm_dqrr_entry *dq)
+{
+	struct dpaa_fq *dpaa_fq = container_of(fq, struct dpaa_fq, fq_base);
+	struct dpaa_percpu_priv *percpu_priv;
+	struct net_device *net_dev;
+	struct dpaa_bp *dpaa_bp;
+	struct dpaa_priv *priv;
+
+	net_dev = dpaa_fq->net_dev;
+	priv = netdev_priv(net_dev);
+	dpaa_bp = dpaa_bpid2pool(dq->fd.bpid);
+	if (!dpaa_bp)
+		return qman_cb_dqrr_consume;
+
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+
+	if (dpaa_eth_napi_schedule(percpu_priv, portal))
+		return qman_cb_dqrr_stop;
+
+	if (dpaa_eth_refill_bpools(priv))
+		/* Unable to refill the buffer pool due to insufficient
+		 * system memory. Just release the frame back into the pool,
+		 * otherwise we'll soon end up with an empty buffer pool.
+		 */
+		dpaa_fd_release(net_dev, &dq->fd);
+	else
+		dpaa_rx_error(net_dev, priv, percpu_priv, &dq->fd, fq->fqid);
+
+	return qman_cb_dqrr_consume;
+}
+
+static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
+						struct qman_fq *fq,
+						const struct qm_dqrr_entry *dq)
+{
+	struct rtnl_link_stats64 *percpu_stats;
+	struct dpaa_percpu_priv *percpu_priv;
+	const struct qm_fd *fd = &dq->fd;
+	dma_addr_t addr = qm_fd_addr(fd);
+	enum qm_fd_format fd_format;
+	struct net_device *net_dev;
+	u32 fd_status = fd->status;
+	struct dpaa_bp *dpaa_bp;
+	struct dpaa_priv *priv;
+	unsigned int skb_len;
+	struct sk_buff *skb;
+	int *count_ptr;
+
+	net_dev = ((struct dpaa_fq *)fq)->net_dev;
+	priv = netdev_priv(net_dev);
+	dpaa_bp = dpaa_bpid2pool(dq->fd.bpid);
+	if (!dpaa_bp)
+		return qman_cb_dqrr_consume;
+
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+	percpu_stats = &percpu_priv->stats;
+
+	if (unlikely(dpaa_eth_napi_schedule(percpu_priv, portal)))
+		return qman_cb_dqrr_stop;
+
+	/* Make sure we didn't run out of buffers */
+	if (unlikely(dpaa_eth_refill_bpools(priv))) {
+		/* Unable to refill the buffer pool due to insufficient
+		 * system memory. Just release the frame back into the pool,
+		 * otherwise we'll soon end up with an empty buffer pool.
+		 */
+		dpaa_fd_release(net_dev, &dq->fd);
+		return qman_cb_dqrr_consume;
+	}
+
+	if (unlikely((fd_status & FM_FD_STAT_RX_ERRORS) != 0)) {
+		if (net_ratelimit())
+			netif_warn(priv, hw, net_dev, "FD status = 0x%08x\n",
+				   fd_status & FM_FD_STAT_RX_ERRORS);
+
+		percpu_stats->rx_errors++;
+		dpaa_fd_release(net_dev, fd);
+		return qman_cb_dqrr_consume;
+	}
+
+	dpaa_bp = dpaa_bpid2pool(fd->bpid);
+	if (!dpaa_bp)
+		return qman_cb_dqrr_consume;
+
+	dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size, DMA_FROM_DEVICE);
+
+	/* prefetch the first 64 bytes of the frame or the SGT start */
+	prefetch(phys_to_virt(addr) + qm_fd_get_offset(fd));
+
+	fd_format = qm_fd_get_format(fd);
+	/* The only FD types that we may receive are contig and S/G */
+	WARN_ON((fd_format != qm_fd_contig) && (fd_format != qm_fd_sg));
+
+	/* Account for either the contig buffer or the SGT buffer (depending on
+	 * which case we were in) having been removed from the pool.
+	 */
+	count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
+	(*count_ptr)--;
+
+	if (likely(fd_format == qm_fd_contig))
+		skb = contig_fd_to_skb(priv, fd);
+	else
+		skb = sg_fd_to_skb(priv, fd);
+	if (!skb)
+		return qman_cb_dqrr_consume;
+
+	skb->protocol = eth_type_trans(skb, net_dev);
+
+	skb_len = skb->len;
+
+	if (unlikely(netif_receive_skb(skb) == NET_RX_DROP))
+		return qman_cb_dqrr_consume;
+
+	percpu_stats->rx_packets++;
+	percpu_stats->rx_bytes += skb_len;
+
+	return qman_cb_dqrr_consume;
+}
+
+static enum qman_cb_dqrr_result conf_error_dqrr(struct qman_portal *portal,
+						struct qman_fq *fq,
+						const struct qm_dqrr_entry *dq)
+{
+	struct dpaa_percpu_priv *percpu_priv;
+	struct net_device *net_dev;
+	struct dpaa_priv *priv;
+
+	net_dev = ((struct dpaa_fq *)fq)->net_dev;
+	priv = netdev_priv(net_dev);
+
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+
+	if (dpaa_eth_napi_schedule(percpu_priv, portal))
+		return qman_cb_dqrr_stop;
+
+	dpaa_tx_error(net_dev, priv, percpu_priv, &dq->fd, fq->fqid);
+
+	return qman_cb_dqrr_consume;
+}
+
+static enum qman_cb_dqrr_result conf_dflt_dqrr(struct qman_portal *portal,
+					       struct qman_fq *fq,
+					       const struct qm_dqrr_entry *dq)
+{
+	struct dpaa_percpu_priv *percpu_priv;
+	struct net_device *net_dev;
+	struct dpaa_priv *priv;
+
+	net_dev = ((struct dpaa_fq *)fq)->net_dev;
+	priv = netdev_priv(net_dev);
+
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+
+	if (dpaa_eth_napi_schedule(percpu_priv, portal))
+		return qman_cb_dqrr_stop;
+
+	dpaa_tx_conf(net_dev, priv, percpu_priv, &dq->fd, fq->fqid);
+
+	return qman_cb_dqrr_consume;
+}
+
+static void egress_ern(struct qman_portal *portal,
+		       struct qman_fq *fq,
+		       const union qm_mr_entry *msg)
+{
+	const struct qm_fd *fd = &msg->ern.fd;
+	struct dpaa_percpu_priv *percpu_priv;
+	const struct dpaa_priv *priv;
+	struct net_device *net_dev;
+	struct sk_buff *skb;
+
+	net_dev = ((struct dpaa_fq *)fq)->net_dev;
+	priv = netdev_priv(net_dev);
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+
+	percpu_priv->stats.tx_dropped++;
+	percpu_priv->stats.tx_fifo_errors++;
+
+	skb = dpaa_cleanup_tx_fd(priv, fd);
+	dev_kfree_skb_any(skb);
+}
+
+static const struct dpaa_fq_cbs dpaa_fq_cbs = {
+	.rx_defq = { .cb = { .dqrr = rx_default_dqrr } },
+	.tx_defq = { .cb = { .dqrr = conf_dflt_dqrr } },
+	.rx_errq = { .cb = { .dqrr = rx_error_dqrr } },
+	.tx_errq = { .cb = { .dqrr = conf_error_dqrr } },
+	.egress_ern = { .cb = { .ern = egress_ern } }
+};
+
+static void dpaa_eth_napi_enable(struct dpaa_priv *priv)
+{
+	struct dpaa_percpu_priv *percpu_priv;
+	int i;
+
+	for_each_possible_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+
+		percpu_priv->np.down = 0;
+		napi_enable(&percpu_priv->np.napi);
+	}
+}
+
+static void dpaa_eth_napi_disable(struct dpaa_priv *priv)
+{
+	struct dpaa_percpu_priv *percpu_priv;
+	int i;
+
+	for_each_possible_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+
+		percpu_priv->np.down = 1;
+		napi_disable(&percpu_priv->np.napi);
+	}
+}
+
+static int dpaa_open(struct net_device *net_dev)
+{
+	struct mac_device *mac_dev;
+	struct dpaa_priv *priv;
+	int err, i;
+
+	priv = netdev_priv(net_dev);
+	mac_dev = priv->mac_dev;
+	dpaa_eth_napi_enable(priv);
+
+	net_dev->phydev = mac_dev->init_phy(net_dev, priv->mac_dev);
+	if (!net_dev->phydev) {
+		netif_err(priv, ifup, net_dev, "init_phy() failed\n");
+		return -ENODEV;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++) {
+		err = fman_port_enable(mac_dev->port[i]);
+		if (err)
+			goto mac_start_failed;
+	}
+
+	err = priv->mac_dev->start(mac_dev);
+	if (err < 0) {
+		netif_err(priv, ifup, net_dev, "mac_dev->start() = %d\n", err);
+		goto mac_start_failed;
+	}
+
+	netif_tx_start_all_queues(net_dev);
+
+	return 0;
+
+mac_start_failed:
+	for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++)
+		fman_port_disable(mac_dev->port[i]);
+
+	dpaa_eth_napi_disable(priv);
+
+	return err;
+}
+
+static int dpaa_eth_stop(struct net_device *net_dev)
+{
+	struct dpaa_priv *priv;
+	int err;
+
+	err = dpaa_stop(net_dev);
+
+	priv = netdev_priv(net_dev);
+	dpaa_eth_napi_disable(priv);
+
+	return err;
+}
+
+static const struct net_device_ops dpaa_ops = {
+	.ndo_open = dpaa_open,
+	.ndo_start_xmit = dpaa_start_xmit,
+	.ndo_stop = dpaa_eth_stop,
+	.ndo_tx_timeout = dpaa_tx_timeout,
+	.ndo_get_stats64 = dpaa_get_stats64,
+	.ndo_set_mac_address = dpaa_set_mac_address,
+	.ndo_validate_addr = eth_validate_addr,
+	.ndo_set_rx_mode = dpaa_set_rx_mode,
+};
+
+static int dpaa_napi_add(struct net_device *net_dev)
+{
+	struct dpaa_priv *priv = netdev_priv(net_dev);
+	struct dpaa_percpu_priv *percpu_priv;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, cpu);
+
+		netif_napi_add(net_dev, &percpu_priv->np.napi,
+			       dpaa_eth_poll, NAPI_POLL_WEIGHT);
+	}
+
+	return 0;
+}
+
+static void dpaa_napi_del(struct net_device *net_dev)
+{
+	struct dpaa_priv *priv = netdev_priv(net_dev);
+	struct dpaa_percpu_priv *percpu_priv;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, cpu);
+
+		netif_napi_del(&percpu_priv->np.napi);
+	}
+}
+
+static inline void dpaa_bp_free_pf(const struct dpaa_bp *bp,
+				   struct bm_buffer *bmb)
+{
+	dma_addr_t addr = bm_buf_addr(bmb);
+
+	dma_unmap_single(bp->dev, addr, bp->size, DMA_FROM_DEVICE);
+
+	skb_free_frag(phys_to_virt(addr));
+}
+
+/* Alloc the dpaa_bp struct and configure default values */
+static struct dpaa_bp *dpaa_bp_alloc(struct device *dev)
+{
+	struct dpaa_bp *dpaa_bp;
+
+	dpaa_bp = devm_kzalloc(dev, sizeof(*dpaa_bp), GFP_KERNEL);
+	if (!dpaa_bp)
+		return ERR_PTR(-ENOMEM);
+
+	dpaa_bp->bpid = FSL_DPAA_BPID_INV;
+	dpaa_bp->percpu_count = devm_alloc_percpu(dev, *dpaa_bp->percpu_count);
+	dpaa_bp->config_count = FSL_DPAA_ETH_MAX_BUF_COUNT;
+
+	dpaa_bp->seed_cb = dpaa_bp_seed;
+	dpaa_bp->free_buf_cb = dpaa_bp_free_pf;
+
+	return dpaa_bp;
+}
+
+/* Place all ingress FQs (Rx Default, Rx Error) in a dedicated CGR.
+ * We won't be sending congestion notifications to FMan; for now, we just use
+ * this CGR to generate enqueue rejections to FMan in order to drop the frames
+ * before they reach our ingress queues and eat up memory.
+ */
+static int dpaa_ingress_cgr_init(struct dpaa_priv *priv)
+{
+	struct qm_mcc_initcgr initcgr;
+	u32 cs_th;
+	int err;
+
+	err = qman_alloc_cgrid(&priv->ingress_cgr.cgrid);
+	if (err < 0) {
+		if (netif_msg_drv(priv))
+			pr_err("Error %d allocating CGR ID\n", err);
+		goto out_error;
+	}
+
+	/* Enable CS TD, but disable Congestion State Change Notifications. */
+	initcgr.we_mask = QM_CGR_WE_CS_THRES;
+	initcgr.cgr.cscn_en = QM_CGR_EN;
+	cs_th = DPAA_INGRESS_CS_THRESHOLD;
+	qm_cgr_cs_thres_set64(&initcgr.cgr.cs_thres, cs_th, 1);
+
+	initcgr.we_mask |= QM_CGR_WE_CSTD_EN;
+	initcgr.cgr.cstd_en = QM_CGR_EN;
+
+	/* This CGR will be associated with the SWP affined to the current CPU.
+	 * However, we'll place all our ingress FQs in it.
+	 */
+	err = qman_create_cgr(&priv->ingress_cgr, QMAN_CGR_FLAG_USE_INIT,
+			      &initcgr);
+	if (err < 0) {
+		if (netif_msg_drv(priv))
+			pr_err("Error %d creating ingress CGR with ID %d\n",
+			       err, priv->ingress_cgr.cgrid);
+		qman_release_cgrid(priv->ingress_cgr.cgrid);
+		goto out_error;
+	}
+	if (netif_msg_drv(priv))
+		pr_debug("Created ingress CGR %d for netdev with hwaddr %pM\n",
+			 priv->ingress_cgr.cgrid, priv->mac_dev->addr);
+
+	priv->use_ingress_cgr = true;
+
+out_error:
+	return err;
+}
+
+static const struct of_device_id dpaa_match[];
+
+static inline u16 dpaa_get_headroom(struct dpaa_buffer_layout *bl)
+{
+	u16 headroom;
+
+	/* The frame headroom must accommodate:
+	 * - the driver private data area
+	 * - parse results, hash results, timestamp if selected
+	 * If either hash results or timestamp are selected, both will be
+	 * copied to/from the frame headroom, as the TS is located between PR
+	 * and HR in the IC and the IC copy size has a granularity of 16 bytes
+	 * (see the FMBM_RICP and FMBM_TICP register descriptions in DPAARM)
+	 *
+	 * Also make sure the headroom is a multiple of data_align bytes
+	 */
+	headroom = (u16)(bl->priv_data_size + DPAA_PARSE_RESULTS_SIZE +
+		DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE);
+
+	return DPAA_FD_DATA_ALIGNMENT ? ALIGN(headroom,
+					      DPAA_FD_DATA_ALIGNMENT) :
+					headroom;
+}
+
+static int dpaa_eth_probe(struct platform_device *pdev)
+{
+	struct dpaa_bp *dpaa_bps[DPAA_BPS_NUM] = {NULL};
+	struct dpaa_percpu_priv *percpu_priv;
+	struct net_device *net_dev = NULL;
+	struct dpaa_fq *dpaa_fq, *tmp;
+	struct dpaa_priv *priv = NULL;
+	struct fm_port_fqs port_fqs;
+	struct mac_device *mac_dev;
+	int err = 0, i, channel;
+	struct device *dev;
+
+	dev = &pdev->dev;
+
+	/* Allocate this early, so we can store relevant information in
+	 * the private area
+	 */
+	net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA_ETH_TXQ_NUM);
+	if (!net_dev) {
+		dev_err(dev, "alloc_etherdev_mq() failed\n");
+		goto alloc_etherdev_mq_failed;
+	}
+
+	/* Do this here, so we can be verbose early */
+	SET_NETDEV_DEV(net_dev, dev);
+	dev_set_drvdata(dev, net_dev);
+
+	priv = netdev_priv(net_dev);
+	priv->net_dev = net_dev;
+
+	priv->msg_enable = netif_msg_init(debug, DPAA_MSG_DEFAULT);
+
+	mac_dev = dpaa_mac_dev_get(pdev);
+	if (IS_ERR(mac_dev)) {
+		dev_err(dev, "dpaa_mac_dev_get() failed\n");
+		err = PTR_ERR(mac_dev);
+		goto mac_probe_failed;
+	}
+
+	/* If fsl_fm_max_frm is set to a higher value than the standard 1500,
+	 * we choose conservatively and let the user explicitly set a higher
+	 * MTU via ifconfig. Otherwise, the user may end up with different MTUs
+	 * in the same LAN.
+	 * If on the other hand fsl_fm_max_frm has been chosen below 1500,
+	 * start with the maximum allowed.
+	 */
+	net_dev->mtu = min(dpaa_get_max_mtu(), ETH_DATA_LEN);
+
+	netdev_dbg(net_dev, "Setting initial MTU on net device: %d\n",
+		   net_dev->mtu);
+
+	priv->buf_layout[RX].priv_data_size = DPAA_RX_PRIV_DATA_SIZE; /* Rx */
+	priv->buf_layout[TX].priv_data_size = DPAA_TX_PRIV_DATA_SIZE; /* Tx */
+
+	/* device used for DMA mapping */
+	arch_setup_dma_ops(dev, 0, 0, NULL, false);
+	err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(40));
+	if (err) {
+		dev_err(dev, "dma_coerce_mask_and_coherent() failed\n");
+		goto dev_mask_failed;
+	}
+
+	/* bp init */
+	for (i = 0; i < DPAA_BPS_NUM; i++) {
+		int err;
+
+		dpaa_bps[i] = dpaa_bp_alloc(dev);
+		if (IS_ERR(dpaa_bps[i]))
+			return PTR_ERR(dpaa_bps[i]);
+		/* the raw size of the buffers used for reception */
+		dpaa_bps[i]->raw_size = bpool_buffer_raw_size(i, DPAA_BPS_NUM);
+		/* avoid runtime computations by keeping the usable size here */
+		dpaa_bps[i]->size = dpaa_bp_size(dpaa_bps[i]->raw_size);
+		dpaa_bps[i]->dev = dev;
+
+		err = dpaa_bp_alloc_pool(dpaa_bps[i]);
+		if (err < 0) {
+			dpaa_bps_free(priv);
+			priv->dpaa_bps[i] = NULL;
+			goto bp_create_failed;
+		}
+		priv->dpaa_bps[i] = dpaa_bps[i];
+	}
+
+	INIT_LIST_HEAD(&priv->dpaa_fq_list);
+
+	memset(&port_fqs, 0, sizeof(port_fqs));
+
+	err = dpaa_alloc_all_fqs(dev, &priv->dpaa_fq_list, &port_fqs);
+	if (err < 0) {
+		dev_err(dev, "dpaa_alloc_all_fqs() failed\n");
+		goto fq_probe_failed;
+	}
+
+	priv->mac_dev = mac_dev;
+
+	channel = dpaa_get_channel();
+	if (channel < 0) {
+		dev_err(dev, "dpaa_get_channel() failed\n");
+		err = channel;
+		goto get_channel_failed;
+	}
+
+	priv->channel = (u16)channel;
+
+	/* Start a thread that will walk the CPUs with affine portals
+	 * and add this pool channel to each's dequeue mask.
+	 */
+	dpaa_eth_add_channel(priv->channel);
+
+	dpaa_fq_setup(priv, &dpaa_fq_cbs, priv->mac_dev->port[TX]);
+
+	/* Create a congestion group for this netdev, with
+	 * dynamically-allocated CGR ID.
+	 * Must be executed after probing the MAC, but before
+	 * assigning the egress FQs to the CGRs.
+	 */
+	err = dpaa_eth_cgr_init(priv);
+	if (err < 0) {
+		dev_err(dev, "Error initializing CGR\n");
+		goto tx_cgr_init_failed;
+	}
+
+	err = dpaa_ingress_cgr_init(priv);
+	if (err < 0) {
+		dev_err(dev, "Error initializing ingress CGR\n");
+		goto rx_cgr_init_failed;
+	}
+
+	/* Add the FQs to the interface, and make them active */
+	list_for_each_entry_safe(dpaa_fq, tmp, &priv->dpaa_fq_list, list) {
+		err = dpaa_fq_init(dpaa_fq, false);
+		if (err < 0)
+			goto fq_alloc_failed;
+	}
+
+	priv->tx_headroom = dpaa_get_headroom(&priv->buf_layout[TX]);
+	priv->rx_headroom = dpaa_get_headroom(&priv->buf_layout[RX]);
+
+	/* All real interfaces need their ports initialized */
+	dpaa_eth_init_ports(mac_dev, dpaa_bps, DPAA_BPS_NUM, &port_fqs,
+			    &priv->buf_layout[0], dev);
+
+	priv->percpu_priv = devm_alloc_percpu(dev, *priv->percpu_priv);
+	if (!priv->percpu_priv) {
+		dev_err(dev, "devm_alloc_percpu() failed\n");
+		err = -ENOMEM;
+		goto alloc_percpu_failed;
+	}
+	for_each_possible_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+		memset(percpu_priv, 0, sizeof(*percpu_priv));
+	}
+
+	/* Initialize NAPI */
+	err = dpaa_napi_add(net_dev);
+	if (err < 0)
+		goto napi_add_failed;
+
+	err = dpaa_netdev_init(net_dev, &dpaa_ops, tx_timeout);
+	if (err < 0)
+		goto netdev_init_failed;
+
+	netif_info(priv, probe, net_dev, "Probed interface %s\n",
+		   net_dev->name);
+
+	return 0;
+
+netdev_init_failed:
+napi_add_failed:
+	dpaa_napi_del(net_dev);
+alloc_percpu_failed:
+	dpaa_fq_free(dev, &priv->dpaa_fq_list);
+fq_alloc_failed:
+	qman_delete_cgr_safe(&priv->ingress_cgr);
+	qman_release_cgrid(priv->ingress_cgr.cgrid);
+rx_cgr_init_failed:
+	qman_delete_cgr_safe(&priv->cgr_data.cgr);
+	qman_release_cgrid(priv->cgr_data.cgr.cgrid);
+tx_cgr_init_failed:
+get_channel_failed:
+	dpaa_bps_free(priv);
+bp_create_failed:
+fq_probe_failed:
+dev_mask_failed:
+mac_probe_failed:
+	dev_set_drvdata(dev, NULL);
+	free_netdev(net_dev);
+alloc_etherdev_mq_failed:
+	for (i = 0; i < DPAA_BPS_NUM && dpaa_bps[i]; i++) {
+		if (atomic_read(&dpaa_bps[i]->refs) == 0)
+			devm_kfree(dev, dpaa_bps[i]);
+	}
+	return err;
+}
+
+static int dpaa_remove(struct platform_device *pdev)
+{
+	struct net_device *net_dev;
+	struct dpaa_priv *priv;
+	struct device *dev;
+	int err;
+
+	dev = &pdev->dev;
+	net_dev = dev_get_drvdata(dev);
+
+	priv = netdev_priv(net_dev);
+
+	dev_set_drvdata(dev, NULL);
+	unregister_netdev(net_dev);
+
+	err = dpaa_fq_free(dev, &priv->dpaa_fq_list);
+
+	qman_delete_cgr_safe(&priv->ingress_cgr);
+	qman_release_cgrid(priv->ingress_cgr.cgrid);
+	qman_delete_cgr_safe(&priv->cgr_data.cgr);
+	qman_release_cgrid(priv->cgr_data.cgr.cgrid);
+
+	dpaa_napi_del(net_dev);
+
+	dpaa_bps_free(priv);
+
+	free_netdev(net_dev);
+
+	return err;
+}
+
+static struct platform_device_id dpaa_devtype[] = {
+	{
+		.name = "dpaa-ethernet",
+		.driver_data = 0,
+	}, {
+	}
+};
+MODULE_DEVICE_TABLE(platform, dpaa_devtype);
+
+static struct platform_driver dpaa_driver = {
+	.driver = {
+		.name = KBUILD_MODNAME,
+	},
+	.id_table = dpaa_devtype,
+	.probe = dpaa_eth_probe,
+	.remove = dpaa_remove
+};
+
+static int __init dpaa_load(void)
+{
+	int err;
+
+	pr_debug("FSL DPAA Ethernet driver\n");
+
+	/* initialize dpaa_eth mirror values */
+	dpaa_rx_extra_headroom = fman_get_rx_extra_headroom();
+	dpaa_max_frm = fman_get_max_frm();
+
+	err = platform_driver_register(&dpaa_driver);
+	if (err < 0)
+		pr_err("Error, platform_driver_register() = %d\n", err);
+
+	return err;
+}
+module_init(dpaa_load);
+
+static void __exit dpaa_unload(void)
+{
+	platform_driver_unregister(&dpaa_driver);
+
+	/* Only one channel is used and needs to be released after all
+	 * interfaces are removed
+	 */
+	dpaa_release_channel();
+}
+module_exit(dpaa_unload);
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_DESCRIPTION("FSL DPAA Ethernet driver");
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
new file mode 100644
index 0000000..fe98e08
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -0,0 +1,144 @@
+/* Copyright 2008 - 2016 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_H
+#define __DPAA_H
+
+#include <linux/netdevice.h>
+#include <soc/fsl/qman.h>
+#include <soc/fsl/bman.h>
+
+#include "fman.h"
+#include "mac.h"
+
+#define DPAA_ETH_TXQ_NUM	NR_CPUS
+
+#define DPAA_BPS_NUM 3 /* number of bpools per interface */
+
+/* More detailed FQ types - used for fine-grained WQ assignments */
+enum dpaa_fq_type {
+	FQ_TYPE_RX_DEFAULT = 1, /* Rx Default FQs */
+	FQ_TYPE_RX_ERROR,	/* Rx Error FQs */
+	FQ_TYPE_TX,		/* "Real" Tx FQs */
+	FQ_TYPE_TX_CONFIRM,	/* Tx default Conf FQ (actually an Rx FQ) */
+	FQ_TYPE_TX_CONF_MQ,	/* Tx conf FQs (one for each Tx FQ) */
+	FQ_TYPE_TX_ERROR,	/* Tx Error FQs (these are actually Rx FQs) */
+};
+
+struct dpaa_fq {
+	struct qman_fq fq_base;
+	struct list_head list;
+	struct net_device *net_dev;
+	bool init;
+	u32 fqid;
+	u32 flags;
+	u16 channel;
+	u8 wq;
+	enum dpaa_fq_type fq_type;
+};
+
+struct dpaa_fq_cbs {
+	struct qman_fq rx_defq;
+	struct qman_fq tx_defq;
+	struct qman_fq rx_errq;
+	struct qman_fq tx_errq;
+	struct qman_fq egress_ern;
+};
+
+struct dpaa_bp {
+	/* device used in the DMA mapping operations */
+	struct device *dev;
+	/* current number of buffers in the buffer pool allotted to each CPU */
+	int __percpu *percpu_count;
+	/* all buffers allocated for this pool have this raw size */
+	size_t raw_size;
+	/* all buffers in this pool have this same usable size */
+	size_t size;
+	/* the buffer pools are initialized with config_count buffers for each
+	 * CPU; at runtime the number of buffers per CPU is constantly brought
+	 * back to this level
+	 */
+	u16 config_count;
+	u8 bpid;
+	struct bman_pool *pool;
+	/* bpool can be seeded before use by this cb */
+	int (*seed_cb)(struct dpaa_bp *);
+	/* bpool can be emptied before freeing by this cb */
+	void (*free_buf_cb)(const struct dpaa_bp *, struct bm_buffer *);
+	atomic_t refs;
+};
+
+struct dpaa_napi_portal {
+	struct napi_struct napi;
+	struct qman_portal *p;
+	bool down;
+};
+
+struct dpaa_percpu_priv {
+	struct net_device *net_dev;
+	struct dpaa_napi_portal np;
+	struct rtnl_link_stats64 stats;
+};
+
+struct dpaa_buffer_layout {
+	u16 priv_data_size;
+};
+
+struct dpaa_priv {
+	struct dpaa_percpu_priv __percpu *percpu_priv;
+	struct dpaa_bp *dpaa_bps[DPAA_BPS_NUM];
+	/* Store here the needed Tx headroom for convenience and speed
+	 * (even though it can be computed based on the fields of buf_layout)
+	 */
+	u16 tx_headroom;
+	struct net_device *net_dev;
+	struct mac_device *mac_dev;
+	struct qman_fq *egress_fqs[DPAA_ETH_TXQ_NUM];
+	struct qman_fq *conf_fqs[DPAA_ETH_TXQ_NUM];
+
+	u16 channel;
+	struct list_head dpaa_fq_list;
+
+	u32 msg_enable;	/* net_device message level */
+
+	struct {
+		/* All egress queues to a given net device belong to one
+		 * (and the same) congestion group.
+		 */
+		struct qman_cgr cgr;
+	} cgr_data;
+	/* Use a per-port CGR for ingress traffic. */
+	bool use_ingress_cgr;
+	struct qman_cgr ingress_cgr;
+
+	struct dpaa_buffer_layout buf_layout[2];
+	u16 rx_headroom;
+};
+#endif	/* __DPAA_H */
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v7 03/10] dpaa_eth: add option to use one buffer pool set
  2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
  2016-11-11  8:19 ` [PATCH net-next v7 01/10] devres: add devm_alloc_percpu() Madalin Bucur
  2016-11-11  8:19 ` [PATCH net-next v7 02/10] dpaa_eth: add support for DPAA Ethernet Madalin Bucur
@ 2016-11-11  8:20 ` Madalin Bucur
  2016-11-13 17:46   ` David Miller
  2016-11-11  8:20 ` [PATCH net-next v7 04/10] dpaa_eth: add ethtool functionality Madalin Bucur
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:20 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/Kconfig    |  9 +++++++++
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 23 +++++++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/drivers/net/ethernet/freescale/dpaa/Kconfig b/drivers/net/ethernet/freescale/dpaa/Kconfig
index f3a3454..c0ebc92 100644
--- a/drivers/net/ethernet/freescale/dpaa/Kconfig
+++ b/drivers/net/ethernet/freescale/dpaa/Kconfig
@@ -8,3 +8,12 @@ menuconfig FSL_DPAA_ETH
 	  supporting the Freescale QorIQ chips.
 	  Depends on Freescale Buffer Manager and Queue Manager
 	  driver and Frame Manager Driver.
+
+if FSL_DPAA_ETH
+config FSL_DPAA_ETH_COMMON_BPOOL
+	bool "Use a common buffer pool set for all the interfaces"
+	---help---
+	  The DPAA Ethernet netdevices require buffer pools for storing the buffers
+	  used by the FMan hardware for reception. One can use a single buffer pool
+	  set for all interfaces or a dedicated buffer pool set for each interface.
+endif # FSL_DPAA_ETH
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 8b9b0720f..aa3a155 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -158,6 +158,11 @@ struct fm_port_fqs {
 	struct dpaa_fq *rx_errq;
 };
 
+#ifdef CONFIG_FSL_DPAA_ETH_COMMON_BPOOL
+/* These bpools are shared by all the dpaa interfaces */
+static u8 dpaa_common_bpids[DPAA_BPS_NUM];
+#endif
+
 /* All the dpa bps in use at any moment */
 static struct dpaa_bp *dpaa_bp_array[BM_MAX_NUM_OF_POOLS];
 
@@ -2470,6 +2475,12 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 	for (i = 0; i < DPAA_BPS_NUM; i++) {
 		int err;
 
+#ifdef CONFIG_FSL_DPAA_ETH_COMMON_BPOOL
+		/* if another interface already probed the bps, reuse those */
+		dpaa_bps[i] = (dpaa_common_bpids[i] != FSL_DPAA_BPID_INV) ?
+				dpaa_bpid2pool(dpaa_common_bpids[i]) : NULL;
+		if (!dpaa_bps[i]) {
+#endif
 		dpaa_bps[i] = dpaa_bp_alloc(dev);
 		if (IS_ERR(dpaa_bps[i]))
 			return PTR_ERR(dpaa_bps[i]);
@@ -2485,6 +2496,11 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 			priv->dpaa_bps[i] = NULL;
 			goto bp_create_failed;
 		}
+#ifdef CONFIG_FSL_DPAA_ETH_COMMON_BPOOL
+		}
+		dpaa_common_bpids[i] = dpaa_bps[i]->bpid;
+		dpaa_bps[i] = (dpaa_bpid2pool(dpaa_common_bpids[i]));
+#endif
 		priv->dpaa_bps[i] = dpaa_bps[i];
 	}
 
@@ -2659,6 +2675,13 @@ static int __init dpaa_load(void)
 	dpaa_rx_extra_headroom = fman_get_rx_extra_headroom();
 	dpaa_max_frm = fman_get_max_frm();
 
+#ifdef CONFIG_FSL_DPAA_ETH_COMMON_BPOOL
+	/* set initial invalid values; the first interface probe will set the
+	 * correct values, to be shared by the other interfaces
+	 */
+	memset(dpaa_common_bpids, FSL_DPAA_BPID_INV, sizeof(dpaa_common_bpids));
+#endif
+
 	err = platform_driver_register(&dpaa_driver);
 	if (err < 0)
 		pr_err("Error, platform_driver_register() = %d\n", err);
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v7 04/10] dpaa_eth: add ethtool functionality
  2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (2 preceding siblings ...)
  2016-11-11  8:20 ` [PATCH net-next v7 03/10] dpaa_eth: add option to use one buffer pool set Madalin Bucur
@ 2016-11-11  8:20 ` Madalin Bucur
  2016-11-11  8:20 ` [PATCH net-next v7 05/10] dpaa_eth: add ethtool statistics Madalin Bucur
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:20 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Add support for basic ethtool operations.
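
For reference, the operations wired up here map onto the usual ethtool
invocations, e.g. (the interface name below is only an example):

  ethtool eth0          # link settings
  ethtool -i eth0       # driver info
  ethtool -a eth0       # query pause frame parameters
  ethtool -A eth0 rx on tx on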

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/Makefile       |   2 +-
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c     |   2 +
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h     |   3 +
 drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c | 218 +++++++++++++++++++++
 4 files changed, 224 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c

diff --git a/drivers/net/ethernet/freescale/dpaa/Makefile b/drivers/net/ethernet/freescale/dpaa/Makefile
index fc76029..43a4cfd 100644
--- a/drivers/net/ethernet/freescale/dpaa/Makefile
+++ b/drivers/net/ethernet/freescale/dpaa/Makefile
@@ -8,4 +8,4 @@ ccflags-y += -I$(FMAN)
 
 obj-$(CONFIG_FSL_DPAA_ETH) += fsl_dpa.o
 
-fsl_dpa-objs += dpaa_eth.o
+fsl_dpa-objs += dpaa_eth.o dpaa_ethtool.o
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index aa3a155..84aa0a77 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -245,6 +245,8 @@ static int dpaa_netdev_init(struct net_device *net_dev,
 	memcpy(net_dev->perm_addr, mac_addr, net_dev->addr_len);
 	memcpy(net_dev->dev_addr, mac_addr, net_dev->addr_len);
 
+	net_dev->ethtool_ops = &dpaa_ethtool_ops;
+
 	net_dev->needed_headroom = priv->tx_headroom;
 	net_dev->watchdog_timeo = msecs_to_jiffies(tx_timeout);
 
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
index fe98e08..d6ab335 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -141,4 +141,7 @@ struct dpaa_priv {
 	struct dpaa_buffer_layout buf_layout[2];
 	u16 rx_headroom;
 };
+
+/* from dpaa_ethtool.c */
+extern const struct ethtool_ops dpaa_ethtool_ops;
 #endif	/* __DPAA_H */
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
new file mode 100644
index 0000000..3580a62
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
@@ -0,0 +1,218 @@
+/* Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/string.h>
+
+#include "dpaa_eth.h"
+#include "mac.h"
+
+static int dpaa_get_settings(struct net_device *net_dev,
+			     struct ethtool_cmd *et_cmd)
+{
+	int err;
+
+	if (!net_dev->phydev) {
+		netdev_dbg(net_dev, "phy device not initialized\n");
+		return 0;
+	}
+
+	err = phy_ethtool_gset(net_dev->phydev, et_cmd);
+
+	return err;
+}
+
+static int dpaa_set_settings(struct net_device *net_dev,
+			     struct ethtool_cmd *et_cmd)
+{
+	int err;
+
+	if (!net_dev->phydev) {
+		netdev_err(net_dev, "phy device not initialized\n");
+		return -ENODEV;
+	}
+
+	err = phy_ethtool_sset(net_dev->phydev, et_cmd);
+	if (err < 0)
+		netdev_err(net_dev, "phy_ethtool_sset() = %d\n", err);
+
+	return err;
+}
+
+static void dpaa_get_drvinfo(struct net_device *net_dev,
+			     struct ethtool_drvinfo *drvinfo)
+{
+	int len;
+
+	strlcpy(drvinfo->driver, KBUILD_MODNAME,
+		sizeof(drvinfo->driver));
+	len = snprintf(drvinfo->version, sizeof(drvinfo->version),
+		       "%X", 0);
+	len = snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
+		       "%X", 0);
+
+	if (len >= sizeof(drvinfo->fw_version)) {
+		/* Truncated output */
+		netdev_notice(net_dev, "snprintf() = %d\n", len);
+	}
+	strlcpy(drvinfo->bus_info, dev_name(net_dev->dev.parent->parent),
+		sizeof(drvinfo->bus_info));
+}
+
+static u32 dpaa_get_msglevel(struct net_device *net_dev)
+{
+	return ((struct dpaa_priv *)netdev_priv(net_dev))->msg_enable;
+}
+
+static void dpaa_set_msglevel(struct net_device *net_dev,
+			      u32 msg_enable)
+{
+	((struct dpaa_priv *)netdev_priv(net_dev))->msg_enable = msg_enable;
+}
+
+static int dpaa_nway_reset(struct net_device *net_dev)
+{
+	int err;
+
+	if (!net_dev->phydev) {
+		netdev_err(net_dev, "phy device not initialized\n");
+		return -ENODEV;
+	}
+
+	err = 0;
+	if (net_dev->phydev->autoneg) {
+		err = phy_start_aneg(net_dev->phydev);
+		if (err < 0)
+			netdev_err(net_dev, "phy_start_aneg() = %d\n",
+				   err);
+	}
+
+	return err;
+}
+
+static void dpaa_get_pauseparam(struct net_device *net_dev,
+				struct ethtool_pauseparam *epause)
+{
+	struct mac_device *mac_dev;
+	struct dpaa_priv *priv;
+
+	priv = netdev_priv(net_dev);
+	mac_dev = priv->mac_dev;
+
+	if (!net_dev->phydev) {
+		netdev_err(net_dev, "phy device not initialized\n");
+		return;
+	}
+
+	epause->autoneg = mac_dev->autoneg_pause;
+	epause->rx_pause = mac_dev->rx_pause_active;
+	epause->tx_pause = mac_dev->tx_pause_active;
+}
+
+static int dpaa_set_pauseparam(struct net_device *net_dev,
+			       struct ethtool_pauseparam *epause)
+{
+	struct mac_device *mac_dev;
+	struct phy_device *phydev;
+	bool rx_pause, tx_pause;
+	struct dpaa_priv *priv;
+	u32 newadv, oldadv;
+	int err;
+
+	priv = netdev_priv(net_dev);
+	mac_dev = priv->mac_dev;
+
+	phydev = net_dev->phydev;
+	if (!phydev) {
+		netdev_err(net_dev, "phy device not initialized\n");
+		return -ENODEV;
+	}
+
+	if (!(phydev->supported & SUPPORTED_Pause) ||
+	    (!(phydev->supported & SUPPORTED_Asym_Pause) &&
+	    (epause->rx_pause != epause->tx_pause)))
+		return -EINVAL;
+
+	/* The MAC should know how to handle PAUSE frame autonegotiation before
+	 * adjust_link is triggered by a forced renegotiation of sym/asym PAUSE
+	 * settings.
+	 */
+	mac_dev->autoneg_pause = !!epause->autoneg;
+	mac_dev->rx_pause_req = !!epause->rx_pause;
+	mac_dev->tx_pause_req = !!epause->tx_pause;
+
+	/* Determine the sym/asym advertised PAUSE capabilities from the desired
+	 * rx/tx pause settings.
+	 */
+	newadv = 0;
+	if (epause->rx_pause)
+		newadv = ADVERTISED_Pause | ADVERTISED_Asym_Pause;
+	if (epause->tx_pause)
+		newadv |= ADVERTISED_Asym_Pause;
+
+	oldadv = phydev->advertising &
+			(ADVERTISED_Pause | ADVERTISED_Asym_Pause);
+
+	/* If there are differences between the old and the new advertised
+	 * values, restart PHY autonegotiation and advertise the new values.
+	 */
+	if (oldadv != newadv) {
+		phydev->advertising &= ~(ADVERTISED_Pause
+				| ADVERTISED_Asym_Pause);
+		phydev->advertising |= newadv;
+		if (phydev->autoneg) {
+			err = phy_start_aneg(phydev);
+			if (err < 0)
+				netdev_err(net_dev, "phy_start_aneg() = %d\n",
+					   err);
+		}
+	}
+
+	fman_get_pause_cfg(mac_dev, &rx_pause, &tx_pause);
+	err = fman_set_mac_active_pause(mac_dev, rx_pause, tx_pause);
+	if (err < 0)
+		netdev_err(net_dev, "set_mac_active_pause() = %d\n", err);
+
+	return err;
+}
+
+const struct ethtool_ops dpaa_ethtool_ops = {
+	.get_settings = dpaa_get_settings,
+	.set_settings = dpaa_set_settings,
+	.get_drvinfo = dpaa_get_drvinfo,
+	.get_msglevel = dpaa_get_msglevel,
+	.set_msglevel = dpaa_set_msglevel,
+	.nway_reset = dpaa_nway_reset,
+	.get_pauseparam = dpaa_get_pauseparam,
+	.set_pauseparam = dpaa_set_pauseparam,
+	.get_link = ethtool_op_get_link,
+};
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v7 05/10] dpaa_eth: add ethtool statistics
  2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (3 preceding siblings ...)
  2016-11-11  8:20 ` [PATCH net-next v7 04/10] dpaa_eth: add ethtool functionality Madalin Bucur
@ 2016-11-11  8:20 ` Madalin Bucur
  2016-11-11  8:20 ` [PATCH net-next v7 06/10] dpaa_eth: add sysfs exports Madalin Bucur
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:20 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Add a series of counters to be exported through ethtool:
- add detailed counters for reception errors;
- add detailed counters for QMan enqueue reject events;
- count the number of fragmented skbs received from the stack;
- count all frames received on the Tx confirmation path;
- add congestion group statistics;
- count the number of interrupts for each CPU.
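
All of the counters above are exported through the standard ethtool
statistics interface and can be read with, e.g. (the interface name is
only an example):

  ethtool -S eth0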

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c     |  54 +++++-
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h     |  33 ++++
 drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c | 199 +++++++++++++++++++++
 3 files changed, 284 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 84aa0a77..d59a08c 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -705,10 +705,15 @@ static void dpaa_eth_cgscn(struct qman_portal *qm, struct qman_cgr *cgr,
 	struct dpaa_priv *priv = (struct dpaa_priv *)container_of(cgr,
 		struct dpaa_priv, cgr_data.cgr);
 
-	if (congested)
+	if (congested) {
+		priv->cgr_data.congestion_start_jiffies = jiffies;
 		netif_tx_stop_all_queues(priv->net_dev);
-	else
+		priv->cgr_data.cgr_congested_count++;
+	} else {
+		priv->cgr_data.congested_jiffies +=
+			(jiffies - priv->cgr_data.congestion_start_jiffies);
 		netif_tx_wake_all_queues(priv->net_dev);
+	}
 }
 
 static int dpaa_eth_cgr_init(struct dpaa_priv *priv)
@@ -1222,6 +1227,37 @@ static void dpaa_fd_release(const struct net_device *net_dev,
 	dpaa_bman_release(dpaa_bp, &bmb, 1);
 }
 
+static void count_ern(struct dpaa_percpu_priv *percpu_priv,
+		      const union qm_mr_entry *msg)
+{
+	switch (msg->ern.rc & QM_MR_RC_MASK) {
+	case QM_MR_RC_CGR_TAILDROP:
+		percpu_priv->ern_cnt.cg_tdrop++;
+		break;
+	case QM_MR_RC_WRED:
+		percpu_priv->ern_cnt.wred++;
+		break;
+	case QM_MR_RC_ERROR:
+		percpu_priv->ern_cnt.err_cond++;
+		break;
+	case QM_MR_RC_ORPWINDOW_EARLY:
+		percpu_priv->ern_cnt.early_window++;
+		break;
+	case QM_MR_RC_ORPWINDOW_LATE:
+		percpu_priv->ern_cnt.late_window++;
+		break;
+	case QM_MR_RC_FQ_TAILDROP:
+		percpu_priv->ern_cnt.fq_tdrop++;
+		break;
+	case QM_MR_RC_ORPWINDOW_RETIRED:
+		percpu_priv->ern_cnt.fq_retired++;
+		break;
+	case QM_MR_RC_ORP_ZERO:
+		percpu_priv->ern_cnt.orp_zero++;
+		break;
+	}
+}
+
 /* Turn on HW checksum computation for this outgoing frame.
  * If the current protocol is not something we support in this regard
  * (or if the stack has already computed the SW checksum), we do nothing.
@@ -1887,6 +1923,7 @@ static int dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
 	    likely(skb_shinfo(skb)->nr_frags < DPAA_SGT_MAX_ENTRIES)) {
 		/* Just create a S/G fd based on the skb */
 		err = skb_to_sg_fd(priv, skb, &fd);
+		percpu_priv->tx_frag_skbuffs++;
 	} else {
 		/* If the egress skb contains more fragments than we support
 		 * we have no choice but to linearize it ourselves.
@@ -1923,6 +1960,15 @@ static void dpaa_rx_error(struct net_device *net_dev,
 
 	percpu_priv->stats.rx_errors++;
 
+	if (fd->status & FM_FD_ERR_DMA)
+		percpu_priv->rx_errors.dme++;
+	if (fd->status & FM_FD_ERR_PHYSICAL)
+		percpu_priv->rx_errors.fpe++;
+	if (fd->status & FM_FD_ERR_SIZE)
+		percpu_priv->rx_errors.fse++;
+	if (fd->status & FM_FD_ERR_PRS_HDR_ERR)
+		percpu_priv->rx_errors.phe++;
+
 	dpaa_fd_release(net_dev, fd);
 }
 
@@ -1978,6 +2024,8 @@ static void dpaa_tx_conf(struct net_device *net_dev,
 		percpu_priv->stats.tx_errors++;
 	}
 
+	percpu_priv->tx_confirm++;
+
 	skb = dpaa_cleanup_tx_fd(priv, fd);
 
 	consume_skb(skb);
@@ -1992,6 +2040,7 @@ static inline int dpaa_eth_napi_schedule(struct dpaa_percpu_priv *percpu_priv,
 
 		percpu_priv->np.p = portal;
 		napi_schedule(&percpu_priv->np.napi);
+		percpu_priv->in_interrupt++;
 		return 1;
 	}
 	return 0;
@@ -2176,6 +2225,7 @@ static void egress_ern(struct qman_portal *portal,
 
 	percpu_priv->stats.tx_dropped++;
 	percpu_priv->stats.tx_fifo_errors++;
+	count_ern(percpu_priv, msg);
 
 	skb = dpaa_cleanup_tx_fd(priv, fd);
 	dev_kfree_skb_any(skb);
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
index d6ab335..711fb06 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -95,6 +95,25 @@ struct dpaa_bp {
 	atomic_t refs;
 };
 
+struct dpaa_rx_errors {
+	u64 dme;		/* DMA Error */
+	u64 fpe;		/* Frame Physical Error */
+	u64 fse;		/* Frame Size Error */
+	u64 phe;		/* Header Error */
+};
+
+/* Counters for QMan ERN frames - one counter per rejection code */
+struct dpaa_ern_cnt {
+	u64 cg_tdrop;		/* Congestion group taildrop */
+	u64 wred;		/* WRED congestion */
+	u64 err_cond;		/* Error condition */
+	u64 early_window;	/* Order restoration, frame too early */
+	u64 late_window;	/* Order restoration, frame too late */
+	u64 fq_tdrop;		/* FQ taildrop */
+	u64 fq_retired;		/* FQ is retired */
+	u64 orp_zero;		/* ORP disabled */
+};
+
 struct dpaa_napi_portal {
 	struct napi_struct napi;
 	struct qman_portal *p;
@@ -104,7 +123,13 @@ struct dpaa_napi_portal {
 struct dpaa_percpu_priv {
 	struct net_device *net_dev;
 	struct dpaa_napi_portal np;
+	u64 in_interrupt;
+	u64 tx_confirm;
+	/* fragmented (non-linear) skbuffs received from the stack */
+	u64 tx_frag_skbuffs;
 	struct rtnl_link_stats64 stats;
+	struct dpaa_rx_errors rx_errors;
+	struct dpaa_ern_cnt ern_cnt;
 };
 
 struct dpaa_buffer_layout {
@@ -133,6 +158,14 @@ struct dpaa_priv {
 		 * (and the same) congestion group.
 		 */
 		struct qman_cgr cgr;
+		/* If congested, when it began. Used for performance stats. */
+		u32 congestion_start_jiffies;
+		/* Number of jiffies the Tx port was congested. */
+		u32 congested_jiffies;
+		/* Counter for the number of times the CGR
+		 * entered congestion state
+		 */
+		u32 cgr_congested_count;
 	} cgr_data;
 	/* Use a per-port CGR for ingress traffic. */
 	bool use_ingress_cgr;
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
index 3580a62..27e7044 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
@@ -36,6 +36,42 @@
 #include "dpaa_eth.h"
 #include "mac.h"
 
+static const char dpaa_stats_percpu[][ETH_GSTRING_LEN] = {
+	"interrupts",
+	"rx packets",
+	"tx packets",
+	"tx confirm",
+	"tx S/G",
+	"tx error",
+	"rx error",
+};
+
+static char dpaa_stats_global[][ETH_GSTRING_LEN] = {
+	/* dpa rx errors */
+	"rx dma error",
+	"rx frame physical error",
+	"rx frame size error",
+	"rx header error",
+
+	/* demultiplexing errors */
+	"qman cg_tdrop",
+	"qman wred",
+	"qman error cond",
+	"qman early window",
+	"qman late window",
+	"qman fq tdrop",
+	"qman fq retired",
+	"qman orp disabled",
+
+	/* congestion related stats */
+	"congestion time (ms)",
+	"entered congestion",
+	"congested (0/1)"
+};
+
+#define DPAA_STATS_PERCPU_LEN ARRAY_SIZE(dpaa_stats_percpu)
+#define DPAA_STATS_GLOBAL_LEN ARRAY_SIZE(dpaa_stats_global)
+
 static int dpaa_get_settings(struct net_device *net_dev,
 			     struct ethtool_cmd *et_cmd)
 {
@@ -205,6 +241,166 @@ static int dpaa_set_pauseparam(struct net_device *net_dev,
 	return err;
 }
 
+static int dpaa_get_sset_count(struct net_device *net_dev, int type)
+{
+	unsigned int total_stats, num_stats;
+
+	num_stats   = num_online_cpus() + 1;
+	total_stats = num_stats * (DPAA_STATS_PERCPU_LEN + DPAA_BPS_NUM) +
+			DPAA_STATS_GLOBAL_LEN;
+
+	switch (type) {
+	case ETH_SS_STATS:
+		return total_stats;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static void copy_stats(struct dpaa_percpu_priv *percpu_priv, int num_cpus,
+		       int crr_cpu, u64 *bp_count, u64 *data)
+{
+	int num_values = num_cpus + 1;
+	int crr = 0, j;
+
+	/* update current CPU's stats and also add them to the total values */
+	data[crr * num_values + crr_cpu] = percpu_priv->in_interrupt;
+	data[crr++ * num_values + num_cpus] += percpu_priv->in_interrupt;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->stats.rx_packets;
+	data[crr++ * num_values + num_cpus] += percpu_priv->stats.rx_packets;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->stats.tx_packets;
+	data[crr++ * num_values + num_cpus] += percpu_priv->stats.tx_packets;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->tx_confirm;
+	data[crr++ * num_values + num_cpus] += percpu_priv->tx_confirm;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->tx_frag_skbuffs;
+	data[crr++ * num_values + num_cpus] += percpu_priv->tx_frag_skbuffs;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->stats.tx_errors;
+	data[crr++ * num_values + num_cpus] += percpu_priv->stats.tx_errors;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->stats.rx_errors;
+	data[crr++ * num_values + num_cpus] += percpu_priv->stats.rx_errors;
+
+	for (j = 0; j < DPAA_BPS_NUM; j++) {
+		data[crr * num_values + crr_cpu] = bp_count[j];
+		data[crr++ * num_values + num_cpus] += bp_count[j];
+	}
+}
+
+static void dpaa_get_ethtool_stats(struct net_device *net_dev,
+				   struct ethtool_stats *stats, u64 *data)
+{
+	u64 bp_count[DPAA_BPS_NUM], cg_time, cg_num;
+	struct dpaa_percpu_priv *percpu_priv;
+	struct dpaa_rx_errors rx_errors;
+	unsigned int num_cpus, offset;
+	struct dpaa_ern_cnt ern_cnt;
+	struct dpaa_bp *dpaa_bp;
+	struct dpaa_priv *priv;
+	int total_stats, i, j;
+	bool cg_status;
+
+	total_stats = dpaa_get_sset_count(net_dev, ETH_SS_STATS);
+	priv     = netdev_priv(net_dev);
+	num_cpus = num_online_cpus();
+
+	memset(&bp_count, 0, sizeof(bp_count));
+	memset(&rx_errors, 0, sizeof(struct dpaa_rx_errors));
+	memset(&ern_cnt, 0, sizeof(struct dpaa_ern_cnt));
+	memset(data, 0, total_stats * sizeof(u64));
+
+	for_each_online_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+		for (j = 0; j < DPAA_BPS_NUM; j++) {
+			dpaa_bp = priv->dpaa_bps[j];
+			if (!dpaa_bp->percpu_count)
+				continue;
+			bp_count[j] = *(per_cpu_ptr(dpaa_bp->percpu_count, i));
+		}
+		rx_errors.dme += percpu_priv->rx_errors.dme;
+		rx_errors.fpe += percpu_priv->rx_errors.fpe;
+		rx_errors.fse += percpu_priv->rx_errors.fse;
+		rx_errors.phe += percpu_priv->rx_errors.phe;
+
+		ern_cnt.cg_tdrop     += percpu_priv->ern_cnt.cg_tdrop;
+		ern_cnt.wred         += percpu_priv->ern_cnt.wred;
+		ern_cnt.err_cond     += percpu_priv->ern_cnt.err_cond;
+		ern_cnt.early_window += percpu_priv->ern_cnt.early_window;
+		ern_cnt.late_window  += percpu_priv->ern_cnt.late_window;
+		ern_cnt.fq_tdrop     += percpu_priv->ern_cnt.fq_tdrop;
+		ern_cnt.fq_retired   += percpu_priv->ern_cnt.fq_retired;
+		ern_cnt.orp_zero     += percpu_priv->ern_cnt.orp_zero;
+
+		copy_stats(percpu_priv, num_cpus, i, bp_count, data);
+	}
+
+	offset = (num_cpus + 1) * (DPAA_STATS_PERCPU_LEN + DPAA_BPS_NUM);
+	memcpy(data + offset, &rx_errors, sizeof(struct dpaa_rx_errors));
+
+	offset += sizeof(struct dpaa_rx_errors) / sizeof(u64);
+	memcpy(data + offset, &ern_cnt, sizeof(struct dpaa_ern_cnt));
+
+	/* gather congestion related counters */
+	cg_num    = 0;
+	cg_status = 0;
+	cg_time   = jiffies_to_msecs(priv->cgr_data.congested_jiffies);
+	if (qman_query_cgr_congested(&priv->cgr_data.cgr, &cg_status) == 0) {
+		cg_num    = priv->cgr_data.cgr_congested_count;
+
+		/* reset congestion stats (like the QMan API does) */
+		priv->cgr_data.congested_jiffies   = 0;
+		priv->cgr_data.cgr_congested_count = 0;
+	}
+
+	offset += sizeof(struct dpaa_ern_cnt) / sizeof(u64);
+	data[offset++] = cg_time;
+	data[offset++] = cg_num;
+	data[offset++] = cg_status;
+}
+
+static void dpaa_get_strings(struct net_device *net_dev, u32 stringset,
+			     u8 *data)
+{
+	unsigned int i, j, num_cpus, size;
+	char string_cpu[ETH_GSTRING_LEN];
+	u8 *strings;
+
+	memset(string_cpu, 0, sizeof(string_cpu));
+	strings   = data;
+	num_cpus  = num_online_cpus();
+	size      = DPAA_STATS_GLOBAL_LEN * ETH_GSTRING_LEN;
+
+	for (i = 0; i < DPAA_STATS_PERCPU_LEN; i++) {
+		for (j = 0; j < num_cpus; j++) {
+			snprintf(string_cpu, ETH_GSTRING_LEN, "%s [CPU %d]",
+				 dpaa_stats_percpu[i], j);
+			memcpy(strings, string_cpu, ETH_GSTRING_LEN);
+			strings += ETH_GSTRING_LEN;
+		}
+		snprintf(string_cpu, ETH_GSTRING_LEN, "%s [TOTAL]",
+			 dpaa_stats_percpu[i]);
+		memcpy(strings, string_cpu, ETH_GSTRING_LEN);
+		strings += ETH_GSTRING_LEN;
+	}
+	for (i = 0; i < DPAA_BPS_NUM; i++) {
+		for (j = 0; j < num_cpus; j++) {
+			snprintf(string_cpu, ETH_GSTRING_LEN,
+				 "bpool %c [CPU %d]", 'a' + i, j);
+			memcpy(strings, string_cpu, ETH_GSTRING_LEN);
+			strings += ETH_GSTRING_LEN;
+		}
+		snprintf(string_cpu, ETH_GSTRING_LEN, "bpool %c [TOTAL]",
+			 'a' + i);
+		memcpy(strings, string_cpu, ETH_GSTRING_LEN);
+		strings += ETH_GSTRING_LEN;
+	}
+	memcpy(strings, dpaa_stats_global, size);
+}
+
 const struct ethtool_ops dpaa_ethtool_ops = {
 	.get_settings = dpaa_get_settings,
 	.set_settings = dpaa_set_settings,
@@ -215,4 +411,7 @@ const struct ethtool_ops dpaa_ethtool_ops = {
 	.get_pauseparam = dpaa_get_pauseparam,
 	.set_pauseparam = dpaa_set_pauseparam,
 	.get_link = ethtool_op_get_link,
+	.get_sset_count = dpaa_get_sset_count,
+	.get_ethtool_stats = dpaa_get_ethtool_stats,
+	.get_strings = dpaa_get_strings,
 };
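
For context, with the above, ethtool -S on a DPAA interface would list each
per-CPU counter once per online CPU plus a [TOTAL] column, followed by the
per-buffer-pool counts and then the global counters. An illustrative
excerpt (interface and values are made up; only the ordering follows
dpaa_get_strings()):

     interrupts [CPU 0]: 1200
     interrupts [CPU 1]: 1180
     interrupts [TOTAL]: 2380
     ...
     bpool a [CPU 0]: 128
     bpool a [CPU 1]: 128
     bpool a [TOTAL]: 256
     ...
     rx dma error: 0
     congestion time (ms): 0
     entered congestion: 0
     congested (0/1): 0
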
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v7 06/10] dpaa_eth: add sysfs exports
  2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (4 preceding siblings ...)
  2016-11-11  8:20 ` [PATCH net-next v7 05/10] dpaa_eth: add ethtool statistics Madalin Bucur
@ 2016-11-11  8:20 ` Madalin Bucur
  2016-11-11  8:20 ` [PATCH net-next v7 07/10] dpaa_eth: add trace points Madalin Bucur
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:20 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Export Frame Queue and Buffer Pool IDs through sysfs.
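
The attributes are created against the net device's struct device, so they
should show up in the interface's sysfs directory, e.g. under
/sys/class/net/<if>/. For reference, reading them would look roughly like
the sketch below; the interface name and all FQID/BPID values are made up,
only the format follows the sprintf() calls in the new file:

  $ cat /sys/class/net/eth0/fqids
  Rx error: 258
  Rx default: 259
  Tx error: 260
  Tx default confirmation: 261
  Tx confirmation (mq): 262 - 285
  Tx: 286 - 309
  $ cat /sys/class/net/eth0/bpids
  32
  33
  34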

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/Makefile       |   2 +-
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c     |   4 +
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h     |   4 +
 .../net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c   | 165 +++++++++++++++++++++
 4 files changed, 174 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c

diff --git a/drivers/net/ethernet/freescale/dpaa/Makefile b/drivers/net/ethernet/freescale/dpaa/Makefile
index 43a4cfd..bfb03d4 100644
--- a/drivers/net/ethernet/freescale/dpaa/Makefile
+++ b/drivers/net/ethernet/freescale/dpaa/Makefile
@@ -8,4 +8,4 @@ ccflags-y += -I$(FMAN)
 
 obj-$(CONFIG_FSL_DPAA_ETH) += fsl_dpa.o
 
-fsl_dpa-objs += dpaa_eth.o dpaa_ethtool.o
+fsl_dpa-objs += dpaa_eth.o dpaa_ethtool.o dpaa_eth_sysfs.o
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index d59a08c..3a610fd 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2635,6 +2635,8 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 	if (err < 0)
 		goto netdev_init_failed;
 
+	dpaa_eth_sysfs_init(&net_dev->dev);
+
 	netif_info(priv, probe, net_dev, "Probed interface %s\n",
 		   net_dev->name);
 
@@ -2680,6 +2682,8 @@ static int dpaa_remove(struct platform_device *pdev)
 
 	priv = netdev_priv(net_dev);
 
+	dpaa_eth_sysfs_remove(dev);
+
 	dev_set_drvdata(dev, NULL);
 	unregister_netdev(net_dev);
 
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
index 711fb06..44323e2 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -177,4 +177,8 @@ struct dpaa_priv {
 
 /* from dpaa_ethtool.c */
 extern const struct ethtool_ops dpaa_ethtool_ops;
+
+/* from dpaa_eth_sysfs.c */
+void dpaa_eth_sysfs_remove(struct device *dev);
+void dpaa_eth_sysfs_init(struct device *dev);
 #endif	/* __DPAA_H */
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c
new file mode 100644
index 0000000..ec75d1c
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c
@@ -0,0 +1,165 @@
+/* Copyright 2008-2016 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/io.h>
+#include <linux/of_net.h>
+#include "dpaa_eth.h"
+#include "mac.h"
+
+static ssize_t dpaa_eth_show_addr(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	struct dpaa_priv *priv = netdev_priv(to_net_dev(dev));
+	struct mac_device *mac_dev = priv->mac_dev;
+
+	if (mac_dev)
+		return sprintf(buf, "%llx",
+				(unsigned long long)mac_dev->res->start);
+	else
+		return sprintf(buf, "none");
+}
+
+static ssize_t dpaa_eth_show_fqids(struct device *dev,
+				   struct device_attribute *attr, char *buf)
+{
+	struct dpaa_priv *priv = netdev_priv(to_net_dev(dev));
+	struct dpaa_fq *prev = NULL;
+	char *prevstr = NULL;
+	struct dpaa_fq *tmp;
+	struct dpaa_fq *fq;
+	u32 first_fqid = 0;
+	u32 last_fqid = 0;
+	ssize_t bytes = 0;
+	char *str;
+	int i = 0;
+
+	list_for_each_entry_safe(fq, tmp, &priv->dpaa_fq_list, list) {
+		switch (fq->fq_type) {
+		case FQ_TYPE_RX_DEFAULT:
+			str = "Rx default";
+			break;
+		case FQ_TYPE_RX_ERROR:
+			str = "Rx error";
+			break;
+		case FQ_TYPE_TX_CONFIRM:
+			str = "Tx default confirmation";
+			break;
+		case FQ_TYPE_TX_CONF_MQ:
+			str = "Tx confirmation (mq)";
+			break;
+		case FQ_TYPE_TX_ERROR:
+			str = "Tx error";
+			break;
+		case FQ_TYPE_TX:
+			str = "Tx";
+			break;
+		default:
+			str = "Unknown";
+		}
+
+		if (prev && (abs(fq->fqid - prev->fqid) != 1 ||
+			     str != prevstr)) {
+			if (last_fqid == first_fqid)
+				bytes += sprintf(buf + bytes,
+					"%s: %d\n", prevstr, prev->fqid);
+			else
+				bytes += sprintf(buf + bytes,
+					"%s: %d - %d\n", prevstr,
+					first_fqid, last_fqid);
+		}
+
+		if (prev && abs(fq->fqid - prev->fqid) == 1 &&
+		    str == prevstr) {
+			last_fqid = fq->fqid;
+		} else {
+			first_fqid = fq->fqid;
+			last_fqid = fq->fqid;
+		}
+
+		prev = fq;
+		prevstr = str;
+		i++;
+	}
+
+	if (prev) {
+		if (last_fqid == first_fqid)
+			bytes += sprintf(buf + bytes, "%s: %d\n", prevstr,
+					prev->fqid);
+		else
+			bytes += sprintf(buf + bytes, "%s: %d - %d\n", prevstr,
+					first_fqid, last_fqid);
+	}
+
+	return bytes;
+}
+
+static ssize_t dpaa_eth_show_bpids(struct device *dev,
+				   struct device_attribute *attr, char *buf)
+{
+	struct dpaa_priv *priv = netdev_priv(to_net_dev(dev));
+	ssize_t bytes = 0;
+	int i = 0;
+
+	for (i = 0; i < DPAA_BPS_NUM; i++)
+		bytes += snprintf(buf + bytes, PAGE_SIZE - bytes, "%u\n",
+				  priv->dpaa_bps[i]->bpid);
+
+	return bytes;
+}
+
+static struct device_attribute dpaa_eth_attrs[] = {
+	__ATTR(device_addr, 0444, dpaa_eth_show_addr, NULL),
+	__ATTR(fqids, 0444, dpaa_eth_show_fqids, NULL),
+	__ATTR(bpids, 0444, dpaa_eth_show_bpids, NULL),
+};
+
+void dpaa_eth_sysfs_init(struct device *dev)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(dpaa_eth_attrs); i++)
+		if (device_create_file(dev, &dpaa_eth_attrs[i])) {
+			dev_err(dev, "Error creating sysfs file\n");
+			while (i > 0)
+				device_remove_file(dev, &dpaa_eth_attrs[--i]);
+			return;
+		}
+}
+
+void dpaa_eth_sysfs_remove(struct device *dev)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(dpaa_eth_attrs); i++)
+		device_remove_file(dev, &dpaa_eth_attrs[i]);
+}
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v7 07/10] dpaa_eth: add trace points
  2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (5 preceding siblings ...)
  2016-11-11  8:20 ` [PATCH net-next v7 06/10] dpaa_eth: add sysfs exports Madalin Bucur
@ 2016-11-11  8:20 ` Madalin Bucur
  2016-11-11  8:20 ` [PATCH net-next v7 08/10] arch/powerpc: Enable FSL_PAMU Madalin Bucur
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:20 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Add trace points on the hot processing path.
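
Three events are defined (dpaa_tx_fd, dpaa_rx_fd and dpaa_tx_conf_fd), all
grouped under the dpaa_eth trace system, so they can be enabled and read at
run time through the generic tracing interface. A minimal usage sketch,
assuming tracefs/debugfs is mounted in the usual location:

  echo 1 > /sys/kernel/debug/tracing/events/dpaa_eth/enable
  cat /sys/kernel/debug/tracing/trace_pipe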

Signed-off-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/Makefile       |   1 +
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c     |  15 +++
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h     |   1 +
 .../net/ethernet/freescale/dpaa/dpaa_eth_trace.h   | 141 +++++++++++++++++++++
 4 files changed, 158 insertions(+)
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h

diff --git a/drivers/net/ethernet/freescale/dpaa/Makefile b/drivers/net/ethernet/freescale/dpaa/Makefile
index bfb03d4..7db50bc 100644
--- a/drivers/net/ethernet/freescale/dpaa/Makefile
+++ b/drivers/net/ethernet/freescale/dpaa/Makefile
@@ -9,3 +9,4 @@ ccflags-y += -I$(FMAN)
 obj-$(CONFIG_FSL_DPAA_ETH) += fsl_dpa.o
 
 fsl_dpa-objs += dpaa_eth.o dpaa_ethtool.o dpaa_eth_sysfs.o
+CFLAGS_dpaa_eth.o := -I$(src)
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 3a610fd..375c392 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -59,6 +59,12 @@
 #include "mac.h"
 #include "dpaa_eth.h"
 
+/* CREATE_TRACE_POINTS only needs to be defined once. Other dpaa files
+ * using trace events only need to #include "dpaa_eth_trace.h"
+ */
+#define CREATE_TRACE_POINTS
+#include "dpaa_eth_trace.h"
+
 static int debug = -1;
 module_param(debug, int, 0444);
 MODULE_PARM_DESC(debug, "Module/Driver verbosity level (0=none,...,16=all)");
@@ -1868,6 +1874,9 @@ static inline int dpaa_xmit(struct dpaa_priv *priv,
 	if (fd->bpid == FSL_DPAA_BPID_INV)
 		fd->cmd |= qman_fq_fqid(priv->conf_fqs[queue]);
 
+	/* Trace this Tx fd */
+	trace_dpaa_tx_fd(priv->net_dev, egress_fq, fd);
+
 	for (i = 0; i < DPAA_ENQUEUE_RETRIES; i++) {
 		err = qman_enqueue(egress_fq, fd);
 		if (err != -EBUSY)
@@ -2102,6 +2111,9 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
 	if (!dpaa_bp)
 		return qman_cb_dqrr_consume;
 
+	/* Trace the Rx fd */
+	trace_dpaa_rx_fd(net_dev, fq, &dq->fd);
+
 	percpu_priv = this_cpu_ptr(priv->percpu_priv);
 	percpu_stats = &percpu_priv->stats;
 
@@ -2199,6 +2211,9 @@ static enum qman_cb_dqrr_result conf_dflt_dqrr(struct qman_portal *portal,
 	net_dev = ((struct dpaa_fq *)fq)->net_dev;
 	priv = netdev_priv(net_dev);
 
+	/* Trace the fd */
+	trace_dpaa_tx_conf_fd(net_dev, fq, &dq->fd);
+
 	percpu_priv = this_cpu_ptr(priv->percpu_priv);
 
 	if (dpaa_eth_napi_schedule(percpu_priv, portal))
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
index 44323e2..1f9aebf 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -37,6 +37,7 @@
 
 #include "fman.h"
 #include "mac.h"
+#include "dpaa_eth_trace.h"
 
 #define DPAA_ETH_TXQ_NUM	NR_CPUS
 
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h
new file mode 100644
index 0000000..409c1dc
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h
@@ -0,0 +1,141 @@
+/* Copyright 2013-2015 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM	dpaa_eth
+
+#if !defined(_DPAA_ETH_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _DPAA_ETH_TRACE_H
+
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include "dpaa_eth.h"
+#include <linux/tracepoint.h>
+
+#define fd_format_name(format)	{ qm_fd_##format, #format }
+#define fd_format_list	\
+	fd_format_name(contig),	\
+	fd_format_name(sg)
+
+/* This is used to declare a class of events.
+ * Individual events of this type will be defined below.
+ */
+
+/* Store details about a frame descriptor and the FQ on which it was
+ * transmitted/received.
+ */
+DECLARE_EVENT_CLASS(dpaa_eth_fd,
+	/* Trace function prototype */
+	TP_PROTO(struct net_device *netdev,
+		 struct qman_fq *fq,
+		 const struct qm_fd *fd),
+
+	/* Repeat argument list here */
+	TP_ARGS(netdev, fq, fd),
+
+	/* A structure containing the relevant information we want to record.
+	 * Declare name and type for each normal element, name, type and size
+	 * for arrays. Use __string for variable length strings.
+	 */
+	TP_STRUCT__entry(
+		__field(u32,	fqid)
+		__field(u64,	fd_addr)
+		__field(u8,	fd_format)
+		__field(u16,	fd_offset)
+		__field(u32,	fd_length)
+		__field(u32,	fd_status)
+		__string(name,	netdev->name)
+	),
+
+	/* The function that assigns values to the above declared fields */
+	TP_fast_assign(
+		__entry->fqid = fq->fqid;
+		__entry->fd_addr = qm_fd_addr_get64(fd);
+		__entry->fd_format = qm_fd_get_format(fd);
+		__entry->fd_offset = qm_fd_get_offset(fd);
+		__entry->fd_length = qm_fd_get_length(fd);
+		__entry->fd_status = fd->status;
+		__assign_str(name, netdev->name);
+	),
+
+	/* This is what gets printed when the trace event is triggered */
+	TP_printk("[%s] fqid=%d, fd: addr=0x%llx, format=%s, off=%u, len=%u, status=0x%08x",
+		  __get_str(name), __entry->fqid, __entry->fd_addr,
+		  __print_symbolic(__entry->fd_format, fd_format_list),
+		  __entry->fd_offset, __entry->fd_length, __entry->fd_status)
+);
+
+/* Now declare events of the above type. Format is:
+ * DEFINE_EVENT(class, name, proto, args), with proto and args same as for class
+ */
+
+/* Tx (egress) fd */
+DEFINE_EVENT(dpaa_eth_fd, dpaa_tx_fd,
+
+	TP_PROTO(struct net_device *netdev,
+		 struct qman_fq *fq,
+		 const struct qm_fd *fd),
+
+	TP_ARGS(netdev, fq, fd)
+);
+
+/* Rx fd */
+DEFINE_EVENT(dpaa_eth_fd, dpaa_rx_fd,
+
+	TP_PROTO(struct net_device *netdev,
+		 struct qman_fq *fq,
+		 const struct qm_fd *fd),
+
+	TP_ARGS(netdev, fq, fd)
+);
+
+/* Tx confirmation fd */
+DEFINE_EVENT(dpaa_eth_fd, dpaa_tx_conf_fd,
+
+	TP_PROTO(struct net_device *netdev,
+		 struct qman_fq *fq,
+		 const struct qm_fd *fd),
+
+	TP_ARGS(netdev, fq, fd)
+);
+
+/* If only one event of a certain type needs to be declared, use TRACE_EVENT().
+ * The syntax is the same as for DECLARE_EVENT_CLASS().
+ */
+
+#endif /* _DPAA_ETH_TRACE_H */
+
+/* This must be outside ifdef _DPAA_ETH_TRACE_H */
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE	dpaa_eth_trace
+#include <trace/define_trace.h>
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v7 08/10] arch/powerpc: Enable FSL_PAMU
  2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (6 preceding siblings ...)
  2016-11-11  8:20 ` [PATCH net-next v7 07/10] dpaa_eth: add trace points Madalin Bucur
@ 2016-11-11  8:20 ` Madalin Bucur
  2016-11-11  8:20 ` [PATCH net-next v7 09/10] arch/powerpc: Enable FSL_FMAN Madalin Bucur
  2016-11-11  8:20 ` [PATCH net-next v7 10/10] arch/powerpc: Enable dpaa_eth Madalin Bucur
  9 siblings, 0 replies; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:20 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 arch/powerpc/configs/dpaa.config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/configs/dpaa.config b/arch/powerpc/configs/dpaa.config
index efa99c0..f124ee1 100644
--- a/arch/powerpc/configs/dpaa.config
+++ b/arch/powerpc/configs/dpaa.config
@@ -1 +1,2 @@
 CONFIG_FSL_DPAA=y
+CONFIG_FSL_PAMU=y
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v7 09/10] arch/powerpc: Enable FSL_FMAN
  2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (7 preceding siblings ...)
  2016-11-11  8:20 ` [PATCH net-next v7 08/10] arch/powerpc: Enable FSL_PAMU Madalin Bucur
@ 2016-11-11  8:20 ` Madalin Bucur
  2016-11-11  8:20 ` [PATCH net-next v7 10/10] arch/powerpc: Enable dpaa_eth Madalin Bucur
  9 siblings, 0 replies; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:20 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 arch/powerpc/configs/dpaa.config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/configs/dpaa.config b/arch/powerpc/configs/dpaa.config
index f124ee1..9ad9bc0 100644
--- a/arch/powerpc/configs/dpaa.config
+++ b/arch/powerpc/configs/dpaa.config
@@ -1,2 +1,3 @@
 CONFIG_FSL_DPAA=y
 CONFIG_FSL_PAMU=y
+CONFIG_FSL_FMAN=y
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v7 10/10] arch/powerpc: Enable dpaa_eth
  2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (8 preceding siblings ...)
  2016-11-11  8:20 ` [PATCH net-next v7 09/10] arch/powerpc: Enable FSL_FMAN Madalin Bucur
@ 2016-11-11  8:20 ` Madalin Bucur
  9 siblings, 0 replies; 14+ messages in thread
From: Madalin Bucur @ 2016-11-11  8:20 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 arch/powerpc/configs/dpaa.config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/configs/dpaa.config b/arch/powerpc/configs/dpaa.config
index 9ad9bc0..2fe76f5 100644
--- a/arch/powerpc/configs/dpaa.config
+++ b/arch/powerpc/configs/dpaa.config
@@ -1,3 +1,4 @@
 CONFIG_FSL_DPAA=y
 CONFIG_FSL_PAMU=y
 CONFIG_FSL_FMAN=y
+CONFIG_FSL_DPAA_ETH=y
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next v7 03/10] dpaa_eth: add option to use one buffer pool set
  2016-11-11  8:20 ` [PATCH net-next v7 03/10] dpaa_eth: add option to use one buffer pool set Madalin Bucur
@ 2016-11-13 17:46   ` David Miller
  2016-11-14 10:25     ` Madalin-Cristian Bucur
  0 siblings, 1 reply; 14+ messages in thread
From: David Miller @ 2016-11-13 17:46 UTC (permalink / raw)
  To: madalin.bucur
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle,
	joakim.tjernlund

From: Madalin Bucur <madalin.bucur@nxp.com>
Date: Fri, 11 Nov 2016 10:20:00 +0200

> @@ -8,3 +8,12 @@ menuconfig FSL_DPAA_ETH
>  	  supporting the Freescale QorIQ chips.
>  	  Depends on Freescale Buffer Manager and Queue Manager
>  	  driver and Frame Manager Driver.
> +
> +if FSL_DPAA_ETH
> +config FSL_DPAA_ETH_COMMON_BPOOL
> +	bool "Use a common buffer pool set for all the interfaces"
> +	---help---
> +	  The DPAA Ethernet netdevices require buffer pools for storing the buffers
> +	  used by the FMan hardware for reception. One can use a single buffer pool
> +	  set for all interfaces or a dedicated buffer pool set for each interface.
> +endif # FSL_DPAA_ETH

This in no way belongs in Kconfig.  If you want to support this,
support it with a run time configuration choice via ethtool flags
or similar.  Do not use debugfs, do not use sysfs, do not use
module options.

If you put it in Kconfig, distributions will have to pick one way or
another which means that users who want the other choice lose.  This
never works.
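
For reference, a run-time switch of this kind is usually exposed through
ethtool private flags. The sketch below is not part of the submitted
driver: the flag name and the use_common_bpool field are hypothetical and
the dpaa_eth.h context is assumed, but it shows the shape of such a knob:

	static const char dpaa_priv_flags_strings[][ETH_GSTRING_LEN] = {
		"common-bpool",		/* hypothetical flag name */
	};

	static u32 dpaa_get_priv_flags(struct net_device *net_dev)
	{
		struct dpaa_priv *priv = netdev_priv(net_dev);

		/* use_common_bpool would be a new driver field */
		return priv->use_common_bpool ? BIT(0) : 0;
	}

	static int dpaa_set_priv_flags(struct net_device *net_dev, u32 flags)
	{
		struct dpaa_priv *priv = netdev_priv(net_dev);

		/* a real implementation would re-seed the buffer pools here */
		priv->use_common_bpool = !!(flags & BIT(0));
		return 0;
	}

	/* hooked up via .get_priv_flags/.set_priv_flags in dpaa_ethtool_ops,
	 * with the flag names reported for ETH_SS_PRIV_FLAGS from
	 * get_strings() and counted in get_sset_count().
	 */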

^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [PATCH net-next v7 03/10] dpaa_eth: add option to use one buffer pool set
  2016-11-13 17:46   ` David Miller
@ 2016-11-14 10:25     ` Madalin-Cristian Bucur
  2016-11-14 17:31       ` David Miller
  0 siblings, 1 reply; 14+ messages in thread
From: Madalin-Cristian Bucur @ 2016-11-14 10:25 UTC (permalink / raw)
  To: David Miller
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle,
	joakim.tjernlund

> From: David Miller [mailto:davem@davemloft.net]
> Sent: Sunday, November 13, 2016 7:46 PM
> 
> From: Madalin Bucur <madalin.bucur@nxp.com>
> Date: Fri, 11 Nov 2016 10:20:00 +0200
> 
> > @@ -8,3 +8,12 @@ menuconfig FSL_DPAA_ETH
> >  	  supporting the Freescale QorIQ chips.
> >  	  Depends on Freescale Buffer Manager and Queue Manager
> >  	  driver and Frame Manager Driver.
> > +
> > +if FSL_DPAA_ETH
> > +config FSL_DPAA_ETH_COMMON_BPOOL
> > +	bool "Use a common buffer pool set for all the interfaces"
> > +	---help---
> > +	  The DPAA Ethernet netdevices require buffer pools for storing the
> buffers
> > +	  used by the FMan hardware for reception. One can use a single
> buffer pool
> > +	  set for all interfaces or a dedicated buffer pool set for each
> interface.
> > +endif # FSL_DPAA_ETH
> 
> This in no way belongs in Kconfig.  If you want to support this,
> support it with a run time configuration choice via ethtool flags
> or similar.  Do not use debugfs, do not use sysfs, do not use
> module options.
> 
> If you put it in Kconfig, distributions will have to pick one way or
> another which means that users who want the other choice lose.  This
> never works.

I've introduced this Kconfig option as a backwards compatible option, to
be able to run comparative tests between the independent buffer pool setup
and the previous common buffer pool setup. There are not many reasons
to use the same buffer pool besides "having the old setup": the memory
saving is marginal and, in all other aspects, the separate buffer pool
setup fares better.

I'll remove this patch from the next submission. Should anyone care for
this, I can add an entry to the feature backlog for runtime support, but
it will be quite low in priority.

Thank you for your review. 

Madalin

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next v7 03/10] dpaa_eth: add option to use one buffer pool set
  2016-11-14 10:25     ` Madalin-Cristian Bucur
@ 2016-11-14 17:31       ` David Miller
  0 siblings, 0 replies; 14+ messages in thread
From: David Miller @ 2016-11-14 17:31 UTC (permalink / raw)
  To: madalin.bucur
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle,
	joakim.tjernlund

From: Madalin-Cristian Bucur <madalin.bucur@nxp.com>
Date: Mon, 14 Nov 2016 10:25:13 +0000

> I've introduced this Kconfig option as a backwards compatible option, to
> be able to run comparative tests between the independent buffer pool setup
> and the previous common buffer pool setup. There are not many reasons
> to use the same buffer pool besides "having the old setup": the memory
> saving is marginal and, in all other aspects, the separate buffer pool
> setup fares better.
> 
> I'll remove this patch from the next submission. Should anyone care for
> this, I can add an entry to the feature backlog for runtime support, but
> it will be quite low in priority.

If it's a debugging feature then that's certainly how this should be
handled.

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2016-11-14 17:31 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-11  8:19 [PATCH net-next v7 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
2016-11-11  8:19 ` [PATCH net-next v7 01/10] devres: add devm_alloc_percpu() Madalin Bucur
2016-11-11  8:19 ` [PATCH net-next v7 02/10] dpaa_eth: add support for DPAA Ethernet Madalin Bucur
2016-11-11  8:20 ` [PATCH net-next v7 03/10] dpaa_eth: add option to use one buffer pool set Madalin Bucur
2016-11-13 17:46   ` David Miller
2016-11-14 10:25     ` Madalin-Cristian Bucur
2016-11-14 17:31       ` David Miller
2016-11-11  8:20 ` [PATCH net-next v7 04/10] dpaa_eth: add ethtool functionality Madalin Bucur
2016-11-11  8:20 ` [PATCH net-next v7 05/10] dpaa_eth: add ethtool statistics Madalin Bucur
2016-11-11  8:20 ` [PATCH net-next v7 06/10] dpaa_eth: add sysfs exports Madalin Bucur
2016-11-11  8:20 ` [PATCH net-next v7 07/10] dpaa_eth: add trace points Madalin Bucur
2016-11-11  8:20 ` [PATCH net-next v7 08/10] arch/powerpc: Enable FSL_PAMU Madalin Bucur
2016-11-11  8:20 ` [PATCH net-next v7 09/10] arch/powerpc: Enable FSL_FMAN Madalin Bucur
2016-11-11  8:20 ` [PATCH net-next v7 10/10] arch/powerpc: Enable dpaa_eth Madalin Bucur
