* [PATCH net-next v6 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver
@ 2016-11-02 20:17 Madalin Bucur
  2016-11-02 20:17 ` [PATCH net-next v6 01/10] devres: add devm_alloc_percpu() Madalin Bucur
                   ` (9 more replies)
  0 siblings, 10 replies; 29+ messages in thread
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

This patch series adds the Ethernet driver for the Freescale
QorIQ Data Path Acceleration Architecture (DPAA).

This version includes changes following the feedback received
on previous versions from Eric Dumazet, Bob Cochran, Joe Perches,
Paul Bolle, Joakim Tjernlund, Scott Wood, David Miller - thank you.

Together with the driver, a managed version of alloc_percpu()
is provided that simplifies the release of per-CPU memory.

The Freescale DPAA architecture consists of a series of hardware
blocks that support Ethernet connectivity. The Ethernet driver
depends upon the following drivers that are currently in the Linux
kernel:
 - Peripheral Access Memory Unit (PAMU)
    drivers/iommu/fsl_*
 - Frame Manager (FMan) added in v4.4
    drivers/net/ethernet/freescale/fman
 - Queue Manager (QMan), Buffer Manager (BMan) added in v4.9-rc1
    drivers/soc/fsl/qbman

dpaa_eth interfaces mapping to FMan MACs:

  dpaa_eth       /eth0\     ...       /ethN\
  driver        |      |             |      |
  -------------   ----   -----------   ----   -------------
       -Ports  / Tx  Rx \    ...    / Tx  Rx \
  FMan        |          |         |          |
       -MACs  |   MAC0   |         |   MACN   |
             /   dtsec0   \  ...  /   dtsecN   \ (or tgec)
            /              \     /              \(or memac)
  ---------  --------------  ---  --------------  ---------
      FMan, FMan Port, FMan SP, FMan MURAM drivers
  ---------------------------------------------------------
      FMan HW blocks: MURAM, MACs, Ports, SP
  ---------------------------------------------------------

dpaa_eth relation to QMan, FMan:
              ________________________________
  dpaa_eth   /            eth0                \
  driver    /                                  \
  ---------   -^-   -^-   -^-   ---    ---------
  QMan driver / \   / \   / \  \   /  | BMan    |
             |Rx | |Rx | |Tx | |Tx |  | driver  |
  ---------  |Dfl| |Err| |Cnf| |FQs|  |         |
  QMan HW    |FQ | |FQ | |FQ | |   |  |         |
             /   \ /   \ /   \  \ /   |         |
  ---------   ---   ---   ---   -v-    ---------
            |        FMan QMI         |         |
            | FMan HW       FMan BMI  | BMan HW |
              -----------------------   --------

where the acronyms used above (and in the code) are:
DPAA = Data Path Acceleration Architecture
FMan = DPAA Frame Manager
QMan = DPAA Queue Manager
BMan = DPAA Buffer Manager
QMI = QMan interface in FMan
BMI = BMan interface in FMan
FMan SP = FMan Storage Profiles
MURAM = Multi-user RAM in FMan
FQ = QMan Frame Queue
Rx Dfl FQ = default reception FQ
Rx Err FQ = Rx error frames FQ
Tx Cnf FQ = Tx confirmation FQ
Tx FQs = transmission frame queues
dtsec = datapath three-speed Ethernet controller (10/100/1000 Mbps)
tgec = ten gigabit Ethernet controller (10 Gbps)
memac = multirate Ethernet MAC (10/100/1000/10000 Mbps)

Changes from v5:
 - adapt to the latest Q/BMan drivers API
 - use build_skb() on Rx path instead of buffer pool refill path
 - proper support for multiple buffer pools
 - align function, variable names, code cleanup
 - driver file structure cleanup

Changes from v4:
 - addressed feedback from Scott Wood and Joe Perches
 - fixed spelling
 - fixed leak of uninitialized stack to userspace
 - fix prints
 - replace raw_cpu_ptr() with this_cpu_ptr()
 - remove _s from the end of structure names
 - remove underscores at start of functions, goto labels
 - remove likely in error paths
 - use container_of() instead of open casts
 - remove priv from the driver name
 - move return type on same line with function name
 - drop DPA_READ_SKB_PTR/DPA_WRITE_SKB_PTR

Changes from v3:
 - removed bogus delay and comment in .ndo_stop implementation
 - addressed minor issues reported by David Miller

Changes from v2:
 - removed debugfs, moved exports to ethtool statistics
 - removed congestion groups Kconfig params

Changes from v1:
 - bpool level Kconfig options removed
 - print format using pr_fmt, cleaned up prints
 - __hot/__cold removed
 - gratuitous unlikely() removed
 - code style aligned, consistent spacing for declarations
 - comment formatting

The changes are also available in the public git repository at
git://git.freescale.com/ppc/upstream/linux.git on the branch dpaa_eth-next.

Madalin Bucur (10):
  devres: add devm_alloc_percpu()
  dpaa_eth: add support for DPAA Ethernet
  dpaa_eth: add option to use one buffer pool set
  dpaa_eth: add ethtool functionality
  dpaa_eth: add ethtool statistics
  dpaa_eth: add sysfs exports
  dpaa_eth: add trace points
  arch/powerpc: Enable FSL_PAMU
  arch/powerpc: Enable FSL_FMAN
  arch/powerpc: Enable dpaa_eth

 Documentation/driver-model/devres.txt              |    4 +
 arch/powerpc/configs/dpaa.config                   |    3 +
 drivers/base/devres.c                              |   66 +
 drivers/net/ethernet/freescale/Kconfig             |    2 +
 drivers/net/ethernet/freescale/Makefile            |    1 +
 drivers/net/ethernet/freescale/dpaa/Kconfig        |   27 +
 drivers/net/ethernet/freescale/dpaa/Makefile       |   12 +
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c     | 2833 ++++++++++++++++++++
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h     |  185 ++
 .../net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c   |  165 ++
 .../net/ethernet/freescale/dpaa/dpaa_eth_trace.h   |  141 +
 drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c |  417 +++
 include/linux/device.h                             |   19 +
 13 files changed, 3875 insertions(+)
 create mode 100644 drivers/net/ethernet/freescale/dpaa/Kconfig
 create mode 100644 drivers/net/ethernet/freescale/dpaa/Makefile
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c

-- 
2.1.0


* [PATCH net-next v6 01/10] devres: add devm_alloc_percpu()
  2016-11-02 20:17 [PATCH net-next v6 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
@ 2016-11-02 20:17 ` Madalin Bucur
  2016-11-02 20:17 ` [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet Madalin Bucur
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Introduce managed counterparts for alloc_percpu() and free_percpu().
Add devm_alloc_percpu() and devm_free_percpu() to the list of managed
interfaces.
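
Per-CPU memory allocated this way is freed automatically on driver
detach. A minimal usage sketch (the device pointer and the per-CPU
statistics type below are placeholders, not part of this patch):

	struct foo_percpu_stats __percpu *stats;

	stats = devm_alloc_percpu(dev, *stats);
	if (!stats)
		return -ENOMEM;

	/* No explicit free_percpu() is needed on error paths or in
	 * .remove(); devres calls free_percpu() on driver detach.
	 */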

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 Documentation/driver-model/devres.txt |  4 +++
 drivers/base/devres.c                 | 66 +++++++++++++++++++++++++++++++++++
 include/linux/device.h                | 19 ++++++++++
 3 files changed, 89 insertions(+)

diff --git a/Documentation/driver-model/devres.txt b/Documentation/driver-model/devres.txt
index 1670708..ca9d1eb 100644
--- a/Documentation/driver-model/devres.txt
+++ b/Documentation/driver-model/devres.txt
@@ -332,6 +332,10 @@ MEM
 MFD
  devm_mfd_add_devices()
 
+PER-CPU MEM
+  devm_alloc_percpu()
+  devm_free_percpu()
+
 PCI
   pcim_enable_device()	: after success, all PCI ops become managed
   pcim_pin_device()	: keep PCI device enabled after release
diff --git a/drivers/base/devres.c b/drivers/base/devres.c
index 8fc654f..71d5770 100644
--- a/drivers/base/devres.c
+++ b/drivers/base/devres.c
@@ -10,6 +10,7 @@
 #include <linux/device.h>
 #include <linux/module.h>
 #include <linux/slab.h>
+#include <linux/percpu.h>
 
 #include "base.h"
 
@@ -985,3 +986,68 @@ void devm_free_pages(struct device *dev, unsigned long addr)
 			       &devres));
 }
 EXPORT_SYMBOL_GPL(devm_free_pages);
+
+static void devm_percpu_release(struct device *dev, void *pdata)
+{
+	void __percpu *p;
+
+	p = *(void __percpu **)pdata;
+	free_percpu(p);
+}
+
+static int devm_percpu_match(struct device *dev, void *data, void *p)
+{
+	struct devres *devr = container_of(data, struct devres, data);
+
+	return *(void **)devr->data == p;
+}
+
+/**
+ * __devm_alloc_percpu - Resource-managed alloc_percpu
+ * @dev: Device to allocate per-cpu memory for
+ * @size: Size of per-cpu memory to allocate
+ * @align: Alignment of per-cpu memory to allocate
+ *
+ * Managed alloc_percpu. Per-cpu memory allocated with this function is
+ * automatically freed on driver detach.
+ *
+ * RETURNS:
+ * Pointer to allocated memory on success, NULL on failure.
+ */
+void __percpu *__devm_alloc_percpu(struct device *dev, size_t size,
+		size_t align)
+{
+	void *p;
+	void __percpu *pcpu;
+
+	pcpu = __alloc_percpu(size, align);
+	if (!pcpu)
+		return NULL;
+
+	p = devres_alloc(devm_percpu_release, sizeof(void *), GFP_KERNEL);
+	if (!p) {
+		free_percpu(pcpu);
+		return NULL;
+	}
+
+	*(void __percpu **)p = pcpu;
+
+	devres_add(dev, p);
+
+	return pcpu;
+}
+EXPORT_SYMBOL_GPL(__devm_alloc_percpu);
+
+/**
+ * devm_free_percpu - Resource-managed free_percpu
+ * @dev: Device this memory belongs to
+ * @pdata: Per-cpu memory to free
+ *
+ * Free memory allocated with devm_alloc_percpu().
+ */
+void devm_free_percpu(struct device *dev, void __percpu *pdata)
+{
+	WARN_ON(devres_destroy(dev, devm_percpu_release, devm_percpu_match,
+			       (void *)pdata));
+}
+EXPORT_SYMBOL_GPL(devm_free_percpu);
diff --git a/include/linux/device.h b/include/linux/device.h
index bc41e87..043ffce 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -698,6 +698,25 @@ static inline int devm_add_action_or_reset(struct device *dev,
 	return ret;
 }
 
+/**
+ * devm_alloc_percpu - Resource-managed alloc_percpu
+ * @dev: Device to allocate per-cpu memory for
+ * @type: Type to allocate per-cpu memory for
+ *
+ * Managed alloc_percpu. Per-cpu memory allocated with this function is
+ * automatically freed on driver detach.
+ *
+ * RETURNS:
+ * Pointer to allocated memory on success, NULL on failure.
+ */
+#define devm_alloc_percpu(dev, type)      \
+	(typeof(type) __percpu *)__devm_alloc_percpu(dev, sizeof(type), \
+						     __alignof__(type))
+
+void __percpu *__devm_alloc_percpu(struct device *dev, size_t size,
+				   size_t align);
+void devm_free_percpu(struct device *dev, void __percpu *pdata);
+
 struct device_dma_parameters {
 	/*
 	 * a low level driver may set these to teach IOMMU code about
-- 
2.1.0


* [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-02 20:17 [PATCH net-next v6 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
  2016-11-02 20:17 ` [PATCH net-next v6 01/10] devres: add devm_alloc_percpu() Madalin Bucur
@ 2016-11-02 20:17 ` Madalin Bucur
  2016-11-03 19:58   ` David Miller
  2016-11-07 16:25   ` Joakim Tjernlund
  2016-11-02 20:17 ` [PATCH net-next v6 03/10] dpaa_eth: add option to use one buffer pool set Madalin Bucur
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 29+ messages in thread
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

This introduces the Freescale Data Path Acceleration Architecture
(DPAA) Ethernet driver (dpaa_eth) that builds upon the DPAA QMan,
BMan, PAMU and FMan drivers to deliver Ethernet connectivity on
the Freescale DPAA QorIQ platforms.

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/Kconfig         |    2 +
 drivers/net/ethernet/freescale/Makefile        |    1 +
 drivers/net/ethernet/freescale/dpaa/Kconfig    |   21 +
 drivers/net/ethernet/freescale/dpaa/Makefile   |   11 +
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 2739 ++++++++++++++++++++++++
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h |  144 ++
 6 files changed, 2918 insertions(+)
 create mode 100644 drivers/net/ethernet/freescale/dpaa/Kconfig
 create mode 100644 drivers/net/ethernet/freescale/dpaa/Makefile
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h

diff --git a/drivers/net/ethernet/freescale/Kconfig b/drivers/net/ethernet/freescale/Kconfig
index d1ca45f..aa3f615 100644
--- a/drivers/net/ethernet/freescale/Kconfig
+++ b/drivers/net/ethernet/freescale/Kconfig
@@ -93,4 +93,6 @@ config GIANFAR
 	  and MPC86xx family of chips, the eTSEC on LS1021A and the FEC
 	  on the 8540.
 
+source "drivers/net/ethernet/freescale/dpaa/Kconfig"
+
 endif # NET_VENDOR_FREESCALE
diff --git a/drivers/net/ethernet/freescale/Makefile b/drivers/net/ethernet/freescale/Makefile
index cbe21dc..4a13115 100644
--- a/drivers/net/ethernet/freescale/Makefile
+++ b/drivers/net/ethernet/freescale/Makefile
@@ -22,3 +22,4 @@ obj-$(CONFIG_UCC_GETH) += ucc_geth_driver.o
 ucc_geth_driver-objs := ucc_geth.o ucc_geth_ethtool.o
 
 obj-$(CONFIG_FSL_FMAN) += fman/
+obj-$(CONFIG_FSL_DPAA_ETH) += dpaa/
diff --git a/drivers/net/ethernet/freescale/dpaa/Kconfig b/drivers/net/ethernet/freescale/dpaa/Kconfig
new file mode 100644
index 0000000..670e039
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/Kconfig
@@ -0,0 +1,21 @@
+menuconfig FSL_DPAA_ETH
+	tristate "DPAA Ethernet"
+	depends on FSL_SOC && FSL_DPAA && FSL_FMAN
+	select PHYLIB
+	select FSL_FMAN_MAC
+	---help---
+	  Data Path Acceleration Architecture Ethernet driver,
+	  supporting the Freescale QorIQ chips.
+	  Depends on the Freescale Buffer Manager, Queue Manager and
+	  Frame Manager drivers.
+
+if FSL_DPAA_ETH
+
+config FSL_DPAA_ETH_FRIENDLY_IF_NAME
+	bool "Use fmX-macY names for the DPAA interfaces"
+	default y
+	---help---
+	  A DPAA Ethernet netdevice is created for each FMan port available
+	  on the board. Enable this to get interface names derived from
+	  the underlying FMan hardware for easy identification.
+endif # FSL_DPAA_ETH
diff --git a/drivers/net/ethernet/freescale/dpaa/Makefile b/drivers/net/ethernet/freescale/dpaa/Makefile
new file mode 100644
index 0000000..fc76029
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/Makefile
@@ -0,0 +1,11 @@
+#
+# Makefile for the Freescale DPAA Ethernet controllers
+#
+
+# Include FMan headers
+FMAN        = $(srctree)/drivers/net/ethernet/freescale/fman
+ccflags-y += -I$(FMAN)
+
+obj-$(CONFIG_FSL_DPAA_ETH) += fsl_dpa.o
+
+fsl_dpa-objs += dpaa_eth.o
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
new file mode 100644
index 0000000..55e89b7
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -0,0 +1,2739 @@
+/* Copyright 2008 - 2016 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/of_platform.h>
+#include <linux/of_mdio.h>
+#include <linux/of_net.h>
+#include <linux/io.h>
+#include <linux/if_arp.h>
+#include <linux/if_vlan.h>
+#include <linux/icmp.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/udp.h>
+#include <linux/tcp.h>
+#include <linux/net.h>
+#include <linux/skbuff.h>
+#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
+#include <linux/highmem.h>
+#include <linux/percpu.h>
+#include <linux/dma-mapping.h>
+#include <linux/sort.h>
+#include <soc/fsl/bman.h>
+#include <soc/fsl/qman.h>
+
+#include "fman.h"
+#include "fman_port.h"
+#include "mac.h"
+#include "dpaa_eth.h"
+
+static int debug = -1;
+module_param(debug, int, S_IRUGO);
+MODULE_PARM_DESC(debug, "Module/Driver verbosity level (0=none,...,16=all)");
+
+static u16 tx_timeout = 1000;
+module_param(tx_timeout, ushort, S_IRUGO);
+MODULE_PARM_DESC(tx_timeout, "The Tx timeout in ms");
+
+#define FM_FD_STAT_RX_ERRORS						\
+	(FM_FD_ERR_DMA | FM_FD_ERR_PHYSICAL	| \
+	 FM_FD_ERR_SIZE | FM_FD_ERR_CLS_DISCARD | \
+	 FM_FD_ERR_EXTRACTION | FM_FD_ERR_NO_SCHEME	| \
+	 FM_FD_ERR_PRS_TIMEOUT | FM_FD_ERR_PRS_ILL_INSTRUCT | \
+	 FM_FD_ERR_PRS_HDR_ERR)
+
+#define FM_FD_STAT_TX_ERRORS \
+	(FM_FD_ERR_UNSUPPORTED_FORMAT | \
+	 FM_FD_ERR_LENGTH | FM_FD_ERR_DMA)
+
+#define DPAA_MSG_DEFAULT (NETIF_MSG_DRV | NETIF_MSG_PROBE | \
+			  NETIF_MSG_LINK | NETIF_MSG_IFUP | \
+			  NETIF_MSG_IFDOWN)
+
+#define DPAA_INGRESS_CS_THRESHOLD 0x10000000
+/* Ingress congestion threshold on FMan ports
+ * The size in bytes of the ingress tail-drop threshold on FMan ports.
+ * Traffic piling up above this value will be rejected by QMan and discarded
+ * by FMan.
+ */
+
+/* Size in bytes of the FQ taildrop threshold */
+#define DPAA_FQ_TD 0x200000
+
+#define DPAA_CS_THRESHOLD_1G 0x06000000
+/* Egress congestion threshold on 1G ports, range 0x1000 .. 0x10000000
+ * The size in bytes of the egress Congestion State notification threshold on
+ * 1G ports. The 1G dTSECs can quite easily be flooded by cores doing Tx in a
+ * tight loop (e.g. by sending UDP datagrams at "while(1) speed"),
+ * and the larger the frame size, the more acute the problem.
+ * So we have to find a balance between these factors:
+ * - avoiding the device staying congested for a prolonged time (risking
+ *   the netdev watchdog to fire - see also the tx_timeout module param);
+ * - affecting performance of protocols such as TCP, which otherwise
+ *   behave well under the congestion notification mechanism;
+ * - preventing the Tx cores from tightly-looping (as if the congestion
+ *   threshold was too low to be effective);
+ * - running out of memory if the CS threshold is set too high.
+ */
+
+#define DPAA_CS_THRESHOLD_10G 0x10000000
+/* The size in bytes of the egress Congestion State notification threshold on
+ * 10G ports, range 0x1000 .. 0x10000000
+ */
+
+/* Largest value that the FQD's OAL field can hold */
+#define FSL_QMAN_MAX_OAL	127
+
+/* Default alignment for start of data in an Rx FD */
+#define DPAA_FD_DATA_ALIGNMENT  16
+
+/* Values for the L3R field of the FM Parse Results
+ */
+/* L3 Type field: First IP Present IPv4 */
+#define FM_L3_PARSE_RESULT_IPV4	0x8000
+/* L3 Type field: First IP Present IPv6 */
+#define FM_L3_PARSE_RESULT_IPV6	0x4000
+/* Values for the L4R field of the FM Parse Results */
+/* L4 Type field: UDP */
+#define FM_L4_PARSE_RESULT_UDP	0x40
+/* L4 Type field: TCP */
+#define FM_L4_PARSE_RESULT_TCP	0x20
+
+#define DPAA_SGT_MAX_ENTRIES 16 /* maximum number of entries in SG Table */
+#define DPAA_BUFF_RELEASE_MAX 8 /* maximum number of buffers released at once */
+
+#define FSL_DPAA_BPID_INV		0xff
+#define FSL_DPAA_ETH_MAX_BUF_COUNT	128
+#define FSL_DPAA_ETH_REFILL_THRESHOLD	80
+
+#define DPAA_TX_PRIV_DATA_SIZE	16
+#define DPAA_PARSE_RESULTS_SIZE sizeof(struct fman_prs_result)
+#define DPAA_TIME_STAMP_SIZE 8
+#define DPAA_HASH_RESULTS_SIZE 8
+#define DPAA_RX_PRIV_DATA_SIZE	(u16)(DPAA_TX_PRIV_DATA_SIZE + \
+					dpaa_rx_extra_headroom)
+
+#define DPAA_ETH_RX_QUEUES	128
+
+#define DPAA_ENQUEUE_RETRIES	100000
+
+enum port_type {RX, TX};
+
+struct fm_port_fqs {
+	struct dpaa_fq *tx_defq;
+	struct dpaa_fq *tx_errq;
+	struct dpaa_fq *rx_defq;
+	struct dpaa_fq *rx_errq;
+};
+
+/* All the dpa bps in use at any moment */
+static struct dpaa_bp *dpaa_bp_array[BM_MAX_NUM_OF_POOLS];
+
+/* The raw buffer size must be cacheline aligned. Normally we use 2K buffers */
+#define DPAA_BP_RAW_SIZE 2048
+/* When using more than one buffer pool, the raw sizes are as follows:
+ * 1 bp: 4KB
+ * 2 bp: 2KB, 4KB
+ * 3 bp: 1KB, 2KB, 4KB
+ * 4 bp: 1KB, 2KB, 4KB, 8KB
+ */
+static inline size_t bpool_buffer_raw_size(u8 index, u8 cnt)
+{
+	u8 i;
+	size_t res = DPAA_BP_RAW_SIZE / 2;
+
+	for (i = (cnt < 3) ? cnt : 3; i < 3 + index; i++)
+		res *= 2;
+	return res;
+}
+
+/* FMan-DMA requires 16-byte alignment for Rx buffers, but SKB_DATA_ALIGN is
+ * even stronger (SMP_CACHE_BYTES-aligned), so that requirement is already
+ * met via SKB_WITH_OVERHEAD(). We can't rely on netdev_alloc_frag() giving us
+ * half-page-aligned buffers, so we reserve some more space for start-of-buffer
+ * alignment.
+ */
+#define dpaa_bp_size(raw_size) SKB_WITH_OVERHEAD(raw_size - SMP_CACHE_BYTES)
+
+static int dpaa_max_frm;
+
+static int dpaa_rx_extra_headroom;
+
+#define dpaa_get_max_mtu()	\
+	(dpaa_max_frm - (VLAN_ETH_HLEN + ETH_FCS_LEN))
+
+static int dpaa_netdev_init(struct net_device *net_dev,
+			    const struct net_device_ops *dpaa_ops,
+			    u16 tx_timeout)
+{
+	int i, err;
+	struct dpaa_priv *priv = netdev_priv(net_dev);
+	struct dpaa_percpu_priv *percpu_priv;
+	const u8 *mac_addr;
+	struct device *dev = net_dev->dev.parent;
+
+	/* Although we access another CPU's private data here
+	 * we do it at initialization so it is safe
+	 */
+	for_each_possible_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+		percpu_priv->net_dev = net_dev;
+	}
+
+	net_dev->netdev_ops = dpaa_ops;
+	mac_addr = priv->mac_dev->addr;
+
+	net_dev->mem_start = priv->mac_dev->res->start;
+	net_dev->mem_end = priv->mac_dev->res->end;
+
+	net_dev->hw_features |= (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+				 NETIF_F_LLTX);
+
+	net_dev->hw_features |= NETIF_F_SG | NETIF_F_HIGHDMA;
+	/* The kernel enables GSO automatically if we declare NETIF_F_SG.
+	 * For conformity, we still declare GSO explicitly.
+	 */
+	net_dev->features |= NETIF_F_GSO;
+
+	net_dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+	/* we do not want shared skbs on TX */
+	net_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+
+	net_dev->features |= net_dev->hw_features;
+	net_dev->vlan_features = net_dev->features;
+
+	memcpy(net_dev->perm_addr, mac_addr, net_dev->addr_len);
+	memcpy(net_dev->dev_addr, mac_addr, net_dev->addr_len);
+
+	net_dev->needed_headroom = priv->tx_headroom;
+	net_dev->watchdog_timeo = msecs_to_jiffies(tx_timeout);
+
+	/* start without the RUNNING flag, phylib controls it later */
+	netif_carrier_off(net_dev);
+
+	err = register_netdev(net_dev);
+	if (err < 0) {
+		dev_err(dev, "register_netdev() = %d\n", err);
+		return err;
+	}
+
+	return 0;
+}
+
+static int dpaa_stop(struct net_device *net_dev)
+{
+	int i, err, error;
+	struct dpaa_priv *priv;
+	struct mac_device *mac_dev;
+
+	priv = netdev_priv(net_dev);
+	mac_dev = priv->mac_dev;
+
+	netif_tx_stop_all_queues(net_dev);
+	/* Allow the Fman (Tx) port to process in-flight frames before we
+	 * try switching it off.
+	 */
+	usleep_range(5000, 10000);
+
+	err = mac_dev->stop(mac_dev);
+	if (err < 0)
+		netif_err(priv, ifdown, net_dev, "mac_dev->stop() = %d\n",
+			  err);
+
+	for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++) {
+		error = fman_port_disable(mac_dev->port[i]);
+		if (error)
+			err = error;
+	}
+
+	if (net_dev->phydev)
+		phy_disconnect(net_dev->phydev);
+	net_dev->phydev = NULL;
+
+	return err;
+}
+
+static void dpaa_tx_timeout(struct net_device *net_dev)
+{
+	const struct dpaa_priv	*priv;
+	struct dpaa_percpu_priv *percpu_priv;
+
+	priv = netdev_priv(net_dev);
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+
+	netif_crit(priv, timer, net_dev, "Transmit timeout latency: %u ms\n",
+		   jiffies_to_msecs(jiffies - dev_trans_start(net_dev)));
+
+	percpu_priv->stats.tx_errors++;
+}
+
+/* Calculates the statistics for the given device by adding the statistics
+ * collected by each CPU.
+ */
+static struct rtnl_link_stats64 *dpaa_get_stats64(struct net_device *net_dev,
+						  struct rtnl_link_stats64 *s)
+{
+	struct dpaa_priv *priv = netdev_priv(net_dev);
+	u64 *cpustats;
+	u64 *netstats = (u64 *)s;
+	int i, j;
+	struct dpaa_percpu_priv *percpu_priv;
+	int numstats = sizeof(struct rtnl_link_stats64) / sizeof(u64);
+
+	for_each_possible_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+
+		cpustats = (u64 *)&percpu_priv->stats;
+
+		for (j = 0; j < numstats; j++)
+			netstats[j] += cpustats[j];
+	}
+
+	return s;
+}
+
+static int dpaa_change_mtu(struct net_device *net_dev, int new_mtu)
+{
+	const int max_mtu = dpaa_get_max_mtu();
+
+	/* Make sure we don't exceed the Ethernet controller's MAXFRM */
+	if (new_mtu < 68 || new_mtu > max_mtu) {
+		netdev_err(net_dev, "Invalid L3 mtu %d (must be between %d and %d).\n",
+			   new_mtu, 68, max_mtu);
+		return -EINVAL;
+	}
+	net_dev->mtu = new_mtu;
+
+	return 0;
+}
+
+static int dpaa_set_features(struct net_device *dev, netdev_features_t features)
+{
+	/* Not much to do here for now */
+	dev->features = features;
+	return 0;
+}
+
+static netdev_features_t dpaa_fix_features(struct net_device *dev,
+					   netdev_features_t features)
+{
+	netdev_features_t unsupported_features = 0;
+
+	/* In theory we should never be requested to enable features that
+	 * we didn't set in netdev->features and netdev->hw_features at probe
+	 * time, but double check just to be on the safe side.
+	 */
+	unsupported_features |= NETIF_F_RXCSUM;
+
+	features &= ~unsupported_features;
+
+	return features;
+}
+
+static struct mac_device *dpaa_mac_dev_get(struct platform_device *pdev)
+{
+	struct device *dpaa_dev, *dev;
+	struct device_node *mac_node;
+	struct platform_device *of_dev;
+	struct mac_device *mac_dev;
+	struct dpaa_eth_data *eth_data;
+
+	dpaa_dev = &pdev->dev;
+	eth_data = dpaa_dev->platform_data;
+	if (!eth_data)
+		return ERR_PTR(-ENODEV);
+
+	mac_node = eth_data->mac_node;
+
+	of_dev = of_find_device_by_node(mac_node);
+	if (!of_dev) {
+		dev_err(dpaa_dev, "of_find_device_by_node(%s) failed\n",
+			mac_node->full_name);
+		of_node_put(mac_node);
+		return ERR_PTR(-EINVAL);
+	}
+	of_node_put(mac_node);
+
+	dev = &of_dev->dev;
+
+	mac_dev = dev_get_drvdata(dev);
+	if (!mac_dev) {
+		dev_err(dpaa_dev, "dev_get_drvdata(%s) failed\n",
+			dev_name(dev));
+		return ERR_PTR(-EINVAL);
+	}
+
+	return mac_dev;
+}
+
+#ifdef CONFIG_FSL_DPAA_ETH_FRIENDLY_IF_NAME
+static int dpaa_mac_hw_index_get(struct platform_device *pdev)
+{
+	struct device *dpaa_dev;
+	struct dpaa_eth_data *eth_data;
+
+	dpaa_dev = &pdev->dev;
+	eth_data = dpaa_dev->platform_data;
+
+	return eth_data->mac_hw_id;
+}
+
+static int dpaa_mac_fman_index_get(struct platform_device *pdev)
+{
+	struct device *dpaa_dev;
+	struct dpaa_eth_data *eth_data;
+
+	dpaa_dev = &pdev->dev;
+	eth_data = dpaa_dev->platform_data;
+
+	return eth_data->fman_hw_id;
+}
+#endif
+
+static int dpaa_set_mac_address(struct net_device *net_dev, void *addr)
+{
+	const struct dpaa_priv	*priv;
+	int err;
+	struct mac_device *mac_dev;
+
+	priv = netdev_priv(net_dev);
+
+	err = eth_mac_addr(net_dev, addr);
+	if (err < 0) {
+		netif_err(priv, drv, net_dev, "eth_mac_addr() = %d\n", err);
+		return err;
+	}
+
+	mac_dev = priv->mac_dev;
+
+	err = mac_dev->change_addr(mac_dev->fman_mac,
+				   (enet_addr_t *)net_dev->dev_addr);
+	if (err < 0) {
+		netif_err(priv, drv, net_dev, "mac_dev->change_addr() = %d\n",
+			  err);
+		return err;
+	}
+
+	return 0;
+}
+
+static void dpaa_set_rx_mode(struct net_device *net_dev)
+{
+	int err;
+	const struct dpaa_priv	*priv;
+
+	priv = netdev_priv(net_dev);
+
+	if (!!(net_dev->flags & IFF_PROMISC) != priv->mac_dev->promisc) {
+		priv->mac_dev->promisc = !priv->mac_dev->promisc;
+		err = priv->mac_dev->set_promisc(priv->mac_dev->fman_mac,
+						 priv->mac_dev->promisc);
+		if (err < 0)
+			netif_err(priv, drv, net_dev,
+				  "mac_dev->set_promisc() = %d\n",
+				  err);
+	}
+
+	err = priv->mac_dev->set_multi(net_dev, priv->mac_dev);
+	if (err < 0)
+		netif_err(priv, drv, net_dev, "mac_dev->set_multi() = %d\n",
+			  err);
+}
+
+static struct dpaa_bp *dpaa_bpid2pool(int bpid)
+{
+	if (WARN_ON(bpid < 0 || bpid >= BM_MAX_NUM_OF_POOLS))
+		return NULL;
+
+	return dpaa_bp_array[bpid];
+}
+
+/* checks if this bpool is already allocated */
+static bool dpaa_bpid2pool_use(int bpid)
+{
+	if (dpaa_bpid2pool(bpid)) {
+		atomic_inc(&dpaa_bp_array[bpid]->refs);
+		return true;
+	}
+
+	return false;
+}
+
+/* called only once per bpid by dpaa_bp_alloc_pool() */
+static void dpaa_bpid2pool_map(int bpid, struct dpaa_bp *dpaa_bp)
+{
+	dpaa_bp_array[bpid] = dpaa_bp;
+	atomic_set(&dpaa_bp->refs, 1);
+}
+
+static int dpaa_bp_alloc_pool(struct dpaa_bp *dpaa_bp)
+{
+	int err;
+
+	if (dpaa_bp->size == 0 || dpaa_bp->config_count == 0) {
+		pr_err("%s: Buffer pool is not properly initialized! Missing size or initial number of buffers\n",
+		       __func__);
+		return -EINVAL;
+	}
+
+	/* If the pool is already specified, we only create one per bpid */
+	if (dpaa_bp->bpid != FSL_DPAA_BPID_INV &&
+	    dpaa_bpid2pool_use(dpaa_bp->bpid))
+		return 0;
+
+	if (dpaa_bp->bpid == FSL_DPAA_BPID_INV) {
+		dpaa_bp->pool = bman_new_pool();
+		if (!dpaa_bp->pool) {
+			pr_err("%s: bman_new_pool() failed\n",
+			       __func__);
+			return -ENODEV;
+		}
+
+		dpaa_bp->bpid = (u8)bman_get_bpid(dpaa_bp->pool);
+	}
+
+	if (dpaa_bp->seed_cb) {
+		err = dpaa_bp->seed_cb(dpaa_bp);
+		if (err)
+			goto pool_seed_failed;
+	}
+
+	dpaa_bpid2pool_map(dpaa_bp->bpid, dpaa_bp);
+
+	return 0;
+
+pool_seed_failed:
+	pr_err("%s: pool seeding failed\n", __func__);
+	bman_free_pool(dpaa_bp->pool);
+
+	return err;
+}
+
+/* remove and free all the buffers from the given buffer pool */
+static void dpaa_bp_drain(struct dpaa_bp *bp)
+{
+	int ret;
+	u8 num = 8;
+
+	do {
+		struct bm_buffer bmb[8];
+		int i;
+
+		ret = bman_acquire(bp->pool, bmb, num);
+		if (ret < 0) {
+			if (num == 8) {
+				/* we have fewer than 8 buffers left;
+				 * drain them one by one
+				 */
+				num = 1;
+				ret = 1;
+				continue;
+			} else {
+				/* Pool is fully drained */
+				break;
+			}
+		}
+
+		if (bp->free_buf_cb)
+			for (i = 0; i < num; i++)
+				bp->free_buf_cb(bp, &bmb[i]);
+	} while (ret > 0);
+}
+
+static void dpaa_bp_free(struct dpaa_bp *dpaa_bp)
+{
+	struct dpaa_bp *bp = dpaa_bpid2pool(dpaa_bp->bpid);
+
+	/* the mapping between bpid and dpaa_bp is done very late in the
+	 * allocation procedure; if something failed before the mapping, the bp
+	 * was not configured, so there is nothing to tear down here
+	 */
+	if (!bp)
+		return;
+
+	if (!atomic_dec_and_test(&bp->refs))
+		return;
+
+	if (bp->free_buf_cb)
+		dpaa_bp_drain(bp);
+
+	dpaa_bp_array[bp->bpid] = NULL;
+	bman_free_pool(bp->pool);
+}
+
+static void dpaa_bps_free(struct dpaa_priv *priv)
+{
+	int i;
+
+	for (i = 0; i < DPAA_BPS_NUM; i++)
+		dpaa_bp_free(priv->dpaa_bps[i]);
+}
+
+/* Use multiple WQs for FQ assignment:
+ *	- Tx Confirmation queues go to WQ1.
+ *	- Rx Error and Tx Error queues go to WQ2 (giving them a better chance
+ *	  to be scheduled, in case there are many more FQs in WQ3).
+ *	- Rx Default and Tx queues go to WQ3 (no differentiation between
+ *	  Rx and Tx traffic).
+ * This ensures that Tx-confirmed buffers are promptly released. In particular,
+ * it avoids congestion on the Tx Confirm FQs, which can pile up PFDRs if they
+ * are greatly outnumbered by other FQs in the system, while
+ * dequeue scheduling is round-robin.
+ */
+static inline void dpaa_assign_wq(struct dpaa_fq *fq)
+{
+	switch (fq->fq_type) {
+	case FQ_TYPE_TX_CONFIRM:
+	case FQ_TYPE_TX_CONF_MQ:
+		fq->wq = 1;
+		break;
+	case FQ_TYPE_RX_ERROR:
+	case FQ_TYPE_TX_ERROR:
+		fq->wq = 2;
+		break;
+	case FQ_TYPE_RX_DEFAULT:
+	case FQ_TYPE_TX:
+		fq->wq = 3;
+		break;
+	default:
+		WARN(1, "Invalid FQ type %d for FQID %d!\n",
+		     fq->fq_type, fq->fqid);
+	}
+}
+
+static struct dpaa_fq *dpaa_fq_alloc(struct device *dev,
+				     u32 start, u32 count,
+				     struct list_head *list,
+				     enum dpaa_fq_type fq_type)
+{
+	int i;
+	struct dpaa_fq *dpaa_fq;
+
+	dpaa_fq = devm_kzalloc(dev, sizeof(*dpaa_fq) * count,
+			       GFP_KERNEL);
+	if (!dpaa_fq)
+		return NULL;
+
+	for (i = 0; i < count; i++) {
+		dpaa_fq[i].fq_type = fq_type;
+		dpaa_fq[i].fqid = start ? start + i : 0;
+		list_add_tail(&dpaa_fq[i].list, list);
+	}
+
+	for (i = 0; i < count; i++)
+		dpaa_assign_wq(dpaa_fq + i);
+
+	return dpaa_fq;
+}
+
+static int dpaa_alloc_all_fqs(struct device *dev, struct list_head *list,
+			      struct fm_port_fqs *port_fqs)
+{
+	struct dpaa_fq *dpaa_fq;
+
+	dpaa_fq = dpaa_fq_alloc(dev, 0, 1, list, FQ_TYPE_RX_ERROR);
+	if (!dpaa_fq)
+		goto fq_alloc_failed;
+
+	port_fqs->rx_errq = &dpaa_fq[0];
+
+	dpaa_fq = dpaa_fq_alloc(dev, 0, 1, list, FQ_TYPE_RX_DEFAULT);
+	if (!dpaa_fq)
+		goto fq_alloc_failed;
+
+	port_fqs->rx_defq = &dpaa_fq[0];
+
+	if (!dpaa_fq_alloc(dev, 0, DPAA_ETH_TXQ_NUM, list, FQ_TYPE_TX_CONF_MQ))
+		goto fq_alloc_failed;
+
+	dpaa_fq = dpaa_fq_alloc(dev, 0, 1, list, FQ_TYPE_TX_ERROR);
+	if (!dpaa_fq)
+		goto fq_alloc_failed;
+
+	port_fqs->tx_errq = &dpaa_fq[0];
+
+	dpaa_fq = dpaa_fq_alloc(dev, 0, 1, list, FQ_TYPE_TX_CONFIRM);
+	if (!dpaa_fq)
+		goto fq_alloc_failed;
+
+	port_fqs->tx_defq = &dpaa_fq[0];
+
+	if (!dpaa_fq_alloc(dev, 0, DPAA_ETH_TXQ_NUM, list, FQ_TYPE_TX))
+		goto fq_alloc_failed;
+
+	return 0;
+
+fq_alloc_failed:
+	dev_err(dev, "dpaa_fq_alloc() failed\n");
+	return -ENOMEM;
+}
+
+static u32 rx_pool_channel;
+static DEFINE_SPINLOCK(rx_pool_channel_init);
+
+static int dpaa_get_channel(void)
+{
+	spin_lock(&rx_pool_channel_init);
+	if (!rx_pool_channel) {
+		u32 pool;
+		int ret = qman_alloc_pool(&pool);
+
+		if (!ret)
+			rx_pool_channel = pool;
+	}
+	spin_unlock(&rx_pool_channel_init);
+	if (!rx_pool_channel)
+		return -ENOMEM;
+	return rx_pool_channel;
+}
+
+static void dpaa_release_channel(void)
+{
+	qman_release_pool(rx_pool_channel);
+}
+
+static void dpaa_eth_add_channel(u16 channel)
+{
+	const cpumask_t *cpus = qman_affine_cpus();
+	u32 pool = QM_SDQCR_CHANNELS_POOL_CONV(channel);
+	int cpu;
+	struct qman_portal *portal;
+
+	for_each_cpu(cpu, cpus) {
+		portal = qman_get_affine_portal(cpu);
+		qman_p_static_dequeue_add(portal, pool);
+	}
+}
+
+/* Congestion group state change notification callback.
+ * Stops the device's egress queues while they are congested and
+ * wakes them upon exiting congested state.
+ * Also updates some CGR-related stats.
+ */
+static void dpaa_eth_cgscn(struct qman_portal *qm, struct qman_cgr *cgr,
+			   int congested)
+{
+	struct dpaa_priv *priv = (struct dpaa_priv *)container_of(cgr,
+		struct dpaa_priv, cgr_data.cgr);
+
+	if (congested)
+		netif_tx_stop_all_queues(priv->net_dev);
+	else
+		netif_tx_wake_all_queues(priv->net_dev);
+}
+
+static int dpaa_eth_cgr_init(struct dpaa_priv *priv)
+{
+	struct qm_mcc_initcgr initcgr;
+	u32 cs_th;
+	int err;
+
+	err = qman_alloc_cgrid(&priv->cgr_data.cgr.cgrid);
+	if (err < 0) {
+		if (netif_msg_drv(priv))
+			pr_err("%s: Error %d allocating CGR ID\n",
+			       __func__, err);
+		goto out_error;
+	}
+	priv->cgr_data.cgr.cb = dpaa_eth_cgscn;
+
+	/* Enable Congestion State Change Notifications and CS taildrop */
+	initcgr.we_mask = QM_CGR_WE_CSCN_EN | QM_CGR_WE_CS_THRES;
+	initcgr.cgr.cscn_en = QM_CGR_EN;
+
+	/* Set different thresholds based on the MAC speed.
+	 * This may become suboptimal if the MAC is reconfigured at a speed
+	 * lower than its max, e.g. if a dTSEC later negotiates a 100Mbps link.
+	 * In such cases, we ought to reconfigure the threshold, too.
+	 */
+	if (priv->mac_dev->if_support & SUPPORTED_10000baseT_Full)
+		cs_th = DPAA_CS_THRESHOLD_10G;
+	else
+		cs_th = DPAA_CS_THRESHOLD_1G;
+	qm_cgr_cs_thres_set64(&initcgr.cgr.cs_thres, cs_th, 1);
+
+	initcgr.we_mask |= QM_CGR_WE_CSTD_EN;
+	initcgr.cgr.cstd_en = QM_CGR_EN;
+
+	err = qman_create_cgr(&priv->cgr_data.cgr, QMAN_CGR_FLAG_USE_INIT,
+			      &initcgr);
+	if (err < 0) {
+		if (netif_msg_drv(priv))
+			pr_err("%s: Error %d creating CGR with ID %d\n",
+			       __func__, err, priv->cgr_data.cgr.cgrid);
+		qman_release_cgrid(priv->cgr_data.cgr.cgrid);
+		goto out_error;
+	}
+	if (netif_msg_drv(priv))
+		pr_debug("Created CGR %d for netdev with hwaddr %pM on QMan channel %d\n",
+			 priv->cgr_data.cgr.cgrid, priv->mac_dev->addr,
+			 priv->cgr_data.cgr.chan);
+
+out_error:
+	return err;
+}
+
+static inline void dpaa_setup_ingress(const struct dpaa_priv *priv,
+				      struct dpaa_fq *fq,
+				      const struct qman_fq *template)
+{
+	fq->fq_base = *template;
+	fq->net_dev = priv->net_dev;
+
+	fq->flags = QMAN_FQ_FLAG_NO_ENQUEUE;
+	fq->channel = priv->channel;
+}
+
+static inline void dpaa_setup_egress(const struct dpaa_priv *priv,
+				     struct dpaa_fq *fq,
+				     struct fman_port *port,
+				     const struct qman_fq *template)
+{
+	fq->fq_base = *template;
+	fq->net_dev = priv->net_dev;
+
+	if (port) {
+		fq->flags = QMAN_FQ_FLAG_TO_DCPORTAL;
+		fq->channel = (u16)fman_port_get_qman_channel_id(port);
+	} else {
+		fq->flags = QMAN_FQ_FLAG_NO_MODIFY;
+	}
+}
+
+static void dpaa_fq_setup(struct dpaa_priv *priv,
+			  const struct dpaa_fq_cbs *fq_cbs,
+			  struct fman_port *tx_port)
+{
+	struct dpaa_fq *fq;
+	u16 portals[NR_CPUS];
+	int cpu, num_portals = 0;
+	const cpumask_t *affine_cpus = qman_affine_cpus();
+	int egress_cnt = 0, conf_cnt = 0;
+
+	for_each_cpu(cpu, affine_cpus)
+		portals[num_portals++] = qman_affine_channel(cpu);
+	if (num_portals == 0)
+		dev_err(priv->net_dev->dev.parent,
+			"No Qman software (affine) channels found");
+
+	/* Initialize each FQ in the list */
+	list_for_each_entry(fq, &priv->dpaa_fq_list, list) {
+		switch (fq->fq_type) {
+		case FQ_TYPE_RX_DEFAULT:
+			dpaa_setup_ingress(priv, fq, &fq_cbs->rx_defq);
+			break;
+		case FQ_TYPE_RX_ERROR:
+			dpaa_setup_ingress(priv, fq, &fq_cbs->rx_errq);
+			break;
+		case FQ_TYPE_TX:
+			dpaa_setup_egress(priv, fq, tx_port,
+					  &fq_cbs->egress_ern);
+			/* If we have more Tx queues than the number of cores,
+			 * just ignore the extra ones.
+			 */
+			if (egress_cnt < DPAA_ETH_TXQ_NUM)
+				priv->egress_fqs[egress_cnt++] = &fq->fq_base;
+			break;
+		case FQ_TYPE_TX_CONF_MQ:
+			priv->conf_fqs[conf_cnt++] = &fq->fq_base;
+			/* fall through */
+		case FQ_TYPE_TX_CONFIRM:
+			dpaa_setup_ingress(priv, fq, &fq_cbs->tx_defq);
+			break;
+		case FQ_TYPE_TX_ERROR:
+			dpaa_setup_ingress(priv, fq, &fq_cbs->tx_errq);
+			break;
+		default:
+			dev_warn(priv->net_dev->dev.parent,
+				 "Unknown FQ type detected!\n");
+			break;
+		}
+	}
+
+	/* Make sure all CPUs receive a corresponding Tx queue. */
+	while (egress_cnt < DPAA_ETH_TXQ_NUM) {
+		list_for_each_entry(fq, &priv->dpaa_fq_list, list) {
+			if (fq->fq_type != FQ_TYPE_TX)
+				continue;
+			priv->egress_fqs[egress_cnt++] = &fq->fq_base;
+			if (egress_cnt == DPAA_ETH_TXQ_NUM)
+				break;
+		}
+	}
+}
+
+static inline int dpaa_tx_fq_to_id(const struct dpaa_priv *priv,
+				   struct qman_fq *tx_fq)
+{
+	int i;
+
+	for (i = 0; i < DPAA_ETH_TXQ_NUM; i++)
+		if (priv->egress_fqs[i] == tx_fq)
+			return i;
+
+	return -EINVAL;
+}
+
+static int dpaa_fq_init(struct dpaa_fq *dpaa_fq, bool td_enable)
+{
+	int err;
+	const struct dpaa_priv	*priv;
+	struct device *dev;
+	struct qman_fq *fq;
+	struct qm_mcc_initfq initfq;
+	struct qman_fq *confq = NULL;
+	int queue_id;
+
+	priv = netdev_priv(dpaa_fq->net_dev);
+	dev = dpaa_fq->net_dev->dev.parent;
+
+	if (dpaa_fq->fqid == 0)
+		dpaa_fq->flags |= QMAN_FQ_FLAG_DYNAMIC_FQID;
+
+	dpaa_fq->init = !(dpaa_fq->flags & QMAN_FQ_FLAG_NO_MODIFY);
+
+	err = qman_create_fq(dpaa_fq->fqid, dpaa_fq->flags, &dpaa_fq->fq_base);
+	if (err) {
+		dev_err(dev, "qman_create_fq() failed\n");
+		return err;
+	}
+	fq = &dpaa_fq->fq_base;
+
+	if (dpaa_fq->init) {
+		memset(&initfq, 0, sizeof(initfq));
+
+		initfq.we_mask = QM_INITFQ_WE_FQCTRL;
+		/* Note: we may get to keep an empty FQ in cache */
+		initfq.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
+
+		/* Try to reduce the number of portal interrupts for
+		 * Tx Confirmation FQs.
+		 */
+		if (dpaa_fq->fq_type == FQ_TYPE_TX_CONFIRM)
+			initfq.fqd.fq_ctrl |= QM_FQCTRL_HOLDACTIVE;
+
+		/* FQ placement */
+		initfq.we_mask |= QM_INITFQ_WE_DESTWQ;
+
+		qm_fqd_set_destwq(&initfq.fqd, dpaa_fq->channel, dpaa_fq->wq);
+
+		/* Put all egress queues in a congestion group of their own.
+		 * Sensu stricto, the Tx confirmation queues are Rx FQs,
+		 * rather than Tx - but they nonetheless account for the
+		 * memory footprint on behalf of egress traffic. We therefore
+		 * place them in the netdev's CGR, along with the Tx FQs.
+		 */
+		if (dpaa_fq->fq_type == FQ_TYPE_TX ||
+		    dpaa_fq->fq_type == FQ_TYPE_TX_CONFIRM ||
+		    dpaa_fq->fq_type == FQ_TYPE_TX_CONF_MQ) {
+			initfq.we_mask |= QM_INITFQ_WE_CGID;
+			initfq.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+			initfq.fqd.cgid = (u8)priv->cgr_data.cgr.cgrid;
+			/* Set a fixed overhead accounting, in an attempt to
+			 * reduce the impact of fixed-size skb shells and the
+			 * driver's needed headroom on system memory. This is
+			 * especially the case when the egress traffic is
+			 * composed of small datagrams.
+			 * Unfortunately, QMan's OAL value is capped to an
+			 * insufficient value, but even that is better than
+			 * no overhead accounting at all.
+			 */
+			initfq.we_mask |= QM_INITFQ_WE_OAC;
+			qm_fqd_set_oac(&initfq.fqd, QM_OAC_CG);
+			qm_fqd_set_oal(&initfq.fqd,
+				       min(sizeof(struct sk_buff) +
+				       priv->tx_headroom,
+				       (size_t)FSL_QMAN_MAX_OAL));
+		}
+
+		if (td_enable) {
+			initfq.we_mask |= QM_INITFQ_WE_TDTHRESH;
+			qm_fqd_set_taildrop(&initfq.fqd, DPAA_FQ_TD, 1);
+			initfq.fqd.fq_ctrl = QM_FQCTRL_TDE;
+		}
+
+		if (dpaa_fq->fq_type == FQ_TYPE_TX) {
+			queue_id = dpaa_tx_fq_to_id(priv, &dpaa_fq->fq_base);
+			if (queue_id >= 0)
+				confq = priv->conf_fqs[queue_id];
+			if (confq) {
+				initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			/* ContextA: OVOM=1(use contextA2 bits instead of ICAD)
+			 *	     A2V=1 (contextA A2 field is valid)
+			 *	     A0V=1 (contextA A0 field is valid)
+			 *	     B0V=1 (contextB field is valid)
+			 * ContextA A2: EBD=1 (deallocate buffers inside FMan)
+			 * ContextB B0(ASPID): 0 (absolute Virtual Storage ID)
+			 */
+				initfq.fqd.context_a.hi = 0x1e000000;
+				initfq.fqd.context_a.lo = 0x80000000;
+			}
+		}
+
+		/* Put all the ingress queues in our "ingress CGR". */
+		if (priv->use_ingress_cgr &&
+		    (dpaa_fq->fq_type == FQ_TYPE_RX_DEFAULT ||
+		     dpaa_fq->fq_type == FQ_TYPE_RX_ERROR)) {
+			initfq.we_mask |= QM_INITFQ_WE_CGID;
+			initfq.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+			initfq.fqd.cgid = (u8)priv->ingress_cgr.cgrid;
+			/* Set a fixed overhead accounting, just like for the
+			 * egress CGR.
+			 */
+			initfq.we_mask |= QM_INITFQ_WE_OAC;
+			qm_fqd_set_oac(&initfq.fqd, QM_OAC_CG);
+			qm_fqd_set_oal(&initfq.fqd,
+				       min(sizeof(struct sk_buff) +
+				       priv->tx_headroom,
+				       (size_t)FSL_QMAN_MAX_OAL));
+		}
+
+		/* Initialization common to all ingress queues */
+		if (dpaa_fq->flags & QMAN_FQ_FLAG_NO_ENQUEUE) {
+			initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;
+			initfq.fqd.fq_ctrl |=
+				QM_FQCTRL_HOLDACTIVE;
+			initfq.fqd.context_a.stashing.exclusive =
+				QM_STASHING_EXCL_DATA | QM_STASHING_EXCL_CTX |
+				QM_STASHING_EXCL_ANNOTATION;
+			qm_fqd_set_stashing(&initfq.fqd, 1, 2,
+					    DIV_ROUND_UP(sizeof(struct qman_fq),
+							 64));
+		}
+
+		err = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &initfq);
+		if (err < 0) {
+			dev_err(dev, "qman_init_fq(%u) = %d\n",
+				qman_fq_fqid(fq), err);
+			qman_destroy_fq(fq);
+			return err;
+		}
+	}
+
+	dpaa_fq->fqid = qman_fq_fqid(fq);
+
+	return 0;
+}
+
+static int dpaa_fq_free_entry(struct device *dev, struct qman_fq *fq)
+{
+	int err, error;
+	struct dpaa_fq *dpaa_fq;
+	const struct dpaa_priv	*priv;
+
+	err = 0;
+
+	dpaa_fq = container_of(fq, struct dpaa_fq, fq_base);
+	priv = netdev_priv(dpaa_fq->net_dev);
+
+	if (dpaa_fq->init) {
+		err = qman_retire_fq(fq, NULL);
+		if (err < 0 && netif_msg_drv(priv))
+			dev_err(dev, "qman_retire_fq(%u) = %d\n",
+				qman_fq_fqid(fq), err);
+
+		error = qman_oos_fq(fq);
+		if (error < 0 && netif_msg_drv(priv)) {
+			dev_err(dev, "qman_oos_fq(%u) = %d\n",
+				qman_fq_fqid(fq), error);
+			if (err >= 0)
+				err = error;
+		}
+	}
+
+	qman_destroy_fq(fq);
+	list_del(&dpaa_fq->list);
+
+	return err;
+}
+
+static int dpaa_fq_free(struct device *dev, struct list_head *list)
+{
+	int err, error;
+	struct dpaa_fq *dpaa_fq, *tmp;
+
+	err = 0;
+	list_for_each_entry_safe(dpaa_fq, tmp, list, list) {
+		error = dpaa_fq_free_entry(dev, (struct qman_fq *)dpaa_fq);
+		if (error < 0 && err >= 0)
+			err = error;
+	}
+
+	return err;
+}
+
+static void dpaa_eth_init_tx_port(struct fman_port *port, struct dpaa_fq *errq,
+				  struct dpaa_fq *defq,
+				  struct dpaa_buffer_layout *buf_layout)
+{
+	struct fman_port_params params;
+	struct fman_buffer_prefix_content buf_prefix_content;
+	int err;
+
+	memset(&params, 0, sizeof(params));
+	memset(&buf_prefix_content, 0, sizeof(buf_prefix_content));
+
+	buf_prefix_content.priv_data_size = buf_layout->priv_data_size;
+	buf_prefix_content.pass_prs_result = true;
+	buf_prefix_content.pass_hash_result = true;
+	buf_prefix_content.pass_time_stamp = false;
+	buf_prefix_content.data_align = DPAA_FD_DATA_ALIGNMENT;
+
+	params.specific_params.non_rx_params.err_fqid = errq->fqid;
+	params.specific_params.non_rx_params.dflt_fqid = defq->fqid;
+
+	err = fman_port_config(port, &params);
+	if (err)
+		pr_err("%s: fman_port_config failed\n", __func__);
+
+	err = fman_port_cfg_buf_prefix_content(port, &buf_prefix_content);
+	if (err)
+		pr_err("%s: fman_port_cfg_buf_prefix_content failed\n",
+		       __func__);
+
+	err = fman_port_init(port);
+	if (err)
+		pr_err("%s: fm_port_init failed\n", __func__);
+}
+
+static void dpaa_eth_init_rx_port(struct fman_port *port, struct dpaa_bp **bps,
+				  size_t count, struct dpaa_fq *errq,
+				  struct dpaa_fq *defq,
+				  struct dpaa_buffer_layout *buf_layout)
+{
+	struct fman_port_params params;
+	struct fman_buffer_prefix_content buf_prefix_content;
+	struct fman_port_rx_params *rx_p;
+	int i, err;
+
+	memset(&params, 0, sizeof(params));
+	memset(&buf_prefix_content, 0, sizeof(buf_prefix_content));
+
+	buf_prefix_content.priv_data_size = buf_layout->priv_data_size;
+	buf_prefix_content.pass_prs_result = true;
+	buf_prefix_content.pass_hash_result = true;
+	buf_prefix_content.pass_time_stamp = false;
+	buf_prefix_content.data_align = DPAA_FD_DATA_ALIGNMENT;
+
+	rx_p = &params.specific_params.rx_params;
+	rx_p->err_fqid = errq->fqid;
+	rx_p->dflt_fqid = defq->fqid;
+
+	count = min(ARRAY_SIZE(rx_p->ext_buf_pools.ext_buf_pool), count);
+	rx_p->ext_buf_pools.num_of_pools_used = (u8)count;
+	for (i = 0; i < count; i++) {
+		rx_p->ext_buf_pools.ext_buf_pool[i].id = bps[i]->bpid;
+		rx_p->ext_buf_pools.ext_buf_pool[i].size = (u16)bps[i]->size;
+	}
+
+	err = fman_port_config(port, &params);
+	if (err)
+		pr_err("%s: fman_port_config failed\n", __func__);
+
+	err = fman_port_cfg_buf_prefix_content(port, &buf_prefix_content);
+	if (err)
+		pr_err("%s: fman_port_cfg_buf_prefix_content failed\n",
+		       __func__);
+
+	err = fman_port_init(port);
+	if (err)
+		pr_err("%s: fm_port_init failed\n", __func__);
+}
+
+static void dpaa_eth_init_ports(struct mac_device *mac_dev,
+				struct dpaa_bp **bps, size_t count,
+				struct fm_port_fqs *port_fqs,
+				struct dpaa_buffer_layout *buf_layout,
+				struct device *dev)
+{
+	struct fman_port *rxport = mac_dev->port[RX];
+	struct fman_port *txport = mac_dev->port[TX];
+
+	dpaa_eth_init_tx_port(txport, port_fqs->tx_errq,
+			      port_fqs->tx_defq, &buf_layout[TX]);
+	dpaa_eth_init_rx_port(rxport, bps, count, port_fqs->rx_errq,
+			      port_fqs->rx_defq, &buf_layout[RX]);
+}
+
+static int dpaa_bman_release(const struct dpaa_bp *dpaa_bp,
+			     struct bm_buffer *bmb, int cnt)
+{
+	int err;
+
+	err = bman_release(dpaa_bp->pool, bmb, cnt);
+	/* Should never occur; handle it anyway to avoid leaking the buffers */
+	if (unlikely(WARN_ON(err)) && dpaa_bp->free_buf_cb)
+		while (cnt-- > 0)
+			dpaa_bp->free_buf_cb(dpaa_bp, &bmb[cnt]);
+
+	return cnt;
+}
+
+static void dpaa_release_sgt_members(struct qm_sg_entry *sgt)
+{
+	struct dpaa_bp *dpaa_bp;
+	struct bm_buffer bmb[DPAA_BUFF_RELEASE_MAX];
+	int i = 0, j;
+
+	memset(bmb, 0, sizeof(bmb));
+
+	do {
+		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
+		if (!dpaa_bp)
+			return;
+
+		j = 0;
+		do {
+			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
+
+			bm_buffer_set64(&bmb[j], qm_sg_entry_get64(&sgt[i]));
+
+			j++; i++;
+		} while (j < ARRAY_SIZE(bmb) &&
+				!qm_sg_entry_is_final(&sgt[i - 1]) &&
+				sgt[i - 1].bpid == sgt[i].bpid);
+
+		dpaa_bman_release(dpaa_bp, bmb, j);
+	} while (!qm_sg_entry_is_final(&sgt[i - 1]));
+}
+
+static void dpaa_fd_release(const struct net_device *net_dev,
+			    const struct qm_fd *fd)
+{
+	struct qm_sg_entry *sgt;
+	struct dpaa_bp *dpaa_bp;
+	struct bm_buffer bmb;
+	dma_addr_t addr;
+	void *vaddr;
+
+	bmb.data = 0;
+	bm_buffer_set64(&bmb, qm_fd_addr(fd));
+
+	dpaa_bp = dpaa_bpid2pool(fd->bpid);
+	if (!dpaa_bp)
+		return;
+
+	if (qm_fd_get_format(fd) == qm_fd_sg) {
+		vaddr = phys_to_virt(qm_fd_addr(fd));
+		sgt = vaddr + qm_fd_get_offset(fd);
+
+		dma_unmap_single(dpaa_bp->dev, qm_fd_addr(fd), dpaa_bp->size,
+				 DMA_FROM_DEVICE);
+
+		dpaa_release_sgt_members(sgt);
+
+		addr = dma_map_single(dpaa_bp->dev, vaddr, dpaa_bp->size,
+				      DMA_FROM_DEVICE);
+		if (dma_mapping_error(dpaa_bp->dev, addr)) {
+			dev_err(dpaa_bp->dev, "DMA mapping failed");
+			return;
+		}
+		bm_buffer_set64(&bmb, addr);
+	}
+
+	dpaa_bman_release(dpaa_bp, &bmb, 1);
+}
+
+/* Turn on HW checksum computation for this outgoing frame.
+ * If the current protocol is not something we support in this regard
+ * (or if the stack has already computed the SW checksum), we do nothing.
+ *
+ * Returns 0 if all goes well (or HW csum doesn't apply), and a negative value
+ * otherwise.
+ *
+ * Note that this function may modify the fd->cmd field and the skb data buffer
+ * (the Parse Results area).
+ */
+static int dpaa_enable_tx_csum(struct dpaa_priv *priv,
+			       struct sk_buff *skb,
+			       struct qm_fd *fd,
+			       char *parse_results)
+{
+	struct fman_prs_result *parse_result;
+	struct iphdr *iph;
+	struct ipv6hdr *ipv6h = NULL;
+	u8 l4_proto;
+	u16 ethertype = ntohs(skb->protocol);
+	int retval = 0;
+
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		return 0;
+
+	/* Note: L3 csum seems to be already computed in sw, but we can't choose
+	 * L4 alone from the FM configuration anyway.
+	 */
+
+	/* Fill in some fields of the Parse Results array, so the FMan
+	 * can find them as if they came from the FMan Parser.
+	 */
+	parse_result = (struct fman_prs_result *)parse_results;
+
+	/* If we're dealing with VLAN, get the real Ethernet type */
+	if (ethertype == ETH_P_8021Q) {
+		/* We can't always assume the MAC header is set correctly
+		 * by the stack, so reset to beginning of skb->data
+		 */
+		skb_reset_mac_header(skb);
+		ethertype = ntohs(vlan_eth_hdr(skb)->h_vlan_encapsulated_proto);
+	}
+
+	/* Fill in the relevant L3 parse result fields
+	 * and read the L4 protocol type
+	 */
+	switch (ethertype) {
+	case ETH_P_IP:
+		parse_result->l3r = cpu_to_be16(FM_L3_PARSE_RESULT_IPV4);
+		iph = ip_hdr(skb);
+		WARN_ON(!iph);
+		l4_proto = iph->protocol;
+		break;
+	case ETH_P_IPV6:
+		parse_result->l3r = cpu_to_be16(FM_L3_PARSE_RESULT_IPV6);
+		ipv6h = ipv6_hdr(skb);
+		WARN_ON(!ipv6h);
+		l4_proto = ipv6h->nexthdr;
+		break;
+	default:
+		/* We shouldn't even be here */
+		if (net_ratelimit())
+			netif_alert(priv, tx_err, priv->net_dev,
+				    "Can't compute HW csum for L3 proto 0x%x\n",
+				    ntohs(skb->protocol));
+		retval = -EIO;
+		goto return_error;
+	}
+
+	/* Fill in the relevant L4 parse result fields */
+	switch (l4_proto) {
+	case IPPROTO_UDP:
+		parse_result->l4r = FM_L4_PARSE_RESULT_UDP;
+		break;
+	case IPPROTO_TCP:
+		parse_result->l4r = FM_L4_PARSE_RESULT_TCP;
+		break;
+	default:
+		if (net_ratelimit())
+			netif_alert(priv, tx_err, priv->net_dev,
+				    "Can't compute HW csum for L4 proto 0x%x\n",
+				    l4_proto);
+		retval = -EIO;
+		goto return_error;
+	}
+
+	/* At index 0 is IPOffset_1 as defined in the Parse Results */
+	parse_result->ip_off[0] = (u8)skb_network_offset(skb);
+	parse_result->l4_off = (u8)skb_transport_offset(skb);
+
+	/* Enable L3 (and L4, if TCP or UDP) HW checksum. */
+	fd->cmd |= FM_FD_CMD_RPD | FM_FD_CMD_DTC;
+
+	/* On P1023 and similar platforms fd->cmd interpretation could
+	 * be disabled by setting CONTEXT_A bit ICMD; currently this bit
+	 * is not set so we do not need to check; in the future, if/when
+	 * using context_a we need to check this bit
+	 */
+
+return_error:
+	return retval;
+}
+
+static int dpaa_bp_add_8_bufs(const struct dpaa_bp *dpaa_bp)
+{
+	struct bm_buffer bmb[8];
+	void *new_buf;
+	dma_addr_t addr;
+	u8 i;
+	struct device *dev = dpaa_bp->dev;
+
+	for (i = 0; i < 8; i++) {
+		new_buf = netdev_alloc_frag(dpaa_bp->raw_size);
+		if (unlikely(!new_buf)) {
+			dev_err(dev, "netdev_alloc_frag() failed, size %zu\n",
+				dpaa_bp->raw_size);
+			goto release_previous_buffs;
+		}
+		new_buf = PTR_ALIGN(new_buf, SMP_CACHE_BYTES);
+
+		addr = dma_map_single(dev, new_buf,
+				      dpaa_bp->size, DMA_FROM_DEVICE);
+		if (unlikely(dma_mapping_error(dev, addr))) {
+			dev_err(dpaa_bp->dev, "DMA map failed");
+			goto release_previous_buffs;
+		}
+
+		bmb[i].data = 0;
+		bm_buffer_set64(&bmb[i], addr);
+	}
+
+release_bufs:
+	return dpaa_bman_release(dpaa_bp, bmb, i);
+
+release_previous_buffs:
+	WARN_ONCE(1, "dpaa_eth: failed to add buffers on Rx\n");
+
+	bm_buffer_set64(&bmb[i], 0);
+	/* Avoid releasing a completely null buffer; bman_release() requires
+	 * at least one buffer.
+	 */
+	if (likely(i))
+		goto release_bufs;
+
+	return 0;
+}
+
+static int dpaa_bp_seed(struct dpaa_bp *dpaa_bp)
+{
+	int i;
+
+	/* Give each CPU an allotment of "config_count" buffers */
+	for_each_possible_cpu(i) {
+		int *count_ptr = per_cpu_ptr(dpaa_bp->percpu_count, i);
+		int j;
+
+		/* Although we access another CPU's counters here
+		 * we do it at boot time so it is safe
+		 */
+		for (j = 0; j < dpaa_bp->config_count; j += 8)
+			*count_ptr += dpaa_bp_add_8_bufs(dpaa_bp);
+	}
+	return 0;
+}
+
+/* Add buffers/pages for Rx processing whenever the bpool count falls below
+ * REFILL_THRESHOLD.
+ */
+static int dpaa_eth_refill_bpool(struct dpaa_bp *dpaa_bp, int *countptr)
+{
+	int count = *countptr;
+	int new_bufs;
+
+	if (unlikely(count < FSL_DPAA_ETH_REFILL_THRESHOLD)) {
+		do {
+			new_bufs = dpaa_bp_add_8_bufs(dpaa_bp);
+			if (unlikely(!new_bufs)) {
+				/* Avoid looping forever if we've temporarily
+				 * run out of memory. We'll try again at the
+				 * next NAPI cycle.
+				 */
+				break;
+			}
+			count += new_bufs;
+		} while (count < FSL_DPAA_ETH_MAX_BUF_COUNT);
+
+		*countptr = count;
+		if (unlikely(count < FSL_DPAA_ETH_MAX_BUF_COUNT))
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int dpaa_eth_refill_bpools(struct dpaa_priv *priv)
+{
+	struct dpaa_bp *dpaa_bp;
+	int *countptr;
+	int res, i;
+
+	for (i = 0; i < DPAA_BPS_NUM; i++) {
+		dpaa_bp = priv->dpaa_bps[i];
+		if (!dpaa_bp)
+			return -EINVAL;
+		countptr = this_cpu_ptr(dpaa_bp->percpu_count);
+		res = dpaa_eth_refill_bpool(dpaa_bp, countptr);
+		if (res)
+			return res;
+	}
+	return 0;
+}
+
+/* Cleanup function for outgoing frame descriptors that were built on Tx path,
+ * either contiguous frames or scatter/gather ones.
+ * Skb freeing is not handled here.
+ *
+ * This function may be called on error paths in the Tx function, so guard
+ * against cases when not all fd relevant fields were filled in.
+ *
+ * Return the skb backpointer, since for S/G frames the buffer containing it
+ * gets freed here.
+ */
+static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
+					  const struct qm_fd *fd)
+{
+	const struct qm_sg_entry *sgt;
+	int i;
+	struct device *dev = priv->net_dev->dev.parent;
+	dma_addr_t addr = qm_fd_addr(fd);
+	struct sk_buff **skbh = (struct sk_buff **)phys_to_virt(addr);
+	struct sk_buff *skb = *skbh;
+	const enum dma_data_direction dma_dir = DMA_TO_DEVICE;
+	int nr_frags;
+
+	if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) {
+		nr_frags = skb_shinfo(skb)->nr_frags;
+		dma_unmap_single(dev, addr, qm_fd_get_offset(fd) +
+				 sizeof(struct qm_sg_entry) * (1 + nr_frags),
+				 dma_dir);
+
+		/* The sgt buffer has been allocated with netdev_alloc_frag(),
+		 * it's from lowmem.
+		 */
+		sgt = phys_to_virt(addr + qm_fd_get_offset(fd));
+
+		/* sgt[0] is from lowmem, was dma_map_single()-ed */
+		dma_unmap_single(dev, qm_sg_addr(&sgt[0]),
+				 qm_sg_entry_get_len(&sgt[0]), dma_dir);
+
+		/* remaining pages were mapped with skb_frag_dma_map() */
+		for (i = 1; i < nr_frags; i++) {
+			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
+
+			dma_unmap_page(dev, qm_sg_addr(&sgt[i]),
+				       qm_sg_entry_get_len(&sgt[i]), dma_dir);
+		}
+
+		/* Free the page frag that we allocated on Tx */
+		skb_free_frag(phys_to_virt(addr));
+	} else {
+		dma_unmap_single(dev, addr,
+				 skb_tail_pointer(skb) - (u8 *)skbh, dma_dir);
+	}
+
+	return skb;
+}
+
+/* Build a linear skb around the received buffer.
+ * We are guaranteed there is enough room at the end of the data buffer to
+ * accommodate the shared info area of the skb.
+ */
+static struct sk_buff *contig_fd_to_skb(const struct dpaa_priv *priv,
+					const struct qm_fd *fd)
+{
+	struct sk_buff *skb;
+	ssize_t fd_off = qm_fd_get_offset(fd);
+	struct dpaa_bp *dpaa_bp;
+	dma_addr_t addr = qm_fd_addr(fd);
+	void *vaddr;
+
+	vaddr = phys_to_virt(addr);
+	WARN_ON(!IS_ALIGNED((unsigned long)vaddr, SMP_CACHE_BYTES));
+
+	dpaa_bp = dpaa_bpid2pool(fd->bpid);
+	if (!dpaa_bp)
+		goto free_buffer;
+
+	skb = build_skb(vaddr, dpaa_bp->size +
+			SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
+	if (unlikely(!skb)) {
+		WARN_ONCE(1, "Build skb failure on Rx\n");
+		goto free_buffer;
+	}
+	WARN_ON(fd_off != priv->rx_headroom);
+	skb_reserve(skb, fd_off);
+	skb_put(skb, qm_fd_get_length(fd));
+
+	skb->ip_summed = CHECKSUM_NONE;
+
+	return skb;
+
+free_buffer:
+	skb_free_frag(vaddr);
+	return NULL;
+}
+
+/* Build an skb with the data of the first S/G entry in the linear portion and
+ * the rest of the frame as skb fragments.
+ *
+ * The page fragment holding the S/G Table is recycled here.
+ */
+static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
+				    const struct qm_fd *fd)
+{
+	const struct qm_sg_entry *sgt;
+	dma_addr_t addr = qm_fd_addr(fd);
+	ssize_t fd_off = qm_fd_get_offset(fd);
+	dma_addr_t sg_addr;
+	void *vaddr, *sg_vaddr;
+	struct dpaa_bp *dpaa_bp;
+	struct page *page, *head_page;
+	int *count_ptr;
+	int frag_off, frag_len;
+	int page_offset;
+	int i;
+	unsigned int sz;
+	struct sk_buff *skb;
+
+	vaddr = phys_to_virt(addr);
+	WARN_ON(!IS_ALIGNED((unsigned long)vaddr, SMP_CACHE_BYTES));
+
+	/* Iterate through the SGT entries and add data buffers to the skb */
+	sgt = vaddr + fd_off;
+	for (i = 0; i < DPAA_SGT_MAX_ENTRIES; i++) {
+		/* Extension bit is not supported */
+		WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
+
+		sg_addr = qm_sg_addr(&sgt[i]);
+		sg_vaddr = phys_to_virt(sg_addr);
+		WARN_ON(!IS_ALIGNED((unsigned long)sg_vaddr,
+				    SMP_CACHE_BYTES));
+
+		/* We may use multiple Rx pools */
+		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
+		if (!dpaa_bp)
+			goto free_buffers;
+
+		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
+		dma_unmap_single(dpaa_bp->dev, sg_addr, dpaa_bp->size,
+				 DMA_FROM_DEVICE);
+		if (i == 0) {
+			sz = dpaa_bp->size +
+				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+			skb = build_skb(sg_vaddr, sz);
+			if (WARN_ON(unlikely(!skb)))
+				goto free_buffers;
+
+			skb->ip_summed = CHECKSUM_NONE;
+
+			/* Make sure forwarded skbs will have enough space
+			 * on Tx, if extra headers are added.
+			 */
+			WARN_ON(fd_off != priv->rx_headroom);
+			skb_reserve(skb, fd_off);
+			skb_put(skb, qm_sg_entry_get_len(&sgt[i]));
+		} else {
+			/* Not the first S/G entry; all data from buffer will
+			 * be added in an skb fragment; fragment index is offset
+			 * by one since first S/G entry was incorporated in the
+			 * linear part of the skb.
+			 *
+			 * Caution: 'page' may be a tail page.
+			 */
+			page = virt_to_page(sg_vaddr);
+			head_page = virt_to_head_page(sg_vaddr);
+
+			/* Compute offset in (possibly tail) page */
+			page_offset = ((unsigned long)sg_vaddr &
+					(PAGE_SIZE - 1)) +
+				(page_address(page) - page_address(head_page));
+			/* page_offset only refers to the beginning of sgt[i];
+			 * but the buffer itself may have an internal offset.
+			 */
+			frag_off = qm_sg_entry_get_off(&sgt[i]) + page_offset;
+			frag_len = qm_sg_entry_get_len(&sgt[i]);
+			/* skb_add_rx_frag() does no checking on the page; if
+			 * we pass it a tail page, we'll end up with
+			 * bad page accounting and eventually with segfaults.
+			 */
+			skb_add_rx_frag(skb, i - 1, head_page, frag_off,
+					frag_len, dpaa_bp->size);
+		}
+		/* Update the pool count for the current {cpu x bpool} */
+		(*count_ptr)--;
+
+		if (qm_sg_entry_is_final(&sgt[i]))
+			break;
+	}
+	WARN_ONCE(i == DPAA_SGT_MAX_ENTRIES, "No final bit on SGT\n");
+
+	/* free the SG table buffer */
+	skb_free_frag(vaddr);
+
+	return skb;
+
+free_buffers:
+	/* compensate sw bpool counter changes */
+	for (i--; i >= 0; i--) {
+		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
+		if (dpaa_bp) {
+			count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
+			(*count_ptr)++;
+		}
+	}
+	/* free all the SG entries */
+	for (i = 0; i < DPAA_SGT_MAX_ENTRIES; i++) {
+		sg_addr = qm_sg_addr(&sgt[i]);
+		sg_vaddr = phys_to_virt(sg_addr);
+		skb_free_frag(sg_vaddr);
+		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
+		if (dpaa_bp) {
+			count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
+			(*count_ptr)--;
+		}
+
+		if (qm_sg_entry_is_final(&sgt[i]))
+			break;
+	}
+	/* free the SGT fragment */
+	skb_free_frag(vaddr);
+
+	return NULL;
+}
+
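+/* Build a contiguous frame descriptor around a linear skb. The skb
+ * backpointer is stored at the very start of the Tx headroom so that the
+ * confirmation path can recover it; the FD offset then skips the headroom
+ * up to the actual frame data.
+ */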
+static int skb_to_contig_fd(struct dpaa_priv *priv,
+			    struct sk_buff *skb, struct qm_fd *fd,
+			    int *offset)
+{
+	struct sk_buff **skbh;
+	dma_addr_t addr;
+	struct net_device *net_dev = priv->net_dev;
+	struct device *dev = net_dev->dev.parent;
+	int err;
+	enum dma_data_direction dma_dir;
+	unsigned char *buffer_start;
+
+	/* We are guaranteed to have at least tx_headroom bytes
+	 * available, so just use that for offset.
+	 */
+	fd->bpid = FSL_DPAA_BPID_INV;
+	buffer_start = skb->data - priv->tx_headroom;
+	dma_dir = DMA_TO_DEVICE;
+
+	skbh = (struct sk_buff **)buffer_start;
+	*skbh = skb;
+
+	/* Enable L3/L4 hardware checksum computation.
+	 *
+	 * We must do this before dma_map_single(DMA_TO_DEVICE), because we may
+	 * need to write into the skb.
+	 */
+	err = dpaa_enable_tx_csum(priv, skb, fd,
+				  ((char *)skbh) + DPAA_TX_PRIV_DATA_SIZE);
+	if (unlikely(err < 0)) {
+		if (net_ratelimit())
+			netif_err(priv, tx_err, net_dev, "HW csum error: %d\n",
+				  err);
+		return err;
+	}
+
+	/* Fill in the rest of the FD fields */
+	qm_fd_set_contig(fd, priv->tx_headroom, skb->len);
+	fd->cmd |= FM_FD_CMD_FCO;
+
+	/* Map the entire buffer size that may be seen by FMan, but no more */
+	addr = dma_map_single(dev, skbh,
+			      skb_tail_pointer(skb) - buffer_start, dma_dir);
+	if (unlikely(dma_mapping_error(dev, addr))) {
+		if (net_ratelimit())
+			netif_err(priv, tx_err, net_dev, "dma_map_single() failed\n");
+		return -EINVAL;
+	}
+	qm_fd_addr_set64(fd, addr);
+
+	return 0;
+}
+
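+/* Build a scatter/gather frame descriptor: sgt[0] covers the linear part of
+ * the skb and each page fragment gets its own SGT entry. The SGT itself is
+ * placed in a separately allocated page frag that also stores the skb
+ * backpointer in its headroom.
+ */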
+static int skb_to_sg_fd(struct dpaa_priv *priv,
+			struct sk_buff *skb, struct qm_fd *fd)
+{
+	dma_addr_t addr;
+	struct sk_buff **skbh;
+	struct net_device *net_dev = priv->net_dev;
+	struct device *dev = net_dev->dev.parent;
+	struct qm_sg_entry *sgt;
+	void *sgt_buf;
+	void *buffer_start;
+	skb_frag_t *frag;
+	size_t frag_len;
+	int i, j, err, sz;
+	const enum dma_data_direction dma_dir = DMA_TO_DEVICE;
+	const int nr_frags = skb_shinfo(skb)->nr_frags;
+
+	/* get a page frag to store the SGTable */
+	sz = SKB_DATA_ALIGN(priv->tx_headroom +
+		sizeof(struct qm_sg_entry) * (1 + nr_frags));
+	sgt_buf = netdev_alloc_frag(sz);
+	if (unlikely(!sgt_buf)) {
+		netdev_err(net_dev, "netdev_alloc_frag() failed for size %d\n",
+			   sz);
+		return -ENOMEM;
+	}
+
+	/* Enable L3/L4 hardware checksum computation.
+	 *
+	 * We must do this before dma_map_single(DMA_TO_DEVICE), because we may
+	 * need to write into the skb.
+	 */
+	err = dpaa_enable_tx_csum(priv, skb, fd,
+				  sgt_buf + DPAA_TX_PRIV_DATA_SIZE);
+	if (unlikely(err < 0)) {
+		if (net_ratelimit())
+			netif_err(priv, tx_err, net_dev, "HW csum error: %d\n",
+				  err);
+		goto csum_failed;
+	}
+
+	sgt = (struct qm_sg_entry *)(sgt_buf + priv->tx_headroom);
+	qm_sg_entry_set_len(&sgt[0], skb_headlen(skb));
+	sgt[0].bpid = FSL_DPAA_BPID_INV;
+	sgt[0].offset = 0;
+	addr = dma_map_single(dev, skb->data,
+			      skb_headlen(skb), dma_dir);
+	if (unlikely(dma_mapping_error(dev, addr))) {
+		dev_err(dev, "DMA mapping failed\n");
+		err = -EINVAL;
+		goto sg0_map_failed;
+	}
+	qm_sg_entry_set64(&sgt[0], addr);
+
+	/* populate the rest of SGT entries */
+	frag = &skb_shinfo(skb)->frags[0];
+	for (i = 1; i <= nr_frags; i++, frag++) {
+		/* read the current fragment's length before mapping it */
+		frag_len = frag->size;
+		WARN_ON(!skb_frag_page(frag));
+		addr = skb_frag_dma_map(dev, frag, 0,
+					frag_len, dma_dir);
+		if (unlikely(dma_mapping_error(dev, addr))) {
+			dev_err(dev, "DMA mapping failed\n");
+			err = -EINVAL;
+			goto sg_map_failed;
+		}
+
+		qm_sg_entry_set_len(&sgt[i], frag_len);
+		sgt[i].bpid = FSL_DPAA_BPID_INV;
+		sgt[i].offset = 0;
+
+		/* keep the offset in the address */
+		qm_sg_entry_set64(&sgt[i], addr);
+	}
+	qm_sg_entry_set_f(&sgt[i - 1], frag_len);
+
+	qm_fd_set_sg(fd, priv->tx_headroom, skb->len);
+
+	/* DMA map the SGT page */
+	buffer_start = (void *)sgt - priv->tx_headroom;
+	skbh = (struct sk_buff **)buffer_start;
+	*skbh = skb;
+
+	addr = dma_map_single(dev, buffer_start, priv->tx_headroom +
+			      sizeof(struct qm_sg_entry) * (1 + nr_frags),
+			      dma_dir);
+	if (unlikely(dma_mapping_error(dev, addr))) {
+		dev_err(dev, "DMA mapping failed\n");
+		err = -EINVAL;
+		goto sgt_map_failed;
+	}
+
+	fd->bpid = FSL_DPAA_BPID_INV;
+	fd->cmd |= FM_FD_CMD_FCO;
+	qm_fd_addr_set64(fd, addr);
+
+	return 0;
+
+sgt_map_failed:
+sg_map_failed:
+	for (j = 0; j < i; j++)
+		dma_unmap_page(dev, qm_sg_addr(&sgt[j]),
+			       qm_sg_entry_get_len(&sgt[j]), dma_dir);
+sg0_map_failed:
+csum_failed:
+	skb_free_frag(sgt_buf);
+
+	return err;
+}
+
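+/* Enqueue the frame on the egress FQ selected by the queue mapping, retrying
+ * a bounded number of times if the QMan enqueue ring is temporarily full.
+ * For frames not backed by a buffer pool, the Tx confirmation FQID is stored
+ * in fd->cmd so the confirmation comes back on the right queue.
+ */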
+static inline int dpaa_xmit(struct dpaa_priv *priv,
+			    struct rtnl_link_stats64 *percpu_stats,
+			    int queue,
+			    struct qm_fd *fd)
+{
+	int err, i;
+	struct qman_fq *egress_fq;
+
+	egress_fq = priv->egress_fqs[queue];
+	if (fd->bpid == FSL_DPAA_BPID_INV)
+		fd->cmd |= qman_fq_fqid(priv->conf_fqs[queue]);
+
+	for (i = 0; i < DPAA_ENQUEUE_RETRIES; i++) {
+		err = qman_enqueue(egress_fq, fd);
+		if (err != -EBUSY)
+			break;
+	}
+
+	if (unlikely(err < 0)) {
+		percpu_stats->tx_errors++;
+		percpu_stats->tx_fifo_errors++;
+		return err;
+	}
+
+	percpu_stats->tx_packets++;
+	percpu_stats->tx_bytes += qm_fd_get_length(fd);
+
+	return 0;
+}
+
+static int dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
+{
+	struct dpaa_priv *priv;
+	struct qm_fd fd;
+	struct dpaa_percpu_priv *percpu_priv;
+	struct rtnl_link_stats64 *percpu_stats;
+	int err = 0;
+	const int queue_mapping = skb_get_queue_mapping(skb);
+	bool nonlinear = skb_is_nonlinear(skb);
+	int offset = 0;
+
+	priv = netdev_priv(net_dev);
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+	percpu_stats = &percpu_priv->stats;
+
+	qm_fd_clear_fd(&fd);
+
+	if (!nonlinear) {
+		/* We're going to store the skb backpointer at the beginning
+		 * of the data buffer, so we need a privately owned skb.
+		 *
+		 * Sharing is already ruled out via dev->priv_flags (the
+		 * IFF_TX_SKB_SHARING flag is not set); we still need to
+		 * verify the skb head is not cloned.
+		 */
+		if (skb_cow_head(skb, priv->tx_headroom))
+			goto enomem;
+
+		WARN_ON(skb_is_nonlinear(skb));
+	}
+
+	/* MAX_SKB_FRAGS is equal to or larger than DPAA_SGT_MAX_ENTRIES;
+	 * make sure we don't feed FMan with more fragments than it supports.
+	 */
+	if (nonlinear &&
+	    likely(skb_shinfo(skb)->nr_frags < DPAA_SGT_MAX_ENTRIES)) {
+		/* Just create a S/G fd based on the skb */
+		err = skb_to_sg_fd(priv, skb, &fd);
+	} else {
+		/* If the egress skb contains more fragments than we support
+		 * we have no choice but to linearize it ourselves.
+		 */
+		if (unlikely(nonlinear) && __skb_linearize(skb))
+			goto enomem;
+
+		/* Finally, create a contig FD from this skb */
+		err = skb_to_contig_fd(priv, skb, &fd, &offset);
+	}
+	if (unlikely(err < 0))
+		goto skb_to_fd_failed;
+
+	if (likely(dpaa_xmit(priv, percpu_stats, queue_mapping, &fd) == 0))
+		return NETDEV_TX_OK;
+
+	dpaa_cleanup_tx_fd(priv, &fd);
+skb_to_fd_failed:
+enomem:
+	percpu_stats->tx_errors++;
+	dev_kfree_skb(skb);
+	return NETDEV_TX_OK;
+}
+
+static void dpaa_rx_error(struct net_device *net_dev,
+			  const struct dpaa_priv *priv,
+			  struct dpaa_percpu_priv *percpu_priv,
+			  const struct qm_fd *fd,
+			  u32 fqid)
+{
+	if (net_ratelimit())
+		netif_err(priv, hw, net_dev, "Err FD status = 0x%08x\n",
+			  fd->status & FM_FD_STAT_RX_ERRORS);
+
+	percpu_priv->stats.rx_errors++;
+
+	dpaa_fd_release(net_dev, fd);
+}
+
+static void dpaa_tx_error(struct net_device *net_dev,
+			  const struct dpaa_priv *priv,
+			  struct dpaa_percpu_priv *percpu_priv,
+			  const struct qm_fd *fd,
+			  u32 fqid)
+{
+	struct sk_buff *skb;
+
+	if (net_ratelimit())
+		netif_warn(priv, hw, net_dev, "FD status = 0x%08x\n",
+			   fd->status & FM_FD_STAT_TX_ERRORS);
+
+	percpu_priv->stats.tx_errors++;
+
+	skb = dpaa_cleanup_tx_fd(priv, fd);
+	dev_kfree_skb(skb);
+}
+
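+/* NAPI poll: drain up to "budget" DQRR entries from the affine portal. If
+ * the budget was not exhausted, dequeue interrupts are re-enabled; they are
+ * also re-enabled when the interface is going down, so the portal is not
+ * left with its interrupt source masked while NAPI is disabled.
+ */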
+static int dpaa_eth_poll(struct napi_struct *napi, int budget)
+{
+	struct dpaa_napi_portal *np =
+			container_of(napi, struct dpaa_napi_portal, napi);
+
+	int cleaned = qman_p_poll_dqrr(np->p, budget);
+
+	if (cleaned < budget) {
+		napi_complete(napi);
+		qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
+	} else if (np->down) {
+		qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
+	}
+
+	return cleaned;
+}
+
+static void dpaa_tx_conf(struct net_device *net_dev,
+			 const struct dpaa_priv *priv,
+			 struct dpaa_percpu_priv *percpu_priv,
+			 const struct qm_fd *fd,
+			 u32 fqid)
+{
+	struct sk_buff	*skb;
+
+	if (unlikely((fd->status & FM_FD_STAT_TX_ERRORS) != 0)) {
+		if (net_ratelimit())
+			netif_warn(priv, hw, net_dev, "FD status = 0x%08x\n",
+				   fd->status & FM_FD_STAT_TX_ERRORS);
+
+		percpu_priv->stats.tx_errors++;
+	}
+
+	skb = dpaa_cleanup_tx_fd(priv, fd);
+
+	consume_skb(skb);
+}
+
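+/* When a DQRR callback runs in hard IRQ context (i.e. straight from the
+ * portal interrupt), mask further dequeue interrupts and hand processing
+ * over to NAPI; the frames are then drained from softirq context by
+ * dpaa_eth_poll().
+ */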
+static inline int dpaa_eth_napi_schedule(struct dpaa_percpu_priv *percpu_priv,
+					 struct qman_portal *portal)
+{
+	if (unlikely(in_irq() || !in_serving_softirq())) {
+		/* Disable QMan IRQ and invoke NAPI */
+		qman_p_irqsource_remove(portal, QM_PIRQ_DQRI);
+
+		percpu_priv->np.p = portal;
+		napi_schedule(&percpu_priv->np.napi);
+		return 1;
+	}
+	return 0;
+}
+
+static enum qman_cb_dqrr_result rx_error_dqrr(struct qman_portal *portal,
+					      struct qman_fq *fq,
+					      const struct qm_dqrr_entry *dq)
+{
+	struct net_device *net_dev;
+	struct dpaa_priv *priv;
+	struct dpaa_percpu_priv *percpu_priv;
+	struct dpaa_fq *dpaa_fq = container_of(fq, struct dpaa_fq, fq_base);
+	struct dpaa_bp *dpaa_bp;
+
+	net_dev = dpaa_fq->net_dev;
+	priv = netdev_priv(net_dev);
+	dpaa_bp = dpaa_bpid2pool(dq->fd.bpid);
+	if (!dpaa_bp)
+		return qman_cb_dqrr_consume;
+
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+
+	if (dpaa_eth_napi_schedule(percpu_priv, portal))
+		return qman_cb_dqrr_stop;
+
+	if (dpaa_eth_refill_bpools(priv))
+		/* Unable to refill the buffer pool due to insufficient
+		 * system memory. Just release the frame back into the pool,
+		 * otherwise we'll soon end up with an empty buffer pool.
+		 */
+		dpaa_fd_release(net_dev, &dq->fd);
+	else
+		dpaa_rx_error(net_dev, priv, percpu_priv, &dq->fd, fq->fqid);
+
+	return qman_cb_dqrr_consume;
+}
+
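+/* Default Rx path: replenish the buffer pools, then turn the dequeued FD
+ * (contiguous or S/G) into an skb and pass it up the stack. Runs from NAPI
+ * context once dpaa_eth_napi_schedule() has handed over from the portal IRQ.
+ */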
+static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
+						struct qman_fq *fq,
+						const struct qm_dqrr_entry *dq)
+{
+	struct net_device *net_dev;
+	struct dpaa_priv *priv;
+	struct dpaa_percpu_priv *percpu_priv;
+	const struct qm_fd *fd = &dq->fd;
+	dma_addr_t addr = qm_fd_addr(fd);
+	u32 fd_status = fd->status;
+	enum qm_fd_format fd_format = qm_fd_get_format(fd);
+	unsigned int skb_len;
+	struct rtnl_link_stats64 *percpu_stats;
+	struct dpaa_bp *dpaa_bp;
+	struct sk_buff *skb;
+	int *count_ptr;
+
+	net_dev = ((struct dpaa_fq *)fq)->net_dev;
+	priv = netdev_priv(net_dev);
+	dpaa_bp = dpaa_bpid2pool(dq->fd.bpid);
+	if (!dpaa_bp)
+		return qman_cb_dqrr_consume;
+
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+	percpu_stats = &percpu_priv->stats;
+
+	if (unlikely(dpaa_eth_napi_schedule(percpu_priv, portal)))
+		return qman_cb_dqrr_stop;
+
+	/* Make sure we didn't run out of buffers */
+	if (unlikely(dpaa_eth_refill_bpools(priv))) {
+		/* Unable to refill the buffer pool due to insufficient
+		 * system memory. Just release the frame back into the pool,
+		 * otherwise we'll soon end up with an empty buffer pool.
+		 */
+		dpaa_fd_release(net_dev, &dq->fd);
+		return qman_cb_dqrr_consume;
+	}
+
+	if (unlikely((fd_status & FM_FD_STAT_RX_ERRORS) != 0)) {
+		if (net_ratelimit())
+			netif_warn(priv, hw, net_dev, "FD status = 0x%08x\n",
+				   fd_status & FM_FD_STAT_RX_ERRORS);
+
+		percpu_stats->rx_errors++;
+		dpaa_fd_release(net_dev, fd);
+		return qman_cb_dqrr_consume;
+	}
+
+	dpaa_bp = dpaa_bpid2pool(fd->bpid);
+	if (!dpaa_bp)
+		return qman_cb_dqrr_consume;
+
+	dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size, DMA_FROM_DEVICE);
+
+	/* prefetch the first 64 bytes of the frame or the SGT start */
+	prefetch(phys_to_virt(addr) + qm_fd_get_offset(fd));
+
+	/* The only FD types that we may receive are contig and S/G */
+	WARN_ON((fd_format != qm_fd_contig) && (fd_format != qm_fd_sg));
+
+	/* Account for either the contig buffer or the SGT buffer (depending on
+	 * which case we were in) having been removed from the pool.
+	 */
+	count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
+	(*count_ptr)--;
+
+	if (likely(fd_format == qm_fd_contig))
+		skb = contig_fd_to_skb(priv, fd);
+	else
+		skb = sg_fd_to_skb(priv, fd);
+	if (!skb)
+		return qman_cb_dqrr_consume;
+
+	skb->protocol = eth_type_trans(skb, net_dev);
+
+	skb_len = skb->len;
+
+	if (unlikely(netif_receive_skb(skb) == NET_RX_DROP))
+		return qman_cb_dqrr_consume;
+
+	percpu_stats->rx_packets++;
+	percpu_stats->rx_bytes += skb_len;
+
+	return qman_cb_dqrr_consume;
+}
+
+static enum qman_cb_dqrr_result conf_error_dqrr(struct qman_portal *portal,
+						struct qman_fq *fq,
+						const struct qm_dqrr_entry *dq)
+{
+	struct net_device *net_dev;
+	struct dpaa_priv *priv;
+	struct dpaa_percpu_priv *percpu_priv;
+
+	net_dev = ((struct dpaa_fq *)fq)->net_dev;
+	priv = netdev_priv(net_dev);
+
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+
+	if (dpaa_eth_napi_schedule(percpu_priv, portal))
+		return qman_cb_dqrr_stop;
+
+	dpaa_tx_error(net_dev, priv, percpu_priv, &dq->fd, fq->fqid);
+
+	return qman_cb_dqrr_consume;
+}
+
+static enum qman_cb_dqrr_result conf_dflt_dqrr(struct qman_portal *portal,
+					       struct qman_fq *fq,
+					       const struct qm_dqrr_entry *dq)
+{
+	struct net_device *net_dev;
+	struct dpaa_priv *priv;
+	struct dpaa_percpu_priv *percpu_priv;
+
+	net_dev = ((struct dpaa_fq *)fq)->net_dev;
+	priv = netdev_priv(net_dev);
+
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+
+	if (dpaa_eth_napi_schedule(percpu_priv, portal))
+		return qman_cb_dqrr_stop;
+
+	dpaa_tx_conf(net_dev, priv, percpu_priv, &dq->fd, fq->fqid);
+
+	return qman_cb_dqrr_consume;
+}
+
+static void egress_ern(struct qman_portal *portal,
+		       struct qman_fq *fq,
+		       const union qm_mr_entry *msg)
+{
+	struct net_device *net_dev;
+	const struct dpaa_priv *priv;
+	struct sk_buff *skb;
+	struct dpaa_percpu_priv *percpu_priv;
+	const struct qm_fd *fd = &msg->ern.fd;
+
+	net_dev = ((struct dpaa_fq *)fq)->net_dev;
+	priv = netdev_priv(net_dev);
+	percpu_priv = this_cpu_ptr(priv->percpu_priv);
+
+	percpu_priv->stats.tx_dropped++;
+	percpu_priv->stats.tx_fifo_errors++;
+
+	skb = dpaa_cleanup_tx_fd(priv, fd);
+	dev_kfree_skb_any(skb);
+}
+
+static const struct dpaa_fq_cbs dpaa_fq_cbs = {
+	.rx_defq = { .cb = { .dqrr = rx_default_dqrr } },
+	.tx_defq = { .cb = { .dqrr = conf_dflt_dqrr } },
+	.rx_errq = { .cb = { .dqrr = rx_error_dqrr } },
+	.tx_errq = { .cb = { .dqrr = conf_error_dqrr } },
+	.egress_ern = { .cb = { .ern = egress_ern } }
+};
+
+static void dpaa_eth_napi_enable(struct dpaa_priv *priv)
+{
+	struct dpaa_percpu_priv *percpu_priv;
+	int i;
+
+	for_each_possible_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+
+		percpu_priv->np.down = 0;
+		napi_enable(&percpu_priv->np.napi);
+	}
+}
+
+static void dpaa_eth_napi_disable(struct dpaa_priv *priv)
+{
+	struct dpaa_percpu_priv *percpu_priv;
+	int i;
+
+	for_each_possible_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+
+		percpu_priv->np.down = 1;
+		napi_disable(&percpu_priv->np.napi);
+	}
+}
+
+static int dpaa_open(struct net_device *net_dev)
+{
+	struct dpaa_priv *priv;
+	struct mac_device *mac_dev;
+	int err, i;
+
+	priv = netdev_priv(net_dev);
+	mac_dev = priv->mac_dev;
+	dpaa_eth_napi_enable(priv);
+
+	net_dev->phydev = mac_dev->init_phy(net_dev, priv->mac_dev);
+	if (!net_dev->phydev) {
+		netif_err(priv, ifup, net_dev, "init_phy() failed\n");
+		err = -ENODEV;
+		goto phy_init_failed;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++) {
+		err = fman_port_enable(mac_dev->port[i]);
+		if (err)
+			goto mac_start_failed;
+	}
+
+	err = priv->mac_dev->start(mac_dev);
+	if (err < 0) {
+		netif_err(priv, ifup, net_dev, "mac_dev->start() = %d\n", err);
+		goto mac_start_failed;
+	}
+
+	netif_tx_start_all_queues(net_dev);
+
+	return 0;
+
+mac_start_failed:
+	for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++)
+		fman_port_disable(mac_dev->port[i]);
+
+phy_init_failed:
+	dpaa_eth_napi_disable(priv);
+
+	return err;
+}
+
+static int dpaa_eth_stop(struct net_device *net_dev)
+{
+	struct dpaa_priv *priv;
+	int err;
+
+	err = dpaa_stop(net_dev);
+
+	priv = netdev_priv(net_dev);
+	dpaa_eth_napi_disable(priv);
+
+	return err;
+}
+
+static const struct net_device_ops dpaa_ops = {
+	.ndo_open = dpaa_open,
+	.ndo_start_xmit = dpaa_start_xmit,
+	.ndo_stop = dpaa_eth_stop,
+	.ndo_tx_timeout = dpaa_tx_timeout,
+	.ndo_get_stats64 = dpaa_get_stats64,
+	.ndo_set_mac_address = dpaa_set_mac_address,
+	.ndo_validate_addr = eth_validate_addr,
+	.ndo_change_mtu = dpaa_change_mtu,
+	.ndo_set_rx_mode = dpaa_set_rx_mode,
+	.ndo_set_features = dpaa_set_features,
+	.ndo_fix_features = dpaa_fix_features,
+};
+
+static int dpaa_napi_add(struct net_device *net_dev)
+{
+	struct dpaa_priv *priv = netdev_priv(net_dev);
+	struct dpaa_percpu_priv *percpu_priv;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, cpu);
+
+		netif_napi_add(net_dev, &percpu_priv->np.napi,
+			       dpaa_eth_poll, NAPI_POLL_WEIGHT);
+	}
+
+	return 0;
+}
+
+static void dpaa_napi_del(struct net_device *net_dev)
+{
+	struct dpaa_priv *priv = netdev_priv(net_dev);
+	struct dpaa_percpu_priv *percpu_priv;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, cpu);
+
+		netif_napi_del(&percpu_priv->np.napi);
+	}
+}
+
+static inline void dpaa_bp_free_pf(const struct dpaa_bp *bp,
+				   struct bm_buffer *bmb)
+{
+	dma_addr_t addr = bm_buf_addr(bmb);
+
+	dma_unmap_single(bp->dev, addr, bp->size, DMA_FROM_DEVICE);
+
+	skb_free_frag(phys_to_virt(addr));
+}
+
+/* Alloc the dpaa_bp struct and configure default values */
+static struct dpaa_bp *dpaa_bp_alloc(struct device *dev)
+{
+	struct dpaa_bp *dpaa_bp;
+
+	dpaa_bp = devm_kzalloc(dev, sizeof(*dpaa_bp), GFP_KERNEL);
+	if (!dpaa_bp)
+		return ERR_PTR(-ENOMEM);
+
+	dpaa_bp->bpid = FSL_DPAA_BPID_INV;
+	dpaa_bp->percpu_count = devm_alloc_percpu(dev, *dpaa_bp->percpu_count);
+	if (!dpaa_bp->percpu_count)
+		return ERR_PTR(-ENOMEM);
+	dpaa_bp->config_count = FSL_DPAA_ETH_MAX_BUF_COUNT;
+
+	dpaa_bp->seed_cb = dpaa_bp_seed;
+	dpaa_bp->free_buf_cb = dpaa_bp_free_pf;
+
+	return dpaa_bp;
+}
+
+/* Place all ingress FQs (Rx Default, Rx Error) in a dedicated CGR.
+ * We won't be sending congestion notifications to FMan; for now, we just use
+ * this CGR to generate enqueue rejections to FMan in order to drop the frames
+ * before they reach our ingress queues and eat up memory.
+ */
+static int dpaa_ingress_cgr_init(struct dpaa_priv *priv)
+{
+	struct qm_mcc_initcgr initcgr;
+	u32 cs_th;
+	int err;
+
+	err = qman_alloc_cgrid(&priv->ingress_cgr.cgrid);
+	if (err < 0) {
+		if (netif_msg_drv(priv))
+			pr_err("Error %d allocating CGR ID\n", err);
+		goto out_error;
+	}
+
+	memset(&initcgr, 0, sizeof(initcgr));
+
+	/* Enable CS TD, but disable Congestion State Change Notifications. */
+	initcgr.we_mask = QM_CGR_WE_CS_THRES;
+	initcgr.cgr.cscn_en = QM_CGR_EN;
+	cs_th = DPAA_INGRESS_CS_THRESHOLD;
+	qm_cgr_cs_thres_set64(&initcgr.cgr.cs_thres, cs_th, 1);
+
+	initcgr.we_mask |= QM_CGR_WE_CSTD_EN;
+	initcgr.cgr.cstd_en = QM_CGR_EN;
+
+	/* This CGR will be associated with the SWP affined to the current CPU.
+	 * However, we'll place all our ingress FQs in it.
+	 */
+	err = qman_create_cgr(&priv->ingress_cgr, QMAN_CGR_FLAG_USE_INIT,
+			      &initcgr);
+	if (err < 0) {
+		if (netif_msg_drv(priv))
+			pr_err("Error %d creating ingress CGR with ID %d\n",
+			       err, priv->ingress_cgr.cgrid);
+		qman_release_cgrid(priv->ingress_cgr.cgrid);
+		goto out_error;
+	}
+	if (netif_msg_drv(priv))
+		pr_debug("Created ingress CGR %d for netdev with hwaddr %pM\n",
+			 priv->ingress_cgr.cgrid, priv->mac_dev->addr);
+
+	priv->use_ingress_cgr = true;
+
+out_error:
+	return err;
+}
+
+static const struct of_device_id dpaa_match[];
+
+static inline u16 dpaa_get_headroom(struct dpaa_buffer_layout *bl)
+{
+	u16 headroom;
+	/* The frame headroom must accommodate:
+	 * - the driver private data area
+	 * - parse results, hash results, timestamp if selected
+	 * If either hash results or time stamp are selected, both will
+	 * be copied to/from the frame headroom, as TS is located between PR and
+	 * HR in the IC, and the IC copy size has a granularity of 16 bytes
+	 * (see description of FMBM_RICP and FMBM_TICP registers in DPAARM)
+	 *
+	 * Also make sure the headroom is a multiple of data_align bytes
+	 */
+	headroom = (u16)(bl->priv_data_size + DPAA_PARSE_RESULTS_SIZE +
+		DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE);
+
+	return DPAA_FD_DATA_ALIGNMENT ? ALIGN(headroom,
+					      DPAA_FD_DATA_ALIGNMENT) :
+					headroom;
+}
+
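+/* Probe one DPAA interface: allocate the net device, configure DMA, create
+ * the buffer pools and frame queues, bind them to a pool channel and to the
+ * egress/ingress CGRs, set up per-CPU state and NAPI, then register the
+ * net device.
+ */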
+static int dpaa_eth_probe(struct platform_device *pdev)
+{
+	int err = 0, i, channel;
+	struct device *dev;
+	struct dpaa_bp *dpaa_bps[DPAA_BPS_NUM] = {NULL};
+	struct dpaa_fq *dpaa_fq, *tmp;
+	struct net_device *net_dev = NULL;
+	struct dpaa_priv *priv = NULL;
+	struct dpaa_percpu_priv *percpu_priv;
+	struct fm_port_fqs port_fqs;
+	struct mac_device *mac_dev;
+
+	dev = &pdev->dev;
+
+	/* Allocate this early, so we can store relevant information in
+	 * the private area
+	 */
+	net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA_ETH_TXQ_NUM);
+	if (!net_dev) {
+		dev_err(dev, "alloc_etherdev_mq() failed\n");
+		err = -ENOMEM;
+		goto alloc_etherdev_mq_failed;
+	}
+
+#ifdef CONFIG_FSL_DPAA_ETH_FRIENDLY_IF_NAME
+	snprintf(net_dev->name, IFNAMSIZ, "fm%d-mac%d",
+		 dpaa_mac_fman_index_get(pdev),
+		 dpaa_mac_hw_index_get(pdev));
+#endif
+
+	/* Do this here, so we can be verbose early */
+	SET_NETDEV_DEV(net_dev, dev);
+	dev_set_drvdata(dev, net_dev);
+
+	priv = netdev_priv(net_dev);
+	priv->net_dev = net_dev;
+
+	priv->msg_enable = netif_msg_init(debug, DPAA_MSG_DEFAULT);
+
+	mac_dev = dpaa_mac_dev_get(pdev);
+	if (IS_ERR(mac_dev)) {
+		dev_err(dev, "dpaa_mac_dev_get() failed\n");
+		err = PTR_ERR(mac_dev);
+		goto mac_probe_failed;
+	}
+
+	/* If fsl_fm_max_frm is set to a higher value than the all-common 1500,
+	 * we choose conservatively and let the user explicitly set a higher
+	 * MTU via ifconfig. Otherwise, the user may end up with different MTUs
+	 * in the same LAN.
+	 * If on the other hand fsl_fm_max_frm has been chosen below 1500,
+	 * start with the maximum allowed.
+	 */
+	net_dev->mtu = min(dpaa_get_max_mtu(), ETH_DATA_LEN);
+
+	netdev_dbg(net_dev, "Setting initial MTU on net device: %d\n",
+		   net_dev->mtu);
+
+	priv->buf_layout[RX].priv_data_size = DPAA_RX_PRIV_DATA_SIZE; /* Rx */
+	priv->buf_layout[TX].priv_data_size = DPAA_TX_PRIV_DATA_SIZE; /* Tx */
+
+	/* device used for DMA mapping */
+	arch_setup_dma_ops(dev, 0, 0, NULL, false);
+	err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(40));
+	if (err) {
+		dev_err(dev, "dma_coerce_mask_and_coherent() failed\n");
+		goto dev_mask_failed;
+	}
+
+	/* bp init */
+	for (i = 0; i < DPAA_BPS_NUM; i++) {
+		dpaa_bps[i] = dpaa_bp_alloc(dev);
+		if (IS_ERR(dpaa_bps[i])) {
+			err = PTR_ERR(dpaa_bps[i]);
+			dpaa_bps_free(priv);
+			goto bp_create_failed;
+		}
+		/* the raw size of the buffers used for reception */
+		dpaa_bps[i]->raw_size = bpool_buffer_raw_size(i, DPAA_BPS_NUM);
+		/* avoid runtime computations by keeping the usable size here */
+		dpaa_bps[i]->size = dpaa_bp_size(dpaa_bps[i]->raw_size);
+		dpaa_bps[i]->dev = dev;
+
+		err = dpaa_bp_alloc_pool(dpaa_bps[i]);
+		if (err < 0) {
+			dpaa_bps_free(priv);
+			goto bp_create_failed;
+		}
+		priv->dpaa_bps[i] = dpaa_bps[i];
+	}
+
+	INIT_LIST_HEAD(&priv->dpaa_fq_list);
+
+	memset(&port_fqs, 0, sizeof(port_fqs));
+
+	err = dpaa_alloc_all_fqs(dev, &priv->dpaa_fq_list, &port_fqs);
+	if (err < 0) {
+		dev_err(dev, "dpaa_alloc_all_fqs() failed\n");
+		goto fq_probe_failed;
+	}
+
+	priv->mac_dev = mac_dev;
+
+	channel = dpaa_get_channel();
+	if (channel < 0) {
+		dev_err(dev, "dpaa_get_channel() failed\n");
+		err = channel;
+		goto get_channel_failed;
+	}
+
+	priv->channel = (u16)channel;
+
+	/* Start a thread that will walk the CPUs with affine portals
+	 * and add this pool channel to each's dequeue mask.
+	 */
+	dpaa_eth_add_channel(priv->channel);
+
+	dpaa_fq_setup(priv, &dpaa_fq_cbs, priv->mac_dev->port[TX]);
+
+	/* Create a congestion group for this netdev, with
+	 * dynamically-allocated CGR ID.
+	 * Must be executed after probing the MAC, but before
+	 * assigning the egress FQs to the CGRs.
+	 */
+	err = dpaa_eth_cgr_init(priv);
+	if (err < 0) {
+		dev_err(dev, "Error initializing CGR\n");
+		goto tx_cgr_init_failed;
+	}
+
+	err = dpaa_ingress_cgr_init(priv);
+	if (err < 0) {
+		dev_err(dev, "Error initializing ingress CGR\n");
+		goto rx_cgr_init_failed;
+	}
+
+	/* Add the FQs to the interface, and make them active */
+	list_for_each_entry_safe(dpaa_fq, tmp, &priv->dpaa_fq_list, list) {
+		err = dpaa_fq_init(dpaa_fq, false);
+		if (err < 0)
+			goto fq_alloc_failed;
+	}
+
+	priv->tx_headroom = dpaa_get_headroom(&priv->buf_layout[TX]);
+	priv->rx_headroom = dpaa_get_headroom(&priv->buf_layout[RX]);
+
+	/* All real interfaces need their ports initialized */
+	dpaa_eth_init_ports(mac_dev, dpaa_bps, DPAA_BPS_NUM, &port_fqs,
+			    &priv->buf_layout[0], dev);
+
+	priv->percpu_priv = devm_alloc_percpu(dev, *priv->percpu_priv);
+	if (!priv->percpu_priv) {
+		dev_err(dev, "devm_alloc_percpu() failed\n");
+		err = -ENOMEM;
+		goto alloc_percpu_failed;
+	}
+	for_each_possible_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+		memset(percpu_priv, 0, sizeof(*percpu_priv));
+	}
+
+	/* Initialize NAPI */
+	err = dpaa_napi_add(net_dev);
+	if (err < 0)
+		goto napi_add_failed;
+
+	err = dpaa_netdev_init(net_dev, &dpaa_ops, tx_timeout);
+	if (err < 0)
+		goto netdev_init_failed;
+
+	netif_info(priv, probe, net_dev, "Probed interface %s\n",
+		   net_dev->name);
+
+	return 0;
+
+netdev_init_failed:
+napi_add_failed:
+	dpaa_napi_del(net_dev);
+alloc_percpu_failed:
+	dpaa_fq_free(dev, &priv->dpaa_fq_list);
+fq_alloc_failed:
+	qman_delete_cgr_safe(&priv->ingress_cgr);
+	qman_release_cgrid(priv->ingress_cgr.cgrid);
+rx_cgr_init_failed:
+	qman_delete_cgr_safe(&priv->cgr_data.cgr);
+	qman_release_cgrid(priv->cgr_data.cgr.cgrid);
+tx_cgr_init_failed:
+get_channel_failed:
+	dpaa_bps_free(priv);
+bp_create_failed:
+fq_probe_failed:
+dev_mask_failed:
+mac_probe_failed:
+	dev_set_drvdata(dev, NULL);
+	free_netdev(net_dev);
+alloc_etherdev_mq_failed:
+	for (i = 0; i < DPAA_BPS_NUM && dpaa_bps[i]; i++) {
+		if (atomic_read(&dpaa_bps[i]->refs) == 0)
+			devm_kfree(dev, dpaa_bps[i]);
+	}
+	return err;
+}
+
+static int dpaa_remove(struct platform_device *pdev)
+{
+	int err;
+	struct device *dev;
+	struct net_device *net_dev;
+	struct dpaa_priv *priv;
+
+	dev = &pdev->dev;
+	net_dev = dev_get_drvdata(dev);
+
+	priv = netdev_priv(net_dev);
+
+	dev_set_drvdata(dev, NULL);
+	unregister_netdev(net_dev);
+
+	err = dpaa_fq_free(dev, &priv->dpaa_fq_list);
+
+	qman_delete_cgr_safe(&priv->ingress_cgr);
+	qman_release_cgrid(priv->ingress_cgr.cgrid);
+	qman_delete_cgr_safe(&priv->cgr_data.cgr);
+	qman_release_cgrid(priv->cgr_data.cgr.cgrid);
+
+	dpaa_napi_del(net_dev);
+
+	dpaa_bps_free(priv);
+
+	free_netdev(net_dev);
+
+	return err;
+}
+
+static const struct platform_device_id dpaa_devtype[] = {
+	{
+		.name = "dpaa-ethernet",
+		.driver_data = 0,
+	}, {
+	}
+};
+MODULE_DEVICE_TABLE(platform, dpaa_devtype);
+
+static struct platform_driver dpaa_driver = {
+	.driver = {
+		.name = KBUILD_MODNAME,
+	},
+	.id_table = dpaa_devtype,
+	.probe = dpaa_eth_probe,
+	.remove = dpaa_remove
+};
+
+static int __init dpaa_load(void)
+{
+	int err;
+
+	pr_debug("FSL DPAA Ethernet driver\n");
+
+	/* initialize dpaa_eth mirror values */
+	dpaa_rx_extra_headroom = fman_get_rx_extra_headroom();
+	dpaa_max_frm = fman_get_max_frm();
+
+	err = platform_driver_register(&dpaa_driver);
+	if (err < 0)
+		pr_err("Error, platform_driver_register() = %d\n", err);
+
+	return err;
+}
+module_init(dpaa_load);
+
+static void __exit dpaa_unload(void)
+{
+	platform_driver_unregister(&dpaa_driver);
+
+	/* Only one channel is used and needs to be released after all
+	 * interfaces are removed
+	 */
+	dpaa_release_channel();
+}
+module_exit(dpaa_unload);
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_DESCRIPTION("FSL DPAA Ethernet driver");
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
new file mode 100644
index 0000000..fe98e08
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -0,0 +1,144 @@
+/* Copyright 2008 - 2016 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_H
+#define __DPAA_H
+
+#include <linux/netdevice.h>
+#include <soc/fsl/qman.h>
+#include <soc/fsl/bman.h>
+
+#include "fman.h"
+#include "mac.h"
+
+#define DPAA_ETH_TXQ_NUM	NR_CPUS
+
+#define DPAA_BPS_NUM 3 /* number of bpools per interface */
+
+/* More detailed FQ types - used for fine-grained WQ assignments */
+enum dpaa_fq_type {
+	FQ_TYPE_RX_DEFAULT = 1, /* Rx Default FQs */
+	FQ_TYPE_RX_ERROR,	/* Rx Error FQs */
+	FQ_TYPE_TX,		/* "Real" Tx FQs */
+	FQ_TYPE_TX_CONFIRM,	/* Tx default Conf FQ (actually an Rx FQ) */
+	FQ_TYPE_TX_CONF_MQ,	/* Tx conf FQs (one for each Tx FQ) */
+	FQ_TYPE_TX_ERROR,	/* Tx Error FQs (these are actually Rx FQs) */
+};
+
+struct dpaa_fq {
+	struct qman_fq fq_base;
+	struct list_head list;
+	struct net_device *net_dev;
+	bool init;
+	u32 fqid;
+	u32 flags;
+	u16 channel;
+	u8 wq;
+	enum dpaa_fq_type fq_type;
+};
+
+struct dpaa_fq_cbs {
+	struct qman_fq rx_defq;
+	struct qman_fq tx_defq;
+	struct qman_fq rx_errq;
+	struct qman_fq tx_errq;
+	struct qman_fq egress_ern;
+};
+
+struct dpaa_bp {
+	/* device used in the DMA mapping operations */
+	struct device *dev;
+	/* current number of buffers in the buffer pool allotted to each CPU */
+	int __percpu *percpu_count;
+	/* all buffers allocated for this pool have this raw size */
+	size_t raw_size;
+	/* all buffers in this pool have this same usable size */
+	size_t size;
+	/* the buffer pools are initialized with config_count buffers for each
+	 * CPU; at runtime the number of buffers per CPU is constantly brought
+	 * back to this level
+	 */
+	u16 config_count;
+	u8 bpid;
+	struct bman_pool *pool;
+	/* bpool can be seeded before use by this cb */
+	int (*seed_cb)(struct dpaa_bp *);
+	/* bpool can be emptied before freeing by this cb */
+	void (*free_buf_cb)(const struct dpaa_bp *, struct bm_buffer *);
+	atomic_t refs;
+};
+
+struct dpaa_napi_portal {
+	struct napi_struct napi;
+	struct qman_portal *p;
+	bool down;
+};
+
+struct dpaa_percpu_priv {
+	struct net_device *net_dev;
+	struct dpaa_napi_portal np;
+	struct rtnl_link_stats64 stats;
+};
+
+struct dpaa_buffer_layout {
+	u16 priv_data_size;
+};
+
+struct dpaa_priv {
+	struct dpaa_percpu_priv __percpu *percpu_priv;
+	struct dpaa_bp *dpaa_bps[DPAA_BPS_NUM];
+	/* Store here the needed Tx headroom for convenience and speed
+	 * (even though it can be computed based on the fields of buf_layout)
+	 */
+	u16 tx_headroom;
+	struct net_device *net_dev;
+	struct mac_device *mac_dev;
+	struct qman_fq *egress_fqs[DPAA_ETH_TXQ_NUM];
+	struct qman_fq *conf_fqs[DPAA_ETH_TXQ_NUM];
+
+	u16 channel;
+	struct list_head dpaa_fq_list;
+
+	u32 msg_enable;	/* net_device message level */
+
+	struct {
+		/* All egress queues to a given net device belong to one
+		 * (and the same) congestion group.
+		 */
+		struct qman_cgr cgr;
+	} cgr_data;
+	/* Use a per-port CGR for ingress traffic. */
+	bool use_ingress_cgr;
+	struct qman_cgr ingress_cgr;
+
+	struct dpaa_buffer_layout buf_layout[2];
+	u16 rx_headroom;
+};
+#endif	/* __DPAA_H */
-- 
2.1.0

* [PATCH net-next v6 03/10] dpaa_eth: add option to use one buffer pool set
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

The DPAA Ethernet interfaces each create their own set of buffer pools
by default. Add a Kconfig option that lets all the interfaces share one
common buffer pool set instead: the first interface probed creates the
pools and stores their bpids, and subsequent interfaces reuse them.

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/Kconfig    |  6 ++++++
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 23 +++++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/drivers/net/ethernet/freescale/dpaa/Kconfig b/drivers/net/ethernet/freescale/dpaa/Kconfig
index 670e039..308fc21 100644
--- a/drivers/net/ethernet/freescale/dpaa/Kconfig
+++ b/drivers/net/ethernet/freescale/dpaa/Kconfig
@@ -18,4 +18,10 @@ config FSL_DPAA_ETH_FRIENDLY_IF_NAME
 	  The DPAA Ethernet netdevices are created for each FMan port available
 	  on a certain board. Enable this to get interface names derived from
 	  the underlying FMan hardware for a simple identification.
+config FSL_DPAA_ETH_COMMON_BPOOL
+	bool "Use a common buffer pool set for all the interfaces"
+	---help---
+	  The DPAA Ethernet netdevices require buffer pools for storing the buffers
+	  used by the FMan hardware for reception. One can use a single buffer pool
+	  set for all interfaces or a dedicated buffer pool set for each interface.
 endif # FSL_DPAA_ETH
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 55e89b7..5e8c3df 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -158,6 +158,11 @@ struct fm_port_fqs {
 	struct dpaa_fq *rx_errq;
 };
 
+#ifdef CONFIG_FSL_DPAA_ETH_COMMON_BPOOL
+/* These bpools are shared by all the dpaa interfaces */
+static u8 dpaa_common_bpids[DPAA_BPS_NUM];
+#endif
+
 /* All the dpa bps in use at any moment */
 static struct dpaa_bp *dpaa_bp_array[BM_MAX_NUM_OF_POOLS];
 
@@ -2527,6 +2532,12 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 	for (i = 0; i < DPAA_BPS_NUM; i++) {
+#ifdef CONFIG_FSL_DPAA_ETH_COMMON_BPOOL
+		/* if another interface probed the bps, reuse those */
+		dpaa_bps[i] = (dpaa_common_bpids[i] != FSL_DPAA_BPID_INV) ?
+				dpaa_bpid2pool(dpaa_common_bpids[i]) : NULL;
+		if (!dpaa_bps[i]) {
+#endif
 		dpaa_bps[i] = dpaa_bp_alloc(dev);
 		if (IS_ERR(dpaa_bps[i])) {
 			err = PTR_ERR(dpaa_bps[i]);
@@ -2542,6 +2553,11 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 			dpaa_bps_free(priv);
 			goto bp_create_failed;
 		}
+#ifdef CONFIG_FSL_DPAA_ETH_COMMON_BPOOL
+		}
+		dpaa_common_bpids[i] = dpaa_bps[i]->bpid;
+		dpaa_bps[i] = dpaa_bpid2pool(dpaa_common_bpids[i]);
+#endif
 		priv->dpaa_bps[i] = dpaa_bps[i];
 	}
 
@@ -2716,6 +2732,13 @@ static int __init dpaa_load(void)
 	dpaa_rx_extra_headroom = fman_get_rx_extra_headroom();
 	dpaa_max_frm = fman_get_max_frm();
 
+#ifdef CONFIG_FSL_DPAA_ETH_COMMON_BPOOL
+	/* set initial invalid values, first interface probe will set correct
+	 * values that will be shared by the other interfaces
+	 */
+	memset(dpaa_common_bpids, FSL_DPAA_BPID_INV, sizeof(dpaa_common_bpids));
+#endif
+
 	err = platform_driver_register(&dpaa_driver);
 	if (err < 0)
 		pr_err("Error, platform_driver_register() = %d\n", err);
-- 
2.1.0

* [PATCH net-next v6 04/10] dpaa_eth: add ethtool functionality
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Add support for basic ethtool operations: link settings, driver info,
message level, pause parameters, autonegotiation restart and link
state reporting.

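For example, assuming the friendly interface naming option is enabled
(the interface name below is illustrative):

  ethtool -a fm1-mac1                              # query pause parameters
  ethtool -A fm1-mac1 rx on tx on                  # enable Rx/Tx pause
  ethtool -s fm1-mac1 speed 1000 duplex full autoneg on
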
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/Makefile       |   2 +-
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c     |   2 +
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h     |   3 +
 drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c | 218 +++++++++++++++++++++
 4 files changed, 224 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c

diff --git a/drivers/net/ethernet/freescale/dpaa/Makefile b/drivers/net/ethernet/freescale/dpaa/Makefile
index fc76029..43a4cfd 100644
--- a/drivers/net/ethernet/freescale/dpaa/Makefile
+++ b/drivers/net/ethernet/freescale/dpaa/Makefile
@@ -8,4 +8,4 @@ ccflags-y += -I$(FMAN)
 
 obj-$(CONFIG_FSL_DPAA_ETH) += fsl_dpa.o
 
-fsl_dpa-objs += dpaa_eth.o
+fsl_dpa-objs += dpaa_eth.o dpaa_ethtool.o
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 5e8c3df..681abf1 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -242,6 +242,8 @@ static int dpaa_netdev_init(struct net_device *net_dev,
 	memcpy(net_dev->perm_addr, mac_addr, net_dev->addr_len);
 	memcpy(net_dev->dev_addr, mac_addr, net_dev->addr_len);
 
+	net_dev->ethtool_ops = &dpaa_ethtool_ops;
+
 	net_dev->needed_headroom = priv->tx_headroom;
 	net_dev->watchdog_timeo = msecs_to_jiffies(tx_timeout);
 
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
index fe98e08..d6ab335 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -141,4 +141,7 @@ struct dpaa_priv {
 	struct dpaa_buffer_layout buf_layout[2];
 	u16 rx_headroom;
 };
+
+/* from dpaa_ethtool.c */
+extern const struct ethtool_ops dpaa_ethtool_ops;
 #endif	/* __DPAA_H */
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
new file mode 100644
index 0000000..f97f563
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
@@ -0,0 +1,218 @@
+/* Copyright 2008-2016 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/string.h>
+
+#include "dpaa_eth.h"
+#include "mac.h"
+
+static int dpaa_get_settings(struct net_device *net_dev,
+			     struct ethtool_cmd *et_cmd)
+{
+	int err;
+
+	if (!net_dev->phydev) {
+		netdev_dbg(net_dev, "phy device not initialized\n");
+		return 0;
+	}
+
+	err = phy_ethtool_gset(net_dev->phydev, et_cmd);
+
+	return err;
+}
+
+static int dpaa_set_settings(struct net_device *net_dev,
+			     struct ethtool_cmd *et_cmd)
+{
+	int err;
+
+	if (!net_dev->phydev) {
+		netdev_err(net_dev, "phy device not initialized\n");
+		return -ENODEV;
+	}
+
+	err = phy_ethtool_sset(net_dev->phydev, et_cmd);
+	if (err < 0)
+		netdev_err(net_dev, "phy_ethtool_sset() = %d\n", err);
+
+	return err;
+}
+
+static void dpaa_get_drvinfo(struct net_device *net_dev,
+			     struct ethtool_drvinfo *drvinfo)
+{
+	int len;
+
+	strlcpy(drvinfo->driver, KBUILD_MODNAME,
+		sizeof(drvinfo->driver));
+	snprintf(drvinfo->version, sizeof(drvinfo->version),
+		 "%X", 0);
+	len = snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
+		       "%X", 0);
+
+	if (len >= sizeof(drvinfo->fw_version)) {
+		/* Truncated output */
+		netdev_notice(net_dev, "snprintf() = %d\n", len);
+	}
+	strlcpy(drvinfo->bus_info, dev_name(net_dev->dev.parent->parent),
+		sizeof(drvinfo->bus_info));
+}
+
+static u32 dpaa_get_msglevel(struct net_device *net_dev)
+{
+	return ((struct dpaa_priv *)netdev_priv(net_dev))->msg_enable;
+}
+
+static void dpaa_set_msglevel(struct net_device *net_dev,
+			      u32 msg_enable)
+{
+	((struct dpaa_priv *)netdev_priv(net_dev))->msg_enable = msg_enable;
+}
+
+static int dpaa_nway_reset(struct net_device *net_dev)
+{
+	int err;
+
+	if (!net_dev->phydev) {
+		netdev_err(net_dev, "phy device not initialized\n");
+		return -ENODEV;
+	}
+
+	err = 0;
+	if (net_dev->phydev->autoneg) {
+		err = phy_start_aneg(net_dev->phydev);
+		if (err < 0)
+			netdev_err(net_dev, "phy_start_aneg() = %d\n",
+				   err);
+	}
+
+	return err;
+}
+
+static void dpaa_get_pauseparam(struct net_device *net_dev,
+				struct ethtool_pauseparam *epause)
+{
+	struct dpaa_priv *priv;
+	struct mac_device *mac_dev;
+
+	priv = netdev_priv(net_dev);
+	mac_dev = priv->mac_dev;
+
+	if (!net_dev->phydev) {
+		netdev_err(net_dev, "phy device not initialized\n");
+		return;
+	}
+
+	epause->autoneg = mac_dev->autoneg_pause;
+	epause->rx_pause = mac_dev->rx_pause_active;
+	epause->tx_pause = mac_dev->tx_pause_active;
+}
+
+static int dpaa_set_pauseparam(struct net_device *net_dev,
+			       struct ethtool_pauseparam *epause)
+{
+	struct dpaa_priv *priv;
+	struct mac_device *mac_dev;
+	struct phy_device *phydev;
+	int err;
+	u32 newadv, oldadv;
+	bool rx_pause, tx_pause;
+
+	priv = netdev_priv(net_dev);
+	mac_dev = priv->mac_dev;
+
+	phydev = net_dev->phydev;
+	if (!phydev) {
+		netdev_err(net_dev, "phy device not initialized\n");
+		return -ENODEV;
+	}
+
+	if (!(phydev->supported & SUPPORTED_Pause) ||
+	    (!(phydev->supported & SUPPORTED_Asym_Pause) &&
+	    (epause->rx_pause != epause->tx_pause)))
+		return -EINVAL;
+
+	/* The MAC should know how to handle PAUSE frame autonegotiation before
+	 * adjust_link is triggered by a forced renegotiation of sym/asym PAUSE
+	 * settings.
+	 */
+	mac_dev->autoneg_pause = !!epause->autoneg;
+	mac_dev->rx_pause_req = !!epause->rx_pause;
+	mac_dev->tx_pause_req = !!epause->tx_pause;
+
+	/* Determine the sym/asym advertised PAUSE capabilities from the desired
+	 * rx/tx pause settings.
+	 */
+	newadv = 0;
+	if (epause->rx_pause)
+		newadv = ADVERTISED_Pause | ADVERTISED_Asym_Pause;
+	if (epause->tx_pause)
+		newadv |= ADVERTISED_Asym_Pause;
+
+	oldadv = phydev->advertising &
+			(ADVERTISED_Pause | ADVERTISED_Asym_Pause);
+
+	/* If there are differences between the old and the new advertised
+	 * values, restart PHY autonegotiation and advertise the new values.
+	 */
+	if (oldadv != newadv) {
+		phydev->advertising &= ~(ADVERTISED_Pause
+				| ADVERTISED_Asym_Pause);
+		phydev->advertising |= newadv;
+		if (phydev->autoneg) {
+			err = phy_start_aneg(phydev);
+			if (err < 0)
+				netdev_err(net_dev, "phy_start_aneg() = %d\n",
+					   err);
+		}
+	}
+
+	fman_get_pause_cfg(mac_dev, &rx_pause, &tx_pause);
+	err = fman_set_mac_active_pause(mac_dev, rx_pause, tx_pause);
+	if (err < 0)
+		netdev_err(net_dev, "set_mac_active_pause() = %d\n", err);
+
+	return err;
+}
+
+const struct ethtool_ops dpaa_ethtool_ops = {
+	.get_settings = dpaa_get_settings,
+	.set_settings = dpaa_set_settings,
+	.get_drvinfo = dpaa_get_drvinfo,
+	.get_msglevel = dpaa_get_msglevel,
+	.set_msglevel = dpaa_set_msglevel,
+	.nway_reset = dpaa_nway_reset,
+	.get_pauseparam = dpaa_get_pauseparam,
+	.set_pauseparam = dpaa_set_pauseparam,
+	.get_link = ethtool_op_get_link,
+};
-- 
2.1.0

* [PATCH net-next v6 05/10] dpaa_eth: add ethtool statistics
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Add a series of counters to be exported through ethtool:
- add detailed counters for reception errors;
- add detailed counters for QMan enqueue reject events;
- count the number of fragmented skbs received from the stack;
- count all frames received on the Tx confirmation path;
- add congestion group statistics;
- count the number of interrupts for each CPU.

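The new counters can then be inspected with, e.g. (interface name
illustrative):

  ethtool -S fm1-mac1
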
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c     |  54 +++++-
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h     |  33 ++++
 drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c | 199 +++++++++++++++++++++
 3 files changed, 284 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 681abf1..3deb240 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -755,10 +755,15 @@ static void dpaa_eth_cgscn(struct qman_portal *qm, struct qman_cgr *cgr,
 	struct dpaa_priv *priv = (struct dpaa_priv *)container_of(cgr,
 		struct dpaa_priv, cgr_data.cgr);
 
-	if (congested)
+	if (congested) {
+		priv->cgr_data.congestion_start_jiffies = jiffies;
 		netif_tx_stop_all_queues(priv->net_dev);
-	else
+		priv->cgr_data.cgr_congested_count++;
+	} else {
+		priv->cgr_data.congested_jiffies +=
+			(jiffies - priv->cgr_data.congestion_start_jiffies);
 		netif_tx_wake_all_queues(priv->net_dev);
+	}
 }
 
 static int dpaa_eth_cgr_init(struct dpaa_priv *priv)
@@ -1273,6 +1278,37 @@ static void dpaa_fd_release(const struct net_device *net_dev,
 	dpaa_bman_release(dpaa_bp, &bmb, 1);
 }
 
+static void count_ern(struct dpaa_percpu_priv *percpu_priv,
+		      const union qm_mr_entry *msg)
+{
+	switch (msg->ern.rc & QM_MR_RC_MASK) {
+	case QM_MR_RC_CGR_TAILDROP:
+		percpu_priv->ern_cnt.cg_tdrop++;
+		break;
+	case QM_MR_RC_WRED:
+		percpu_priv->ern_cnt.wred++;
+		break;
+	case QM_MR_RC_ERROR:
+		percpu_priv->ern_cnt.err_cond++;
+		break;
+	case QM_MR_RC_ORPWINDOW_EARLY:
+		percpu_priv->ern_cnt.early_window++;
+		break;
+	case QM_MR_RC_ORPWINDOW_LATE:
+		percpu_priv->ern_cnt.late_window++;
+		break;
+	case QM_MR_RC_FQ_TAILDROP:
+		percpu_priv->ern_cnt.fq_tdrop++;
+		break;
+	case QM_MR_RC_ORPWINDOW_RETIRED:
+		percpu_priv->ern_cnt.fq_retired++;
+		break;
+	case QM_MR_RC_ORP_ZERO:
+		percpu_priv->ern_cnt.orp_zero++;
+		break;
+	}
+}
+
 /* Turn on HW checksum computation for this outgoing frame.
  * If the current protocol is not something we support in this regard
  * (or if the stack has already computed the SW checksum), we do nothing.
@@ -1937,6 +1973,7 @@ static int dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
 	    likely(skb_shinfo(skb)->nr_frags < DPAA_SGT_MAX_ENTRIES)) {
 		/* Just create a S/G fd based on the skb */
 		err = skb_to_sg_fd(priv, skb, &fd);
+		percpu_priv->tx_frag_skbuffs++;
 	} else {
 		/* If the egress skb contains more fragments than we support
 		 * we have no choice but to linearize it ourselves.
@@ -1973,6 +2010,15 @@ static void dpaa_rx_error(struct net_device *net_dev,
 
 	percpu_priv->stats.rx_errors++;
 
+	if (fd->status & FM_FD_ERR_DMA)
+		percpu_priv->rx_errors.dme++;
+	if (fd->status & FM_FD_ERR_PHYSICAL)
+		percpu_priv->rx_errors.fpe++;
+	if (fd->status & FM_FD_ERR_SIZE)
+		percpu_priv->rx_errors.fse++;
+	if (fd->status & FM_FD_ERR_PRS_HDR_ERR)
+		percpu_priv->rx_errors.phe++;
+
 	dpaa_fd_release(net_dev, fd);
 }
 
@@ -2028,6 +2074,8 @@ static void dpaa_tx_conf(struct net_device *net_dev,
 		percpu_priv->stats.tx_errors++;
 	}
 
+	percpu_priv->tx_confirm++;
+
 	skb = dpaa_cleanup_tx_fd(priv, fd);
 
 	consume_skb(skb);
@@ -2042,6 +2090,7 @@ static inline int dpaa_eth_napi_schedule(struct dpaa_percpu_priv *percpu_priv,
 
 		percpu_priv->np.p = portal;
 		napi_schedule(&percpu_priv->np.napi);
+		percpu_priv->in_interrupt++;
 		return 1;
 	}
 	return 0;
@@ -2225,6 +2274,7 @@ static void egress_ern(struct qman_portal *portal,
 
 	percpu_priv->stats.tx_dropped++;
 	percpu_priv->stats.tx_fifo_errors++;
+	count_ern(percpu_priv, msg);
 
 	skb = dpaa_cleanup_tx_fd(priv, fd);
 	dev_kfree_skb_any(skb);
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
index d6ab335..711fb06 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -95,6 +95,25 @@ struct dpaa_bp {
 	atomic_t refs;
 };
 
+struct dpaa_rx_errors {
+	u64 dme;		/* DMA Error */
+	u64 fpe;		/* Frame Physical Error */
+	u64 fse;		/* Frame Size Error */
+	u64 phe;		/* Header Error */
+};
+
+/* Counters for QMan ERN frames - one counter per rejection code */
+struct dpaa_ern_cnt {
+	u64 cg_tdrop;		/* Congestion group taildrop */
+	u64 wred;		/* WRED congestion */
+	u64 err_cond;		/* Error condition */
+	u64 early_window;	/* Order restoration, frame too early */
+	u64 late_window;	/* Order restoration, frame too late */
+	u64 fq_tdrop;		/* FQ taildrop */
+	u64 fq_retired;		/* FQ is retired */
+	u64 orp_zero;		/* ORP disabled */
+};
+
 struct dpaa_napi_portal {
 	struct napi_struct napi;
 	struct qman_portal *p;
@@ -104,7 +123,13 @@ struct dpaa_napi_portal {
 struct dpaa_percpu_priv {
 	struct net_device *net_dev;
 	struct dpaa_napi_portal np;
+	u64 in_interrupt;
+	u64 tx_confirm;
+	/* fragmented (non-linear) skbuffs received from the stack */
+	u64 tx_frag_skbuffs;
 	struct rtnl_link_stats64 stats;
+	struct dpaa_rx_errors rx_errors;
+	struct dpaa_ern_cnt ern_cnt;
 };
 
 struct dpaa_buffer_layout {
@@ -133,6 +158,14 @@ struct dpaa_priv {
 		 * (and the same) congestion group.
 		 */
 		struct qman_cgr cgr;
+		/* If congested, when it began. Used for performance stats. */
+		u32 congestion_start_jiffies;
+		/* Number of jiffies the Tx port was congested. */
+		u32 congested_jiffies;
+		/* Counter for the number of times the CGR
+		 * entered congestion state
+		 */
+		u32 cgr_congested_count;
 	} cgr_data;
 	/* Use a per-port CGR for ingress traffic. */
 	bool use_ingress_cgr;
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
index f97f563..71ffe16 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
@@ -36,6 +36,42 @@
 #include "dpaa_eth.h"
 #include "mac.h"
 
+static const char dpaa_stats_percpu[][ETH_GSTRING_LEN] = {
+	"interrupts",
+	"rx packets",
+	"tx packets",
+	"tx confirm",
+	"tx S/G",
+	"tx error",
+	"rx error",
+};
+
+static char dpaa_stats_global[][ETH_GSTRING_LEN] = {
+	/* dpa rx errors */
+	"rx dma error",
+	"rx frame physical error",
+	"rx frame size error",
+	"rx header error",
+
+	/* QMan enqueue rejection (ERN) counters */
+	"qman cg_tdrop",
+	"qman wred",
+	"qman error cond",
+	"qman early window",
+	"qman late window",
+	"qman fq tdrop",
+	"qman fq retired",
+	"qman orp disabled",
+
+	/* congestion related stats */
+	"congestion time (ms)",
+	"entered congestion",
+	"congested (0/1)"
+};
+
+#define DPAA_STATS_PERCPU_LEN ARRAY_SIZE(dpaa_stats_percpu)
+#define DPAA_STATS_GLOBAL_LEN ARRAY_SIZE(dpaa_stats_global)
+
 static int dpaa_get_settings(struct net_device *net_dev,
 			     struct ethtool_cmd *et_cmd)
 {
@@ -205,6 +241,166 @@ static int dpaa_set_pauseparam(struct net_device *net_dev,
 	return err;
 }
 
+static int dpaa_get_sset_count(struct net_device *net_dev, int type)
+{
+	unsigned int total_stats, num_stats;
+
+	num_stats   = num_online_cpus() + 1;
+	total_stats = num_stats * (DPAA_STATS_PERCPU_LEN + DPAA_BPS_NUM) +
+			DPAA_STATS_GLOBAL_LEN;
+
+	switch (type) {
+	case ETH_SS_STATS:
+		return total_stats;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static void copy_stats(struct dpaa_percpu_priv *percpu_priv, int num_cpus,
+		       int crr_cpu, u64 *bp_count, u64 *data)
+{
+	int num_values = num_cpus + 1;
+	int crr = 0, j;
+
+	/* update current CPU's stats and also add them to the total values */
+	data[crr * num_values + crr_cpu] = percpu_priv->in_interrupt;
+	data[crr++ * num_values + num_cpus] += percpu_priv->in_interrupt;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->stats.rx_packets;
+	data[crr++ * num_values + num_cpus] += percpu_priv->stats.rx_packets;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->stats.tx_packets;
+	data[crr++ * num_values + num_cpus] += percpu_priv->stats.tx_packets;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->tx_confirm;
+	data[crr++ * num_values + num_cpus] += percpu_priv->tx_confirm;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->tx_frag_skbuffs;
+	data[crr++ * num_values + num_cpus] += percpu_priv->tx_frag_skbuffs;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->stats.tx_errors;
+	data[crr++ * num_values + num_cpus] += percpu_priv->stats.tx_errors;
+
+	data[crr * num_values + crr_cpu] = percpu_priv->stats.rx_errors;
+	data[crr++ * num_values + num_cpus] += percpu_priv->stats.rx_errors;
+
+	for (j = 0; j < DPAA_BPS_NUM; j++) {
+		data[crr * num_values + crr_cpu] = bp_count[j];
+		data[crr++ * num_values + num_cpus] += bp_count[j];
+	}
+}
+
+static void dpaa_get_ethtool_stats(struct net_device *net_dev,
+				   struct ethtool_stats *stats, u64 *data)
+{
+	u64 bp_count[DPAA_BPS_NUM], cg_time, cg_num;
+	struct dpaa_percpu_priv *percpu_priv;
+	struct dpaa_rx_errors rx_errors;
+	struct dpaa_ern_cnt ern_cnt;
+	struct dpaa_priv *priv;
+	unsigned int num_cpus, offset;
+	struct dpaa_bp *dpaa_bp;
+	int total_stats, i, j;
+	bool cg_status;
+
+	total_stats = dpaa_get_sset_count(net_dev, ETH_SS_STATS);
+	priv     = netdev_priv(net_dev);
+	num_cpus = num_online_cpus();
+
+	memset(&bp_count, 0, sizeof(bp_count));
+	memset(&rx_errors, 0, sizeof(struct dpaa_rx_errors));
+	memset(&ern_cnt, 0, sizeof(struct dpaa_ern_cnt));
+	memset(data, 0, total_stats * sizeof(u64));
+
+	for_each_online_cpu(i) {
+		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+		for (j = 0; j < DPAA_BPS_NUM; j++) {
+			dpaa_bp = priv->dpaa_bps[j];
+			if (!dpaa_bp->percpu_count)
+				continue;
+			bp_count[j] = *(per_cpu_ptr(dpaa_bp->percpu_count, i));
+		}
+		rx_errors.dme += percpu_priv->rx_errors.dme;
+		rx_errors.fpe += percpu_priv->rx_errors.fpe;
+		rx_errors.fse += percpu_priv->rx_errors.fse;
+		rx_errors.phe += percpu_priv->rx_errors.phe;
+
+		ern_cnt.cg_tdrop     += percpu_priv->ern_cnt.cg_tdrop;
+		ern_cnt.wred         += percpu_priv->ern_cnt.wred;
+		ern_cnt.err_cond     += percpu_priv->ern_cnt.err_cond;
+		ern_cnt.early_window += percpu_priv->ern_cnt.early_window;
+		ern_cnt.late_window  += percpu_priv->ern_cnt.late_window;
+		ern_cnt.fq_tdrop     += percpu_priv->ern_cnt.fq_tdrop;
+		ern_cnt.fq_retired   += percpu_priv->ern_cnt.fq_retired;
+		ern_cnt.orp_zero     += percpu_priv->ern_cnt.orp_zero;
+
+		copy_stats(percpu_priv, num_cpus, i, bp_count, data);
+	}
+
+	offset = (num_cpus + 1) * (DPAA_STATS_PERCPU_LEN + DPAA_BPS_NUM);
+	memcpy(data + offset, &rx_errors, sizeof(struct dpaa_rx_errors));
+
+	offset += sizeof(struct dpaa_rx_errors) / sizeof(u64);
+	memcpy(data + offset, &ern_cnt, sizeof(struct dpaa_ern_cnt));
+
+	/* gather congestion related counters */
+	cg_num    = 0;
+	cg_status = false;
+	cg_time   = jiffies_to_msecs(priv->cgr_data.congested_jiffies);
+	if (qman_query_cgr_congested(&priv->cgr_data.cgr, &cg_status) == 0) {
+		cg_num    = priv->cgr_data.cgr_congested_count;
+
+		/* reset congestion stats (like the QMan API does) */
+		priv->cgr_data.congested_jiffies   = 0;
+		priv->cgr_data.cgr_congested_count = 0;
+	}
+
+	offset += sizeof(struct dpaa_ern_cnt) / sizeof(u64);
+	data[offset++] = cg_time;
+	data[offset++] = cg_num;
+	data[offset++] = cg_status;
+}
+
+static void dpaa_get_strings(struct net_device *net_dev, u32 stringset,
+			     u8 *data)
+{
+	unsigned int i, j, num_cpus, size;
+	char string_cpu[ETH_GSTRING_LEN];
+	u8 *strings;
+
+	memset(string_cpu, 0, sizeof(string_cpu));
+	strings   = data;
+	num_cpus  = num_online_cpus();
+	size      = DPAA_STATS_GLOBAL_LEN * ETH_GSTRING_LEN;
+
+	for (i = 0; i < DPAA_STATS_PERCPU_LEN; i++) {
+		for (j = 0; j < num_cpus; j++) {
+			snprintf(string_cpu, ETH_GSTRING_LEN, "%s [CPU %d]",
+				 dpaa_stats_percpu[i], j);
+			memcpy(strings, string_cpu, ETH_GSTRING_LEN);
+			strings += ETH_GSTRING_LEN;
+		}
+		snprintf(string_cpu, ETH_GSTRING_LEN, "%s [TOTAL]",
+			 dpaa_stats_percpu[i]);
+		memcpy(strings, string_cpu, ETH_GSTRING_LEN);
+		strings += ETH_GSTRING_LEN;
+	}
+	for (i = 0; i < DPAA_BPS_NUM; i++) {
+		for (j = 0; j < num_cpus; j++) {
+			snprintf(string_cpu, ETH_GSTRING_LEN,
+				 "bpool %c [CPU %d]", 'a' + i, j);
+			memcpy(strings, string_cpu, ETH_GSTRING_LEN);
+			strings += ETH_GSTRING_LEN;
+		}
+		snprintf(string_cpu, ETH_GSTRING_LEN, "bpool %c [TOTAL]",
+			 'a' + i);
+		memcpy(strings, string_cpu, ETH_GSTRING_LEN);
+		strings += ETH_GSTRING_LEN;
+	}
+	memcpy(strings, dpaa_stats_global, size);
+}
+
 const struct ethtool_ops dpaa_ethtool_ops = {
 	.get_settings = dpaa_get_settings,
 	.set_settings = dpaa_set_settings,
@@ -215,4 +411,7 @@ const struct ethtool_ops dpaa_ethtool_ops = {
 	.get_pauseparam = dpaa_get_pauseparam,
 	.set_pauseparam = dpaa_set_pauseparam,
 	.get_link = ethtool_op_get_link,
+	.get_sset_count = dpaa_get_sset_count,
+	.get_ethtool_stats = dpaa_get_ethtool_stats,
+	.get_strings = dpaa_get_strings,
 };
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH net-next v6 06/10] dpaa_eth: add sysfs exports
  2016-11-02 20:17 [PATCH net-next v6 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (4 preceding siblings ...)
  2016-11-02 20:17 ` [PATCH net-next v6 05/10] dpaa_eth: add ethtool statistics Madalin Bucur
@ 2016-11-02 20:17 ` Madalin Bucur
  2016-11-02 20:17 ` [PATCH net-next v6 07/10] dpaa_eth: add trace points Madalin Bucur
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Export Frame Queue and Buffer Pool IDs through sysfs.
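
The attributes are created on the net device, so they appear under
the interface's sysfs node. Reading them would look roughly like
this (paths and values purely illustrative):

  # cat /sys/class/net/eth0/fqids
  Rx error: 258
  Rx default: 259
  Tx error: 260
  Tx default confirmation: 261
  Tx confirmation (mq): 64 - 87
  Tx: 896 - 919

  # cat /sys/class/net/eth0/bpids
  32
  33
  34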

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/Makefile       |   2 +-
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c     |   4 +
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h     |   4 +
 .../net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c   | 165 +++++++++++++++++++++
 4 files changed, 174 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c

diff --git a/drivers/net/ethernet/freescale/dpaa/Makefile b/drivers/net/ethernet/freescale/dpaa/Makefile
index 43a4cfd..bfb03d4 100644
--- a/drivers/net/ethernet/freescale/dpaa/Makefile
+++ b/drivers/net/ethernet/freescale/dpaa/Makefile
@@ -8,4 +8,4 @@ ccflags-y += -I$(FMAN)
 
 obj-$(CONFIG_FSL_DPAA_ETH) += fsl_dpa.o
 
-fsl_dpa-objs += dpaa_eth.o dpaa_ethtool.o
+fsl_dpa-objs += dpaa_eth.o dpaa_ethtool.o dpaa_eth_sysfs.o
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 3deb240..045b23b 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2692,6 +2692,8 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 	if (err < 0)
 		goto netdev_init_failed;
 
+	dpaa_eth_sysfs_init(&net_dev->dev);
+
 	netif_info(priv, probe, net_dev, "Probed interface %s\n",
 		   net_dev->name);
 
@@ -2737,6 +2739,8 @@ static int dpaa_remove(struct platform_device *pdev)
 
 	priv = netdev_priv(net_dev);
 
+	dpaa_eth_sysfs_remove(dev);
+
 	dev_set_drvdata(dev, NULL);
 	unregister_netdev(net_dev);
 
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
index 711fb06..44323e2 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -177,4 +177,8 @@ struct dpaa_priv {
 
 /* from dpaa_ethtool.c */
 extern const struct ethtool_ops dpaa_ethtool_ops;
+
+/* from dpaa_eth_sysfs.c */
+void dpaa_eth_sysfs_remove(struct device *dev);
+void dpaa_eth_sysfs_init(struct device *dev);
 #endif	/* __DPAA_H */
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c
new file mode 100644
index 0000000..93f0251
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c
@@ -0,0 +1,165 @@
+/* Copyright 2008-2016 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/io.h>
+#include <linux/of_net.h>
+#include "dpaa_eth.h"
+#include "mac.h"
+
+static ssize_t dpaa_eth_show_addr(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	struct dpaa_priv *priv = netdev_priv(to_net_dev(dev));
+	struct mac_device *mac_dev = priv->mac_dev;
+
+	if (mac_dev)
+		return sprintf(buf, "%llx",
+				(unsigned long long)mac_dev->res->start);
+	else
+		return sprintf(buf, "none");
+}
+
+static ssize_t dpaa_eth_show_fqids(struct device *dev,
+				   struct device_attribute *attr, char *buf)
+{
+	struct dpaa_priv *priv = netdev_priv(to_net_dev(dev));
+	ssize_t bytes = 0;
+	int i = 0;
+	char *str;
+	struct dpaa_fq *fq;
+	struct dpaa_fq *tmp;
+	struct dpaa_fq *prev = NULL;
+	u32 first_fqid = 0;
+	u32 last_fqid = 0;
+	char *prevstr = NULL;
+
+	list_for_each_entry_safe(fq, tmp, &priv->dpaa_fq_list, list) {
+		switch (fq->fq_type) {
+		case FQ_TYPE_RX_DEFAULT:
+			str = "Rx default";
+			break;
+		case FQ_TYPE_RX_ERROR:
+			str = "Rx error";
+			break;
+		case FQ_TYPE_TX_CONFIRM:
+			str = "Tx default confirmation";
+			break;
+		case FQ_TYPE_TX_CONF_MQ:
+			str = "Tx confirmation (mq)";
+			break;
+		case FQ_TYPE_TX_ERROR:
+			str = "Tx error";
+			break;
+		case FQ_TYPE_TX:
+			str = "Tx";
+			break;
+		default:
+			str = "Unknown";
+		}
+
+		if (prev && (abs(fq->fqid - prev->fqid) != 1 ||
+			     str != prevstr)) {
+			if (last_fqid == first_fqid)
+				bytes += sprintf(buf + bytes,
+					"%s: %d\n", prevstr, prev->fqid);
+			else
+				bytes += sprintf(buf + bytes,
+					"%s: %d - %d\n", prevstr,
+					first_fqid, last_fqid);
+		}
+
+		if (prev && abs(fq->fqid - prev->fqid) == 1 &&
+		    str == prevstr) {
+			last_fqid = fq->fqid;
+		} else {
+			first_fqid = fq->fqid;
+			last_fqid = fq->fqid;
+		}
+
+		prev = fq;
+		prevstr = str;
+		i++;
+	}
+
+	if (prev) {
+		if (last_fqid == first_fqid)
+			bytes += sprintf(buf + bytes, "%s: %d\n", prevstr,
+					prev->fqid);
+		else
+			bytes += sprintf(buf + bytes, "%s: %d - %d\n", prevstr,
+					first_fqid, last_fqid);
+	}
+
+	return bytes;
+}
+
+static ssize_t dpaa_eth_show_bpids(struct device *dev,
+				   struct device_attribute *attr, char *buf)
+{
+	ssize_t bytes = 0;
+	struct dpaa_priv *priv = netdev_priv(to_net_dev(dev));
+	int i = 0;
+
+	for (i = 0; i < DPAA_BPS_NUM; i++)
+		bytes += snprintf(buf + bytes, PAGE_SIZE - bytes, "%u\n",
+				  priv->dpaa_bps[i]->bpid);
+
+	return bytes;
+}
+
+static struct device_attribute dpaa_eth_attrs[] = {
+	__ATTR(device_addr, S_IRUGO, dpaa_eth_show_addr, NULL),
+	__ATTR(fqids, S_IRUGO, dpaa_eth_show_fqids, NULL),
+	__ATTR(bpids, S_IRUGO, dpaa_eth_show_bpids, NULL),
+};
+
+void dpaa_eth_sysfs_init(struct device *dev)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(dpaa_eth_attrs); i++)
+		if (device_create_file(dev, &dpaa_eth_attrs[i])) {
+			dev_err(dev, "Error creating sysfs file\n");
+			while (i > 0)
+				device_remove_file(dev, &dpaa_eth_attrs[--i]);
+			return;
+		}
+}
+
+void dpaa_eth_sysfs_remove(struct device *dev)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(dpaa_eth_attrs); i++)
+		device_remove_file(dev, &dpaa_eth_attrs[i]);
+}
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH net-next v6 07/10] dpaa_eth: add trace points
  2016-11-02 20:17 [PATCH net-next v6 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (5 preceding siblings ...)
  2016-11-02 20:17 ` [PATCH net-next v6 06/10] dpaa_eth: add sysfs exports Madalin Bucur
@ 2016-11-02 20:17 ` Madalin Bucur
  2016-11-02 20:17 ` [PATCH net-next v6 08/10] arch/powerpc: Enable FSL_PAMU Madalin Bucur
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Add trace points on the hot processing path.
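
The events are grouped under the dpaa_eth trace system, so they can
be enabled at runtime through tracefs; the trace lines follow the
TP_printk() format below (values purely illustrative):

  # echo 1 > /sys/kernel/debug/tracing/events/dpaa_eth/enable
  # cat /sys/kernel/debug/tracing/trace_pipe
  ... dpaa_tx_fd: [eth0] fqid=896, fd: addr=0x2f4080040,
      format=contig, off=64, len=98, status=0x00000000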

Signed-off-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/Makefile       |   1 +
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c     |  15 +++
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h     |   1 +
 .../net/ethernet/freescale/dpaa/dpaa_eth_trace.h   | 141 +++++++++++++++++++++
 4 files changed, 158 insertions(+)
 create mode 100644 drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h

diff --git a/drivers/net/ethernet/freescale/dpaa/Makefile b/drivers/net/ethernet/freescale/dpaa/Makefile
index bfb03d4..7db50bc 100644
--- a/drivers/net/ethernet/freescale/dpaa/Makefile
+++ b/drivers/net/ethernet/freescale/dpaa/Makefile
@@ -9,3 +9,4 @@ ccflags-y += -I$(FMAN)
 obj-$(CONFIG_FSL_DPAA_ETH) += fsl_dpa.o
 
 fsl_dpa-objs += dpaa_eth.o dpaa_ethtool.o dpaa_eth_sysfs.o
+CFLAGS_dpaa_eth.o := -I$(src)
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 045b23b..9d240b7 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -59,6 +59,12 @@
 #include "mac.h"
 #include "dpaa_eth.h"
 
+/* CREATE_TRACE_POINTS only needs to be defined once. Other dpaa files
+ * using trace events only need to #include "dpaa_eth_trace.h"
+ */
+#define CREATE_TRACE_POINTS
+#include "dpaa_eth_trace.h"
+
 static int debug = -1;
 module_param(debug, int, S_IRUGO);
 MODULE_PARM_DESC(debug, "Module/Driver verbosity level (0=none,...,16=all)");
@@ -1918,6 +1924,9 @@ static inline int dpaa_xmit(struct dpaa_priv *priv,
 	if (fd->bpid == FSL_DPAA_BPID_INV)
 		fd->cmd |= qman_fq_fqid(priv->conf_fqs[queue]);
 
+	/* Trace this Tx fd */
+	trace_dpaa_tx_fd(priv->net_dev, egress_fq, fd);
+
 	for (i = 0; i < DPAA_ENQUEUE_RETRIES; i++) {
 		err = qman_enqueue(egress_fq, fd);
 		if (err != -EBUSY)
@@ -2152,6 +2161,9 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
 	if (!dpaa_bp)
 		return qman_cb_dqrr_consume;
 
+	/* Trace the Rx fd */
+	trace_dpaa_rx_fd(net_dev, fq, &dq->fd);
+
 	percpu_priv = this_cpu_ptr(priv->percpu_priv);
 	percpu_stats = &percpu_priv->stats;
 
@@ -2248,6 +2260,9 @@ static enum qman_cb_dqrr_result conf_dflt_dqrr(struct qman_portal *portal,
 	net_dev = ((struct dpaa_fq *)fq)->net_dev;
 	priv = netdev_priv(net_dev);
 
+	/* Trace the fd */
+	trace_dpaa_tx_conf_fd(net_dev, fq, &dq->fd);
+
 	percpu_priv = this_cpu_ptr(priv->percpu_priv);
 
 	if (dpaa_eth_napi_schedule(percpu_priv, portal))
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
index 44323e2..1f9aebf 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -37,6 +37,7 @@
 
 #include "fman.h"
 #include "mac.h"
+#include "dpaa_eth_trace.h"
 
 #define DPAA_ETH_TXQ_NUM	NR_CPUS
 
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h
new file mode 100644
index 0000000..409c1dc
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h
@@ -0,0 +1,141 @@
+/* Copyright 2013-2015 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM	dpaa_eth
+
+#if !defined(_DPAA_ETH_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _DPAA_ETH_TRACE_H
+
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include "dpaa_eth.h"
+#include <linux/tracepoint.h>
+
+#define fd_format_name(format)	{ qm_fd_##format, #format }
+#define fd_format_list	\
+	fd_format_name(contig),	\
+	fd_format_name(sg)
+
+/* This is used to declare a class of events.
+ * Individual events of this type will be defined below.
+ */
+
+/* Store details about a frame descriptor and the FQ on which it was
+ * transmitted/received.
+ */
+DECLARE_EVENT_CLASS(dpaa_eth_fd,
+	/* Trace function prototype */
+	TP_PROTO(struct net_device *netdev,
+		 struct qman_fq *fq,
+		 const struct qm_fd *fd),
+
+	/* Repeat argument list here */
+	TP_ARGS(netdev, fq, fd),
+
+	/* A structure containing the relevant information we want to record.
+	 * Declare name and type for each normal element, name, type and size
+	 * for arrays. Use __string for variable length strings.
+	 */
+	TP_STRUCT__entry(
+		__field(u32,	fqid)
+		__field(u64,	fd_addr)
+		__field(u8,	fd_format)
+		__field(u16,	fd_offset)
+		__field(u32,	fd_length)
+		__field(u32,	fd_status)
+		__string(name,	netdev->name)
+	),
+
+	/* The function that assigns values to the above declared fields */
+	TP_fast_assign(
+		__entry->fqid = fq->fqid;
+		__entry->fd_addr = qm_fd_addr_get64(fd);
+		__entry->fd_format = qm_fd_get_format(fd);
+		__entry->fd_offset = qm_fd_get_offset(fd);
+		__entry->fd_length = qm_fd_get_length(fd);
+		__entry->fd_status = fd->status;
+		__assign_str(name, netdev->name);
+	),
+
+	/* This is what gets printed when the trace event is triggered */
+	TP_printk("[%s] fqid=%d, fd: addr=0x%llx, format=%s, off=%u, len=%u, status=0x%08x",
+		  __get_str(name), __entry->fqid, __entry->fd_addr,
+		  __print_symbolic(__entry->fd_format, fd_format_list),
+		  __entry->fd_offset, __entry->fd_length, __entry->fd_status)
+);
+
+/* Now declare events of the above type. Format is:
+ * DEFINE_EVENT(class, name, proto, args), with proto and args same as for class
+ */
+
+/* Tx (egress) fd */
+DEFINE_EVENT(dpaa_eth_fd, dpaa_tx_fd,
+
+	TP_PROTO(struct net_device *netdev,
+		 struct qman_fq *fq,
+		 const struct qm_fd *fd),
+
+	TP_ARGS(netdev, fq, fd)
+);
+
+/* Rx fd */
+DEFINE_EVENT(dpaa_eth_fd, dpaa_rx_fd,
+
+	TP_PROTO(struct net_device *netdev,
+		 struct qman_fq *fq,
+		 const struct qm_fd *fd),
+
+	TP_ARGS(netdev, fq, fd)
+);
+
+/* Tx confirmation fd */
+DEFINE_EVENT(dpaa_eth_fd, dpaa_tx_conf_fd,
+
+	TP_PROTO(struct net_device *netdev,
+		 struct qman_fq *fq,
+		 const struct qm_fd *fd),
+
+	TP_ARGS(netdev, fq, fd)
+);
+
+/* If only one event of a certain type needs to be declared, use TRACE_EVENT().
+ * The syntax is the same as for DECLARE_EVENT_CLASS().
+ */
+
+#endif /* _DPAA_ETH_TRACE_H */
+
+/* This must be outside ifdef _DPAA_ETH_TRACE_H */
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE	dpaa_eth_trace
+#include <trace/define_trace.h>
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH net-next v6 08/10] arch/powerpc: Enable FSL_PAMU
  2016-11-02 20:17 [PATCH net-next v6 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (6 preceding siblings ...)
  2016-11-02 20:17 ` [PATCH net-next v6 07/10] dpaa_eth: add trace points Madalin Bucur
@ 2016-11-02 20:17 ` Madalin Bucur
  2016-11-02 20:17 ` [PATCH net-next v6 09/10] arch/powerpc: Enable FSL_FMAN Madalin Bucur
  2016-11-02 20:17 ` [PATCH net-next v6 10/10] arch/powerpc: Enable dpaa_eth Madalin Bucur
  9 siblings, 0 replies; 29+ messages in thread
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
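Note: dpaa.config is a config fragment meant to be merged on top of
a base configuration, e.g. (illustrative, assuming one of the
corenet defconfigs):

  make ARCH=powerpc corenet64_smp_defconfig dpaa.config
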
 arch/powerpc/configs/dpaa.config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/configs/dpaa.config b/arch/powerpc/configs/dpaa.config
index efa99c0..f124ee1 100644
--- a/arch/powerpc/configs/dpaa.config
+++ b/arch/powerpc/configs/dpaa.config
@@ -1 +1,2 @@
 CONFIG_FSL_DPAA=y
+CONFIG_FSL_PAMU=y
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH net-next v6 09/10] arch/powerpc: Enable FSL_FMAN
  2016-11-02 20:17 [PATCH net-next v6 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (7 preceding siblings ...)
  2016-11-02 20:17 ` [PATCH net-next v6 08/10] arch/powerpc: Enable FSL_PAMU Madalin Bucur
@ 2016-11-02 20:17 ` Madalin Bucur
  2016-11-02 20:17 ` [PATCH net-next v6 10/10] arch/powerpc: Enable dpaa_eth Madalin Bucur
  9 siblings, 0 replies; 29+ messages in thread
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 arch/powerpc/configs/dpaa.config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/configs/dpaa.config b/arch/powerpc/configs/dpaa.config
index f124ee1..9ad9bc0 100644
--- a/arch/powerpc/configs/dpaa.config
+++ b/arch/powerpc/configs/dpaa.config
@@ -1,2 +1,3 @@
 CONFIG_FSL_DPAA=y
 CONFIG_FSL_PAMU=y
+CONFIG_FSL_FMAN=y
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH net-next v6 10/10] arch/powerpc: Enable dpaa_eth
  2016-11-02 20:17 [PATCH net-next v6 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
                   ` (8 preceding siblings ...)
  2016-11-02 20:17 ` [PATCH net-next v6 09/10] arch/powerpc: Enable FSL_FMAN Madalin Bucur
@ 2016-11-02 20:17 ` Madalin Bucur
  9 siblings, 0 replies; 29+ messages in thread
From: Madalin Bucur @ 2016-11-02 20:17 UTC (permalink / raw)
  To: netdev
  Cc: linuxppc-dev, linux-kernel, davem, oss, ppc, joe, pebolle,
	joakim.tjernlund

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 arch/powerpc/configs/dpaa.config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/configs/dpaa.config b/arch/powerpc/configs/dpaa.config
index 9ad9bc0..2fe76f5 100644
--- a/arch/powerpc/configs/dpaa.config
+++ b/arch/powerpc/configs/dpaa.config
@@ -1,3 +1,4 @@
 CONFIG_FSL_DPAA=y
 CONFIG_FSL_PAMU=y
 CONFIG_FSL_FMAN=y
+CONFIG_FSL_DPAA_ETH=y
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-02 20:17 ` [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet Madalin Bucur
@ 2016-11-03 19:58   ` David Miller
  2016-11-04  6:53     ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Joe Perches
                       ` (2 more replies)
  2016-11-07 16:25   ` Joakim Tjernlund
  1 sibling, 3 replies; 29+ messages in thread
From: David Miller @ 2016-11-03 19:58 UTC (permalink / raw)
  To: madalin.bucur
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle,
	joakim.tjernlund

From: Madalin Bucur <madalin.bucur@nxp.com>
Date: Wed, 2 Nov 2016 22:17:26 +0200

> This introduces the Freescale Data Path Acceleration Architecture
> +static inline size_t bpool_buffer_raw_size(u8 index, u8 cnt)
> +{
> +	u8 i;
> +	size_t res = DPAA_BP_RAW_SIZE / 2;

Always order local variable declarations from longest to shortest line,
also known as Reverse Christmas Tree Format.

Please audit your entire submission for this problem, it occurs
everywhere.
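
Applied to the quoted function, that would be (sketch):

	static inline size_t bpool_buffer_raw_size(u8 index, u8 cnt)
	{
		size_t res = DPAA_BP_RAW_SIZE / 2;
		u8 i;
		...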

> +	/* we do not want shared skbs on TX */
> +	net_dev->priv_flags &= ~IFF_TX_SKB_SHARING;

Why?  By clearing this, you disallow an important fundamental way to do
performance testing, via pktgen.


> +	int numstats = sizeof(struct rtnl_link_stats64) / sizeof(u64);
 ...
> +		cpustats = (u64 *)&percpu_priv->stats;
> +
> +		for (j = 0; j < numstats; j++)
> +			netstats[j] += cpustats[j];

This is a memcpy() on well-typed datastructures which requires no
casting or special handling whatsoever, so use memcpy instead of
needlessly open coding the operation.

> +static int dpaa_change_mtu(struct net_device *net_dev, int new_mtu)
> +{
> +	const int max_mtu = dpaa_get_max_mtu();
> +
> +	/* Make sure we don't exceed the Ethernet controller's MAXFRM */
> +	if (new_mtu < 68 || new_mtu > max_mtu) {
> +		netdev_err(net_dev, "Invalid L3 mtu %d (must be between %d and %d).\n",
> +			   new_mtu, 68, max_mtu);
> +		return -EINVAL;
> +	}
> +	net_dev->mtu = new_mtu;
> +
> +	return 0;
> +}

MTU restrictions are handled in the net-next tree via net_dev->min_mtu and
net_dev->max_mtu.  Use that and do not define this NDO operation as you do
not need it.
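
For this driver that reduces to two assignments at probe time
(sketch, reusing the driver's own dpaa_get_max_mtu() helper):

	net_dev->min_mtu = 68;
	net_dev->max_mtu = dpaa_get_max_mtu();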

> +static int dpaa_set_features(struct net_device *dev, netdev_features_t features)
> +{
> +	/* Not much to do here for now */
> +	dev->features = features;
> +	return 0;
> +}

Do not define unnecessary NDO operations, let the defaults do their job.

> +static netdev_features_t dpaa_fix_features(struct net_device *dev,
> +					   netdev_features_t features)
> +{
> +	netdev_features_t unsupported_features = 0;
> +
> +	/* In theory we should never be requested to enable features that
> +	 * we didn't set in netdev->features and netdev->hw_features at probe
> +	 * time, but double check just to be on the safe side.
> +	 */
> +	unsupported_features |= NETIF_F_RXCSUM;
> +
> +	features &= ~unsupported_features;
> +
> +	return features;
> +}

Unless you can show that you need this, do not "guess" by implementing this
NDO operation.  You don't need it.

> +#ifdef CONFIG_FSL_DPAA_ETH_FRIENDLY_IF_NAME
> +static int dpaa_mac_hw_index_get(struct platform_device *pdev)
> +{
> +	struct device *dpaa_dev;
> +	struct dpaa_eth_data *eth_data;
> +
> +	dpaa_dev = &pdev->dev;
> +	eth_data = dpaa_dev->platform_data;
> +
> +	return eth_data->mac_hw_id;
> +}
> +
> +static int dpaa_mac_fman_index_get(struct platform_device *pdev)
> +{
> +	struct device *dpaa_dev;
> +	struct dpaa_eth_data *eth_data;
> +
> +	dpaa_dev = &pdev->dev;
> +	eth_data = dpaa_dev->platform_data;
> +
> +	return eth_data->fman_hw_id;
> +}
> +#endif

Do not play network device naming games like this, use the standard name
assignment done by the kernel and have userspace entities do geographic or
device type specific naming.
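
For reference, such a policy is a one-line udev rule in userspace
(MAC address and name purely illustrative):

  SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:04:9f:00:00:01", NAME="fm1-mac1"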

I want to see this code completely removed.

> +static int dpaa_set_mac_address(struct net_device *net_dev, void *addr)
> +{
> +	const struct dpaa_priv	*priv;
> +	int err;
> +	struct mac_device *mac_dev;
> +
> +	priv = netdev_priv(net_dev);
> +
> +	err = eth_mac_addr(net_dev, addr);
> +	if (err < 0) {
> +		netif_err(priv, drv, net_dev, "eth_mac_addr() = %d\n", err);
> +		return err;
> +	}
> +
> +	mac_dev = priv->mac_dev;
> +
> +	err = mac_dev->change_addr(mac_dev->fman_mac,
> +				   (enet_addr_t *)net_dev->dev_addr);
> +	if (err < 0) {
> +		netif_err(priv, drv, net_dev, "mac_dev->change_addr() = %d\n",
> +			  err);
> +		return err;
> +	}

You MUST NOT return an error at this point without rewinding the state change
performed by eth_mac_addr().  Otherwise the device will be left in an inconsistent
state compared to what the software MAC address has recorded.
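
A minimal sketch of such a rewind (one possible way, not the only
one):

	u8 old_addr[ETH_ALEN];

	/* remember the current address so eth_mac_addr() can be undone */
	memcpy(old_addr, net_dev->dev_addr, ETH_ALEN);

	err = eth_mac_addr(net_dev, addr);
	if (err < 0)
		return err;

	err = mac_dev->change_addr(mac_dev->fman_mac,
				   (enet_addr_t *)net_dev->dev_addr);
	if (err < 0) {
		/* the MAC rejected the new address: undo eth_mac_addr() */
		memcpy(net_dev->dev_addr, old_addr, ETH_ALEN);
		return err;
	}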

This driver is enormous, I don't have the time nor the patience to
review it further for what seems to be many fundamental errors like
the ones I have pointed out so far.

Sorry.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet)
  2016-11-03 19:58   ` David Miller
@ 2016-11-04  6:53     ` Joe Perches
  2016-11-04 11:01       ` Lino Sanfilippo
                         ` (2 more replies)
  2016-11-07 15:43     ` [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet Madalin-Cristian Bucur
  2016-11-09 17:16     ` Madalin-Cristian Bucur
  2 siblings, 3 replies; 29+ messages in thread
From: Joe Perches @ 2016-11-04  6:53 UTC (permalink / raw)
  To: David Miller, madalin.bucur, Andrew Morton, Jonathan Corbet
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, pebolle, joakim.tjernlund

[-- Attachment #1: Type: text/plain, Size: 7497 bytes --]

On Thu, 2016-11-03 at 15:58 -0400, David Miller wrote:
> From: Madalin Bucur <madalin.bucur@nxp.com>
> Date: Wed, 2 Nov 2016 22:17:26 +0200
> 
> > This introduces the Freescale Data Path Acceleration Architecture
> > +static inline size_t bpool_buffer_raw_size(u8 index, u8 cnt)
> > +{
> > +     u8 i;
> > +     size_t res = DPAA_BP_RAW_SIZE / 2;
> 
> Always order local variable declarations from longest to shortest line,
> also known as Reverse Christmas Tree Format.

I think this declaration sorting order is misguided but
here's a possible checkpatch change that adds such a test,
applied only to net/ and drivers/net/.

(this likely won't apply because Evolution is a horrible
email client for sending patches, but the attachment should)

An example output:

$ ./scripts/checkpatch.pl -f drivers/net/ethernet/ethoc.c --types=reverse_xmas_tree --show-types
CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#200: FILE: drivers/net/ethernet/ethoc.c:200:
+	void __iomem *iobase;
+	void __iomem *membase;

CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#202: FILE: drivers/net/ethernet/ethoc.c:202:
+	int dma_alloc;
+	resource_size_t io_region_size;

CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#305: FILE: drivers/net/ethernet/ethoc.c:305:
+	int i;
+	void *vma;

CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#446: FILE: drivers/net/ethernet/ethoc.c:446:
+			int size = bd.stat >> 16;
+			struct sk_buff *skb;

CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#515: FILE: drivers/net/ethernet/ethoc.c:515:
+	int count;
+	struct ethoc_bd bd;

CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#675: FILE: drivers/net/ethernet/ethoc.c:675:
+	struct ethoc *priv = netdev_priv(dev);
+	struct phy_device *phy;

CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#756: FILE: drivers/net/ethernet/ethoc.c:756:
+	struct ethoc *priv = netdev_priv(dev);
+	struct mii_ioctl_data *mdio = if_mii(ifr);

CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#801: FILE: drivers/net/ethernet/ethoc.c:801:
+	u32 mode = ethoc_read(priv, MODER);
+	struct netdev_hw_addr *ha;

CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#996: FILE: drivers/net/ethernet/ethoc.c:996:
+	struct resource *res = NULL;
+	struct resource *mmio = NULL;

CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#1001: FILE: drivers/net/ethernet/ethoc.c:1001:
+	int ret = 0;
+	bool random_mac = false;

CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
#1002: FILE: drivers/net/ethernet/ethoc.c:1002:
+	bool random_mac = false;
+	struct ethoc_platform_data *pdata = dev_get_platdata(&pdev->dev);

total: 0 errors, 0 warnings, 11 checks, 1297 lines checked

NOTE: For some of the reported defects, checkpatch may be able to
      mechanically convert to the typical style using --fix or --fix-inplace.

drivers/net/ethernet/ethoc.c has style problems, please review.

NOTE: Used message types: REVERSE_XMAS_TREE

NOTE: If any of the errors are false positives, please report
      them to the maintainer, see CHECKPATCH in MAINTAINERS.
---
 scripts/checkpatch.pl | 70 ++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 56 insertions(+), 14 deletions(-)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index fdacd759078e..dd344ac77cb8 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -2114,6 +2114,29 @@ sub pos_last_openparen {
 	return length(expand_tabs(substr($line, 0, $last_openparen))) + 1;
 }
 
+sub get_decl {
+	my ($line) = @_;
+
+	# typical declarations
+	if ($line =~ /^\+\s+($Declare\s*$Ident)\s*[=,;:\[]/) {
+		return $1;
+	}
+	# function pointer declarations
+	if ($line =~ /^\+\s+($Declare\s*\(\s*\*\s*$Ident\s*\))\s*[=,;:\[\(]/) {
+		return $1;
+	}
+	# foo bar; where foo is some local typedef or #define
+	if ($line =~ /^\+\s+($Ident(?:\s+|\s*\*\s*)$Ident)\s*[=,;\[]/) {
+		return $1;
+	}
+	# known declaration macros
+	if ($line =~ /^\+\s+($declaration_macros)/) {
+		return $1;
+	}
+
+	return undef;
+}
+
 sub process {
 	my $filename = shift;
 
@@ -3063,9 +3086,7 @@ sub process {
 			$last_blank_line = $linenr;
 		}
 
-# check for missing blank lines after declarations
-		if ($sline =~ /^\+\s+\S/ &&			#Not at char 1
-			# actual declarations
+		my $prev_decl =
 		    ($prevline =~ /^\+\s+$Declare\s*$Ident\s*[=,;:\[]/ ||
 			# function pointer declarations
 		     $prevline =~ /^\+\s+$Declare\s*\(\s*\*\s*$Ident\s*\)\s*[=,;:\[\(]/ ||
@@ -3078,25 +3099,30 @@ sub process {
 			# other possible extensions of declaration lines
 		      $prevline =~ /(?:$Compare|$Assignment|$Operators)\s*$/ ||
 			# not starting a section or a macro "\" extended line
-		      $prevline =~ /(?:\{\s*|\\)$/) &&
-			# looks like a declaration
-		    !($sline =~ /^\+\s+$Declare\s*$Ident\s*[=,;:\[]/ ||
+		      $prevline =~ /(?:\{\s*|\\)$/);
+		my $sline_decl =
+		    $sline =~ /^\+\s+$Declare\s*$Ident\s*[=,;:\[]/ ||
 			# function pointer declarations
-		      $sline =~ /^\+\s+$Declare\s*\(\s*\*\s*$Ident\s*\)\s*[=,;:\[\(]/ ||
+		    $sline =~ /^\+\s+$Declare\s*\(\s*\*\s*$Ident\s*\)\s*[=,;:\[\(]/ ||
 			# foo bar; where foo is some local typedef or #define
-		      $sline =~ /^\+\s+$Ident(?:\s+|\s*\*\s*)$Ident\s*[=,;\[]/ ||
+		    $sline =~ /^\+\s+$Ident(?:\s+|\s*\*\s*)$Ident\s*[=,;\[]/ ||
 			# known declaration macros
-		      $sline =~ /^\+\s+$declaration_macros/ ||
+		    $sline =~ /^\+\s+$declaration_macros/ ||
 			# start of struct or union or enum
-		      $sline =~ /^\+\s+(?:union|struct|enum|typedef)\b/ ||
+		    $sline =~ /^\+\s+(?:union|struct|enum|typedef)\b/ ||
 			# start or end of block or continuation of declaration
-		      $sline =~ /^\+\s+(?:$|[\{\}\.\#\"\?\:\(\[])/ ||
+		    $sline =~ /^\+\s+(?:$|[\{\}\.\#\"\?\:\(\[])/ ||
 			# bitfield continuation
-		      $sline =~ /^\+\s+$Ident\s*:\s*\d+\s*[,;]/ ||
+		    $sline =~ /^\+\s+$Ident\s*:\s*\d+\s*[,;]/ ||
 			# other possible extensions of declaration lines
-		      $sline =~ /^\+\s+\(?\s*(?:$Compare|$Assignment|$Operators)/) &&
+		    $sline =~ /^\+\s+\(?\s*(?:$Compare|$Assignment|$Operators)/;
+
+# check for missing blank lines after declarations
+		if ($sline =~ /^\+\s+\S/ &&			#Not at char 1
+			# actual declarations
+		    $prev_decl && !$sline_decl &&
 			# indentation of previous and current line are the same
-		    (($prevline =~ /\+(\s+)\S/) && $sline =~ /^\+$1\S/)) {
+		    ($prevline =~ /\+(\s+)\S/ && $sline =~ /^\+$1\S/)) {
 			if (WARN("LINE_SPACING",
 				 "Missing a blank line after declarations\n" . $hereprev) &&
 			    $fix) {
@@ -3104,6 +3130,22 @@ sub process {
 			}
 		}
 
+# check for reverse christmas tree declarations in net/ and drivers/net/
+		if ($realfile =~ m@^(?:drivers/net/|net/)@ &&
+		    $sline =~ /^\+\s+\S/ &&			#Not at char 1
+			# actual declarations
+		    $prev_decl && $sline_decl &&
+			# indentation of previous and current line are the same
+		    (($prevline =~ /\+(\s+)\S/) && $sline =~ /^\+$1\S/)) {
+			my $p = get_decl($prevline);
+			my $l = get_decl($sline);
+			if (defined($p) && defined($l) && length($p) < length($l) &&
+			    CHK("REVERSE_XMAS_TREE",
+				"Prefer ordering declarations longest to shortest\n" . $hereprev) &&
+			    $fix) {
+			}
+		}
+
 # check for spaces at the beginning of a line.
 # Exceptions:
 #  1) within comments

[-- Attachment #2: cp_xmas.diff --]
[-- Type: text/x-patch, Size: 4214 bytes --]

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet)
  2016-11-04  6:53     ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Joe Perches
@ 2016-11-04 11:01       ` Lino Sanfilippo
  2016-11-04 15:07         ` Coding Style: Reverse XMAS tree declarations ? David Miller
  2016-11-04 17:05       ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Randy Dunlap
  2016-11-07  8:05       ` Michael Ellerman
  2 siblings, 1 reply; 29+ messages in thread
From: Lino Sanfilippo @ 2016-11-04 11:01 UTC (permalink / raw)
  To: Joe Perches, David Miller, madalin.bucur, Andrew Morton, Jonathan Corbet
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, pebolle, joakim.tjernlund

Hi,

On 04.11.2016 07:53, Joe Perches wrote:
>
> CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to shortest
> #446: FILE: drivers/net/ethernet/ethoc.c:446:
> +			int size = bd.stat >> 16;
> +			struct sk_buff *skb;
>

should not this case be valid? Optically the longer line is already before the shorter.
I think that the whole point in using this reverse xmas tree ordering is to have
the code optically tidied up and not to enforce ordering between variable name lengths.


Regards,
Lino

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Coding Style: Reverse XMAS tree declarations ?
  2016-11-04 11:01       ` Lino Sanfilippo
@ 2016-11-04 15:07         ` David Miller
  2016-11-04 17:44           ` Joe Perches
  0 siblings, 1 reply; 29+ messages in thread
From: David Miller @ 2016-11-04 15:07 UTC (permalink / raw)
  To: lsanfil
  Cc: joe, madalin.bucur, akpm, corbet, netdev, linuxppc-dev,
	linux-kernel, oss, ppc, pebolle, joakim.tjernlund

From: Lino Sanfilippo <lsanfil@marvell.com>
Date: Fri, 4 Nov 2016 12:01:17 +0100

> Hi,
> 
> On 04.11.2016 07:53, Joe Perches wrote:
>>
>> CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to
>> shortest
>> #446: FILE: drivers/net/ethernet/ethoc.c:446:
>> +			int size = bd.stat >> 16;
>> +			struct sk_buff *skb;
>>
> 
> should not this case be valid? Optically the longer line is already
> before the shorter.
> I think that the whole point in using this reverse xmas tree ordering
> is to have
> the code optically tidied up and not to enforce ordering between
> variable name lengths.

That's correct.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet)
  2016-11-04  6:53     ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Joe Perches
  2016-11-04 11:01       ` Lino Sanfilippo
@ 2016-11-04 17:05       ` Randy Dunlap
  2016-11-04 19:48         ` David VomLehn
  2016-11-07  8:05       ` Michael Ellerman
  2 siblings, 1 reply; 29+ messages in thread
From: Randy Dunlap @ 2016-11-04 17:05 UTC (permalink / raw)
  To: Joe Perches, David Miller, madalin.bucur, Andrew Morton, Jonathan Corbet
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, pebolle, joakim.tjernlund

On 11/03/16 23:53, Joe Perches wrote:
> On Thu, 2016-11-03 at 15:58 -0400, David Miller wrote:
>> From: Madalin Bucur <madalin.bucur@nxp.com>
>> Date: Wed, 2 Nov 2016 22:17:26 +0200
>>
>>> This introduces the Freescale Data Path Acceleration Architecture
>>> +static inline size_t bpool_buffer_raw_size(u8 index, u8 cnt)
>>> +{
>>> +     u8 i;
>>> +     size_t res = DPAA_BP_RAW_SIZE / 2;
>>
>> Always order local variable declarations from longest to shortest line,
>> also known as Reverse Christmas Tree Format.
> 
> I think this declaration sorting order is misguided but
> here's a possible checkpatch change that adds such a test,
> applied only to net/ and drivers/net/.

I agree with the misguided part.
That's not actually in CodingStyle AFAICT. Where did this come from?


thanks.
-- 
~Randy

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Coding Style: Reverse XMAS tree declarations ?
  2016-11-04 15:07         ` Coding Style: Reverse XMAS tree declarations ? David Miller
@ 2016-11-04 17:44           ` Joe Perches
  2016-11-04 20:06             ` Lino Sanfilippo
  0 siblings, 1 reply; 29+ messages in thread
From: Joe Perches @ 2016-11-04 17:44 UTC (permalink / raw)
  To: David Miller, lsanfil
  Cc: madalin.bucur, akpm, corbet, netdev, linuxppc-dev, linux-kernel,
	oss, ppc, pebolle, joakim.tjernlund, Randy Dunlap

On Fri, 2016-11-04 at 11:07 -0400, David Miller wrote:
> From: Lino Sanfilippo <lsanfil@marvell.com>
> > On 04.11.2016 07:53, Joe Perches wrote:
> >> CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to
> >> shortest
> >> #446: FILE: drivers/net/ethernet/ethoc.c:446:
> >> +                    int size = bd.stat >> 16;
> >> +                    struct sk_buff *skb;
> > should not this case be valid? Optically the longer line is already
> > before the shorter.
> > I think that the whole point in using this reverse xmas tree ordering
> > is to have
> > the code optically tidied up and not to enforce ordering between
> > variable name lengths.
> 
> That's correct.

And also another reason the whole reverse xmas tree
automatic declaration layout concept is IMO dubious.

Basically, the initial ordering of automatics isn't what
matters; the supposed benefit is helping you find a specific
automatic's declaration while reading the code, and that
doesn't always work.

Something like:

static void function(args, ...)
{
	[longish list of reverse xmas tree identifiers...]
	struct foo *bar = longish_function(args, ...);
	struct foobarbaz *qux;
	[more identifers]

	[multiple screenfuls of code later...]

	new_function(..., bar, ...);

	[more code...]
}

and with reverse xmas tree ordering, looking up the
type of bar is neither obvious nor easy.

My preference would be for a bar that serves coffee and alcohol.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet)
  2016-11-04 17:05       ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Randy Dunlap
@ 2016-11-04 19:48         ` David VomLehn
  0 siblings, 0 replies; 29+ messages in thread
From: David VomLehn @ 2016-11-04 19:48 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: Joe Perches, David Miller, madalin.bucur, Andrew Morton,
	Jonathan Corbet, netdev, linuxppc-dev, linux-kernel, oss, ppc,
	pebolle, joakim.tjernlund

On Fri, Nov 04, 2016 at 10:05:15AM -0700, Randy Dunlap wrote:
> On 11/03/16 23:53, Joe Perches wrote:
> > On Thu, 2016-11-03 at 15:58 -0400, David Miller wrote:
> >> From: Madalin Bucur <madalin.bucur@nxp.com>
> >> Date: Wed, 2 Nov 2016 22:17:26 +0200
> >>
> >>> This introduces the Freescale Data Path Acceleration Architecture
> >>> +static inline size_t bpool_buffer_raw_size(u8 index, u8 cnt)
> >>> +{
> >>> +     u8 i;
> >>> +     size_t res = DPAA_BP_RAW_SIZE / 2;
> >>
> >> Always order local variable declarations from longest to shortest line,
> >> also known as Reverse Christmas Tree Format.
> > 
> > I think this declaration sorting order is misguided but
> > here's a possible checkpatch change that adds such a test,
> > applied only to net/ and drivers/net/.
> 
> I agree with the misguided part.
> That's not actually in CodingStyle AFAICT. Where did this come from?
> 
> 
> thanks.
> -- 
> ~Randy

This puzzles me. The CodingStyle gives some pretty reasonable rationales
for coding style over and above the "it's easier to read if it all looks the
same". I can see rationales for other approaches (and I am not proposing
any of these):

alphabetic order        Easier to search for declarations
complex to simple       As in, structs and unions, pointers to simple
                        data (int, char), simple data. It seems like I
                        can deduce the simple types from usage, but for more
                        complex ones I need to know things like the
                        particular structure.
group by usage          Mirror the ontological locality in the code

Do we have a basis for thinking this is easier or more consistent than
any other approach?
-- 
David VL

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Coding Style: Reverse XMAS tree declarations ?
  2016-11-04 17:44           ` Joe Perches
@ 2016-11-04 20:06             ` Lino Sanfilippo
  2016-11-07 11:00               ` David Laight
  0 siblings, 1 reply; 29+ messages in thread
From: Lino Sanfilippo @ 2016-11-04 20:06 UTC (permalink / raw)
  To: Joe Perches, David Miller, lsanfil
  Cc: madalin.bucur, akpm, corbet, netdev, linuxppc-dev, linux-kernel,
	oss, ppc, pebolle, joakim.tjernlund, Randy Dunlap

On 04.11.2016 18:44, Joe Perches wrote:
> On Fri, 2016-11-04 at 11:07 -0400, David Miller wrote:
>> From: Lino Sanfilippo <lsanfil@marvell.com>
>> > On 04.11.2016 07:53, Joe Perches wrote:
>> >> CHECK:REVERSE_XMAS_TREE: Prefer ordering declarations longest to
>> >> shortest
>> >> #446: FILE: drivers/net/ethernet/ethoc.c:446:
>> >> +                    int size = bd.stat >> 16;
>> >> +                    struct sk_buff *skb;
>> > Should this case not be valid? Optically the longer line already comes
>> > before the shorter one.
>> > I think that the whole point of using this reverse xmas tree ordering
>> > is to have the code optically tidied up, not to enforce an ordering
>> > between variable name lengths.
>> 
>> That's correct.
> 
> And it's also another reason the whole reverse xmas tree
> automatic declaration layout concept is IMO dubious.
> 
> Basically, the initial ordering of the automatics is not
> what's important; what matters is helping the reader find
> a specific automatic while working backwards from the code,
> and reverse xmas tree does not always help with that.
> 
> Something like:
> 
> static void function(args, ...)
> {
> 	[longish list of reverse xmas tree identifiers...]
> 	struct foo *bar = longish_function(args, ...);
> 	struct foobarbaz *qux;
> 	[more identifiers]
> 
> 	[multiple screenfuls of code later...]
> 
> 	new_function(..., bar, ...);
> 
> 	[more code...]
> }
> 
> and with reverse xmas tree, looking up the
> type of bar is neither obvious nor easy.
> 

In this case it is IMHO rather the declaration + initialization that makes
"bar" hard to find at one glance, not the use of RXT. You could do something like

 	[longish list of reverse xmas tree identifiers...]
 	struct foobarbaz *qux;
 	struct foo *bar;

 	bar = longish_function(args, ...);

to increase readability. 

Personally I find it more readable to always use a separate line for initializations
by means of functions (regardless of whether the RXT scheme is used or not). 
 
> My preference would be for a bar that serves coffee and alcohol.
> 

At least a bar like this should not be too hard to find :)

Regards,
Lino

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet)
  2016-11-04  6:53     ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Joe Perches
  2016-11-04 11:01       ` Lino Sanfilippo
  2016-11-04 17:05       ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Randy Dunlap
@ 2016-11-07  8:05       ` Michael Ellerman
  2 siblings, 0 replies; 29+ messages in thread
From: Michael Ellerman @ 2016-11-07  8:05 UTC (permalink / raw)
  To: Joe Perches, David Miller, madalin.bucur, Andrew Morton, Jonathan Corbet
  Cc: pebolle, joakim.tjernlund, netdev, linux-kernel, ppc, oss, linuxppc-dev

Joe Perches <joe@perches.com> writes:
> On Thu, 2016-11-03 at 15:58 -0400, David Miller wrote:
>> From: Madalin Bucur <madalin.bucur@nxp.com>
>> Date: Wed, 2 Nov 2016 22:17:26 +0200
>> 
>> > This introduces the Freescale Data Path Acceleration Architecture
>> > +static inline size_t bpool_buffer_raw_size(u8 index, u8 cnt)
>> > +{
>> > +     u8 i;
>> > +     size_t res = DPAA_BP_RAW_SIZE / 2;
>> 
>> Always order local variable declarations from longest to shortest line,
>> also known as Reverse Christmas Tree Format.
>
> I think this declaration sorting order is misguided but
> here's a possible change to checkpatch adding a test for it
> that does this test just for net/ and drivers/net/

And arch/powerpc too please.

cheers

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: Coding Style: Reverse XMAS tree declarations ?
  2016-11-04 20:06             ` Lino Sanfilippo
@ 2016-11-07 11:00               ` David Laight
  0 siblings, 0 replies; 29+ messages in thread
From: David Laight @ 2016-11-07 11:00 UTC (permalink / raw)
  To: 'Lino Sanfilippo', Joe Perches, David Miller, lsanfil
  Cc: madalin.bucur, akpm, corbet, netdev, linuxppc-dev, linux-kernel,
	oss, ppc, pebolle, joakim.tjernlund, Randy Dunlap

From: Lino Sanfilippo
> Sent: 04 November 2016 20:07
...
> In this case it is IMHO rather the declaration + initialization that makes
> "bar" hard to find at one glance, not the use of RXT. You could do something like
> 
>  	[longish list of reverse xmas tree identifiers...]
>  	struct foobarbaz *qux;
>  	struct foo *bar;
> 
>  	bar = longish_function(args, ...);
> 
> to increase readability.
> 
> Personally I find it more readable to always use a separate line for initializations
> by means of functions (regardless of whether the RXT scheme is used or not).

I find it best to only use initialisers for 'variables' that are (mostly) constant.
If something needs to be set to NULL in case a search fails, set it to NULL
just before the loop.
Don't put initialisation on the declaration 'because you can'.
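
E.g. a sketch, with made-up names:

	struct foo *match;	/* no initialiser at the declaration */
	struct foo *p;

	...

	match = NULL;	/* set just before the search that can fail */
	list_for_each_entry(p, &foo_list, node) {
		if (p->id == wanted_id) {
			match = p;
			break;
		}
	}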

Difficulty in spotting the type of a variable is why (IMHO) you should
put all declarations at the top of a function
(except, maybe, temporaries needed for a few lines).

	David

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-03 19:58   ` David Miller
  2016-11-04  6:53     ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Joe Perches
@ 2016-11-07 15:43     ` Madalin-Cristian Bucur
  2016-11-07 15:55       ` David Miller
  2016-11-09 17:16     ` Madalin-Cristian Bucur
  2 siblings, 1 reply; 29+ messages in thread
From: Madalin-Cristian Bucur @ 2016-11-07 15:43 UTC (permalink / raw)
  To: David Miller
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle,
	joakim.tjernlund

> From: David Miller [mailto:davem@davemloft.net]
> Sent: Thursday, November 03, 2016 9:58 PM
> 
> From: Madalin Bucur <madalin.bucur@nxp.com>
> Date: Wed, 2 Nov 2016 22:17:26 +0200
> 
> > This introduces the Freescale Data Path Acceleration Architecture
> > +static inline size_t bpool_buffer_raw_size(u8 index, u8 cnt)
> > +{
> > +	u8 i;
> > +	size_t res = DPAA_BP_RAW_SIZE / 2;
> 
> Always order local variable declarations from longest to shortest line,
> also known as Reverse Christmas Tree Format.
> 
> Please audit your entire submission for this problem, it occurs
> everywhere.

Thank you, I'll resolve this.

> > +	/* we do not want shared skbs on TX */
> > +	net_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
> 
> Why?  By clearing this, you disallow an important fundamental way to do
> performance testing, via pktgen.

The Tx path in DPAA requires one to insert a back-pointer to the skb into
the Tx buffer. On the Tx confirmation path the back-pointer in the buffer
is used to release the skb. If the Tx buffer were shared, we'd alter the
back-pointer and leak/double-free skbs. See also

	static int dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
	{
	...
		if (!nonlinear) {
			/* We're going to store the skb backpointer at the beginning
			 * of the data buffer, so we need a privately owned skb
			 *
			 * We've made sure skb is not shared in dev->priv_flags,
			 * we need to verify the skb head is not cloned
			 */
			if (skb_cow_head(skb, priv->tx_headroom))
				goto enomem;

			WARN_ON(skb_is_nonlinear(skb));
		}
	...
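
In short: the Tx path stores the skb pointer in the headroom it reserved in
front of the frame, and the confirmation path reads it back from the buffer
address returned by the HW. A simplified sketch (the real code also handles
DMA mapping and S/G frames; skbh and fd here are illustrative):

	/* Tx: store the skb backpointer at the start of the buffer */
	skbh = (struct sk_buff **)(skb->data - priv->tx_headroom);
	*skbh = skb;

	/* Tx confirmation: recover the skb from the buffer address */
	skbh = (struct sk_buff **)phys_to_virt(qm_fd_addr(fd));
	skb = *skbh;
	dev_consume_skb_any(skb);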

> > +	int numstats = sizeof(struct rtnl_link_stats64) / sizeof(u64);
>  ...
> > +		cpustats = (u64 *)&percpu_priv->stats;
> > +
> > +		for (j = 0; j < numstats; j++)
> > +			netstats[j] += cpustats[j];
> 
> This is a memcpy() on well-typed datastructures which requires no
> casting or special handling whatsoever, so use memcpy instead of
> needlessly open coding the operation.

Will fix.

> > +static int dpaa_change_mtu(struct net_device *net_dev, int new_mtu)
> > +{
> > +	const int max_mtu = dpaa_get_max_mtu();
> > +
> > +	/* Make sure we don't exceed the Ethernet controller's MAXFRM */
> > +	if (new_mtu < 68 || new_mtu > max_mtu) {
> > +		netdev_err(net_dev, "Invalid L3 mtu %d (must be between %d and %d).\n",
> > +			   new_mtu, 68, max_mtu);
> > +		return -EINVAL;
> > +	}
> > +	net_dev->mtu = new_mtu;
> > +
> > +	return 0;
> > +}
> 
> MTU restrictions are handled in the net-next tree via net_dev->min_mtu and
> net_dev->max_mtu.  Use that and do not define this NDO operation as you do
> not need it.

OK
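
I.e. set the limits once at probe time and drop the ndo entirely; roughly:

	/* at probe time; 68 is the minimum MTU used above */
	net_dev->min_mtu = 68;
	net_dev->max_mtu = dpaa_get_max_mtu();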
 
> > +static int dpaa_set_features(struct net_device *dev, netdev_features_t features)
> > +{
> > +	/* Not much to do here for now */
> > +	dev->features = features;
> > +	return 0;
> > +}
> 
> Do not define unnecessary NDO operations, let the defaults do their job.
> 
> > +static netdev_features_t dpaa_fix_features(struct net_device *dev,
> > +					   netdev_features_t features)
> > +{
> > +	netdev_features_t unsupported_features = 0;
> > +
> > +	/* In theory we should never be requested to enable features that
> > +	 * we didn't set in netdev->features and netdev->hw_features at probe
> > +	 * time, but double check just to be on the safe side.
> > +	 */
> > +	unsupported_features |= NETIF_F_RXCSUM;
> > +
> > +	features &= ~unsupported_features;
> > +
> > +	return features;
> > +}
> 
> Unless you can show that you need this, do not "guess" by implementing this
> NDO operation.  You don't need it.

Will remove it.
 
> > +#ifdef CONFIG_FSL_DPAA_ETH_FRIENDLY_IF_NAME
> > +static int dpaa_mac_hw_index_get(struct platform_device *pdev)
> > +{
> > +	struct device *dpaa_dev;
> > +	struct dpaa_eth_data *eth_data;
> > +
> > +	dpaa_dev = &pdev->dev;
> > +	eth_data = dpaa_dev->platform_data;
> > +
> > +	return eth_data->mac_hw_id;
> > +}
> > +
> > +static int dpaa_mac_fman_index_get(struct platform_device *pdev)
> > +{
> > +	struct device *dpaa_dev;
> > +	struct dpaa_eth_data *eth_data;
> > +
> > +	dpaa_dev = &pdev->dev;
> > +	eth_data = dpaa_dev->platform_data;
> > +
> > +	return eth_data->fman_hw_id;
> > +}
> > +#endif
> 
> Do not play network device naming games like this; use the standard name
> assignment done by the kernel and have userspace entities do geographic or
> device type specific naming.
> 
> I want to see this code completely removed.

I'll remove the option; udev rules like these can achieve the same effect:

SUBSYSTEM=="net", DRIVERS=="fsl_dpa*", ATTR{device_addr}=="ffe4e0000", NAME="fm1-mac1"
SUBSYSTEM=="net", DRIVERS=="fsl_dpa*", ATTR{device_addr}=="ffe4e2000", NAME="fm1-mac2"
 
> > +static int dpaa_set_mac_address(struct net_device *net_dev, void *addr)
> > +{
> > +	const struct dpaa_priv	*priv;
> > +	int err;
> > +	struct mac_device *mac_dev;
> > +
> > +	priv = netdev_priv(net_dev);
> > +
> > +	err = eth_mac_addr(net_dev, addr);
> > +	if (err < 0) {
> > +		netif_err(priv, drv, net_dev, "eth_mac_addr() = %d\n", err);
> > +		return err;
> > +	}
> > +
> > +	mac_dev = priv->mac_dev;
> > +
> > +	err = mac_dev->change_addr(mac_dev->fman_mac,
> > +				   (enet_addr_t *)net_dev->dev_addr);
> > +	if (err < 0) {
> > +		netif_err(priv, drv, net_dev, "mac_dev->change_addr() = %d\n",
> > +			  err);
> > +		return err;
> > +	}
> 
> You MUST NOT return an error at this point without rewinding the state
> change performed by eth_mac_addr().  Otherwise the device will be left
> in an inconsistent state compared to what the software MAC address has
> recorded.

Will address this.
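
Probably something along these lines (a sketch, saving the old address so
the eth_mac_addr() change can be rewound on failure):

	struct sockaddr old_addr;

	memcpy(old_addr.sa_data, net_dev->dev_addr, ETH_ALEN);

	err = eth_mac_addr(net_dev, addr);
	if (err < 0)
		return err;

	err = mac_dev->change_addr(mac_dev->fman_mac,
				   (enet_addr_t *)net_dev->dev_addr);
	if (err < 0) {
		/* rewind the software state changed by eth_mac_addr() */
		eth_mac_addr(net_dev, &old_addr);
		return err;
	}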

> This driver is enormous, I don't have the time nor the patience to
> review it further for what seems to be many fundamental errors like
> the ones I have pointed out so far.
> 
> Sorry.

Thank you for your time. I know dpaa_eth is large, about 4K LOC, and it
also depends on the in-tree FMan and QMan drivers, which are a bit
larger. We came down to this from a set of drivers that amounted to
more than 100K LOC, and things like the ndos that do nothing here are
artifacts of that deep pruning. I'll go through all the code to check
for such redundancies before the next submission, but junk is always
easier to spot for a fresh pair of eyes.

Thank you,
Madalin

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-07 15:43     ` [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet Madalin-Cristian Bucur
@ 2016-11-07 15:55       ` David Miller
  2016-11-07 16:32         ` Madalin-Cristian Bucur
  0 siblings, 1 reply; 29+ messages in thread
From: David Miller @ 2016-11-07 15:55 UTC (permalink / raw)
  To: madalin.bucur
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle,
	joakim.tjernlund

From: Madalin-Cristian Bucur <madalin.bucur@nxp.com>
Date: Mon, 7 Nov 2016 15:43:26 +0000

>> From: David Miller [mailto:davem@davemloft.net]
>> Sent: Thursday, November 03, 2016 9:58 PM
>> 
>> Why?  By clearing this, you disallow an important fundamental way to do
>> performance testing, via pktgen.
> 
> The Tx path in DPAA requires one to insert a back-pointer to the skb into
> the Tx buffer. On the Tx confirmation path the back-pointer in the buffer
> is used to release the skb. If the Tx buffer were shared, we'd alter the
> back-pointer and leak/double-free skbs. See also

Then have your software state store an array of SKB pointers, one for each
TX ring entry, just like every other driver does.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-02 20:17 ` [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet Madalin Bucur
  2016-11-03 19:58   ` David Miller
@ 2016-11-07 16:25   ` Joakim Tjernlund
  1 sibling, 0 replies; 29+ messages in thread
From: Joakim Tjernlund @ 2016-11-07 16:25 UTC (permalink / raw)
  To: netdev, madalin.bucur
  Cc: joe, linux-kernel, oss, linuxppc-dev, pebolle, ppc, davem

On Wed, 2016-11-02 at 22:17 +0200, Madalin Bucur wrote:
> This introduces the Freescale Data Path Acceleration Architecture
> (DPAA) Ethernet driver (dpaa_eth) that builds upon the DPAA QMan,
> BMan, PAMU and FMan drivers to deliver Ethernet connectivity on
> the Freescale DPAA QorIQ platforms.

Nice to see DPAA support soon entering the kernel (not a day too early :)
I would like to see BQL supported from day one though, if possible.
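
For reference, BQL mostly comes down to pairing the two accounting hooks
(a sketch; txq is the netdev_queue the frame went out on):

	/* in ndo_start_xmit(), once the frame is queued to the HW */
	netdev_tx_sent_queue(txq, skb->len);

	/* in the Tx confirmation path, after reclaiming the skbs */
	netdev_tx_completed_queue(txq, pkts_done, bytes_done);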

 Regards
          Joakim Tjernlund

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-07 15:55       ` David Miller
@ 2016-11-07 16:32         ` Madalin-Cristian Bucur
  2016-11-07 16:39           ` David Miller
  0 siblings, 1 reply; 29+ messages in thread
From: Madalin-Cristian Bucur @ 2016-11-07 16:32 UTC (permalink / raw)
  To: David Miller
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle,
	joakim.tjernlund

> -----Original Message-----
> From: David Miller [mailto:davem@davemloft.net]
> Sent: Monday, November 07, 2016 5:55 PM
> 
> From: Madalin-Cristian Bucur <madalin.bucur@nxp.com>
> Date: Mon, 7 Nov 2016 15:43:26 +0000
> 
> >> From: David Miller [mailto:davem@davemloft.net]
> >> Sent: Thursday, November 03, 2016 9:58 PM
> >>
> >> Why?  By clearing this, you disallow an important fundamental way to do
> >> performance testing, via pktgen.
> >
> > The Tx path in DPAA requires one to insert a back-pointer to the skb into
> > the Tx buffer. On the Tx confirmation path the back-pointer in the buffer
> > is used to release the skb. If the Tx buffer were shared, we'd alter the
> > back-pointer and leak/double-free skbs. See also
> 
> Then have your software state store an array of SKB pointers, one for each
> TX ring entry, just like every other driver does.

There is no Tx ring in DPAA. Frames are sent out on QMan HW queues towards
the FMan for Tx and then received back on Tx confirmation queues for cleanup.
Array traversal would for sure cost more than using the back-pointer. Also,
since we can now process confirmations on a different core than the one doing
Tx, we'd have to keep the arrays per-CPU and force the Tx confirmation onto
the same core, or add locks.

Madalin

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-07 16:32         ` Madalin-Cristian Bucur
@ 2016-11-07 16:39           ` David Miller
  2016-11-07 16:59             ` Madalin-Cristian Bucur
  0 siblings, 1 reply; 29+ messages in thread
From: David Miller @ 2016-11-07 16:39 UTC (permalink / raw)
  To: madalin.bucur
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle,
	joakim.tjernlund

From: Madalin-Cristian Bucur <madalin.bucur@nxp.com>
Date: Mon, 7 Nov 2016 16:32:16 +0000

>> -----Original Message-----
>> From: David Miller [mailto:davem@davemloft.net]
>> Sent: Monday, November 07, 2016 5:55 PM
>> 
>> From: Madalin-Cristian Bucur <madalin.bucur@nxp.com>
>> Date: Mon, 7 Nov 2016 15:43:26 +0000
>> 
>> >> From: David Miller [mailto:davem@davemloft.net]
>> >> Sent: Thursday, November 03, 2016 9:58 PM
>> >>
>> >> Why?  By clearing this, you disallow an important fundamental way to do
>> >> performance testing, via pktgen.
>> >
>> > The Tx path in DPAA requires one to insert a back-pointer to the skb into
>> > the Tx buffer. On the Tx confirmation path the back-pointer in the buffer
>> > is used to release the skb. If the Tx buffer were shared, we'd alter the
>> > back-pointer and leak/double-free skbs. See also
>> 
>> Then have your software state store an array of SKB pointers, one for each
>> TX ring entry, just like every other driver does.
> 
> There is no Tx ring in DPAA. Frames are sent out on QMan HW queues towards
> the FMan for Tx and then received back on Tx confirmation queues for cleanup.
> Array traversal would for sure cost more than using the back-pointer. Also,
> since we can now process confirmations on a different core than the one doing
> Tx, we'd have to keep the arrays per-CPU and force the Tx confirmation onto
> the same core, or add locks.

Report back an integer index, like every scsi driver out there which
completes tagged queued block I/O operations asynchronously.  You can
associate the array with a specific TX confirmation queue.
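
Schematically (hypothetical names, assuming the HW descriptor can carry a
small index through to the confirmation):

	/* xmit: claim a slot tied to this confirmation queue */
	idx = get_free_slot(conf_q);
	conf_q->skbs[idx] = skb;
	fd_set_sw_tag(fd, idx);	/* carried back by the HW */

	/* confirmation: look the skb up by index, no pointer in the buffer */
	skb = conf_q->skbs[fd_get_sw_tag(fd)];
	conf_q->skbs[fd_get_sw_tag(fd)] = NULL;
	dev_kfree_skb(skb);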

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-07 16:39           ` David Miller
@ 2016-11-07 16:59             ` Madalin-Cristian Bucur
  0 siblings, 0 replies; 29+ messages in thread
From: Madalin-Cristian Bucur @ 2016-11-07 16:59 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle

> From: David Miller [mailto:davem@davemloft.net]
> Sent: Monday, November 07, 2016 6:40 PM
> 
> From: Madalin-Cristian Bucur <madalin.bucur@nxp.com>
> Date: Mon, 7 Nov 2016 16:32:16 +0000
> 
> >> From: David Miller [mailto:davem@davemloft.net]
> >> Sent: Monday, November 07, 2016 5:55 PM
> >>
> >> From: Madalin-Cristian Bucur <madalin.bucur@nxp.com>
> >> Date: Mon, 7 Nov 2016 15:43:26 +0000
> >>
> >> >> From: David Miller [mailto:davem@davemloft.net]
> >> >> Sent: Thursday, November 03, 2016 9:58 PM
> >> >>
> >> >> Why?  By clearing this, you disallow an important fundamental way to
> >> >> do performance testing, via pktgen.
> >> >
> >> > The Tx path in DPAA requires one to insert a back-pointer to the skb
> >> > into the Tx buffer. On the Tx confirmation path the back-pointer in
> >> > the buffer is used to release the skb. If the Tx buffer were shared,
> >> > we'd alter the back-pointer and leak/double-free skbs. See also
> >>
> >> Then have your software state store an array of SKB pointers, one for
> each
> >> TX ring entry, just like every other driver does.
> >
> > There is no Tx ring in DPAA. Frames are sent out on QMan HW queues towards
> > the FMan for Tx and then received back on Tx confirmation queues for cleanup.
> > Array traversal would for sure cost more than using the back-pointer. Also,
> > since we can now process confirmations on a different core than the one doing
> > Tx, we'd have to keep the arrays per-CPU and force the Tx confirmation onto
> > the same core, or add locks.
> 
> Report back an integer index, like every scsi driver out there which
> completes tagged queued block I/O operations asynchronously.  You can
> associate the array with a specific TX confirmation queue.

From HW? It only gives you back the buffer start address (plus length, etc.).
"buff_2_skb()" needs to be solved in SW, either expensively using an array
(or lists, as the number of frames in flight can be large/variable) or cheaply
with the back-pointer. The back-pointer approach has its tradeoffs: no shared
skbs and an imposed non-zero needed_headroom.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-03 19:58   ` David Miller
  2016-11-04  6:53     ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Joe Perches
  2016-11-07 15:43     ` [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet Madalin-Cristian Bucur
@ 2016-11-09 17:16     ` Madalin-Cristian Bucur
  2016-11-09 17:18       ` David Miller
  2 siblings, 1 reply; 29+ messages in thread
From: Madalin-Cristian Bucur @ 2016-11-09 17:16 UTC (permalink / raw)
  To: David Miller
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle,
	joakim.tjernlund

> From: Madalin-Cristian Bucur
> Sent: Monday, November 07, 2016 5:43 PM
> 
> > From: David Miller [mailto:davem@davemloft.net]
> > Sent: Thursday, November 03, 2016 9:58 PM
> >
> > From: Madalin Bucur <madalin.bucur@nxp.com>
> > Date: Wed, 2 Nov 2016 22:17:26 +0200
> >
> > > This introduces the Freescale Data Path Acceleration Architecture
<snip>

> > > +	int numstats = sizeof(struct rtnl_link_stats64) / sizeof(u64);
> >  ...
> > > +		cpustats = (u64 *)&percpu_priv->stats;
> > > +
> > > +		for (j = 0; j < numstats; j++)
> > > +			netstats[j] += cpustats[j];
> >
> > This is a memcpy() on well-typed data structures which requires no
> > casting or special handling whatsoever, so use memcpy instead of
> > needlessly open coding the operation.
> 
> Will fix.

Took a second look at this: it's not copying, but adding the per-CPU
statistics into the consolidated results.

Madalin

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet
  2016-11-09 17:16     ` Madalin-Cristian Bucur
@ 2016-11-09 17:18       ` David Miller
  0 siblings, 0 replies; 29+ messages in thread
From: David Miller @ 2016-11-09 17:18 UTC (permalink / raw)
  To: madalin.bucur
  Cc: netdev, linuxppc-dev, linux-kernel, oss, ppc, joe, pebolle,
	joakim.tjernlund

From: Madalin-Cristian Bucur <madalin.bucur@nxp.com>
Date: Wed, 9 Nov 2016 17:16:12 +0000

>> From: Madalin-Cristian Bucur
>> Sent: Monday, November 07, 2016 5:43 PM
>> 
>> > From: David Miller [mailto:davem@davemloft.net]
>> > Sent: Thursday, November 03, 2016 9:58 PM
>> >
>> > From: Madalin Bucur <madalin.bucur@nxp.com>
>> > Date: Wed, 2 Nov 2016 22:17:26 +0200
>> >
>> > > This introduces the Freescale Data Path Acceleration Architecture
> <snip>
> 
>> > > +	int numstats = sizeof(struct rtnl_link_stats64) / sizeof(u64);
>> >  ...
>> > > +		cpustats = (u64 *)&percpu_priv->stats;
>> > > +
>> > > +		for (j = 0; j < numstats; j++)
>> > > +			netstats[j] += cpustats[j];
>> >
>> > This is a memcpy() on well-typed data structures which requires no
>> > casting or special handling whatsoever, so use memcpy instead of
>> > needlessly open coding the operation.
>> 
>> Will fix.
> 
> Took a second look at this: it's not copying, but adding the per-CPU
> statistics into the consolidated results.

Ok, then it looks fine, thanks for clarifying.

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2016-11-09 17:18 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-02 20:17 [PATCH net-next v6 00/10] dpaa_eth: Add the QorIQ DPAA Ethernet driver Madalin Bucur
2016-11-02 20:17 ` [PATCH net-next v6 01/10] devres: add devm_alloc_percpu() Madalin Bucur
2016-11-02 20:17 ` [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet Madalin Bucur
2016-11-03 19:58   ` David Miller
2016-11-04  6:53     ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Joe Perches
2016-11-04 11:01       ` Lino Sanfilippo
2016-11-04 15:07         ` Coding Style: Reverse XMAS tree declarations ? David Miller
2016-11-04 17:44           ` Joe Perches
2016-11-04 20:06             ` Lino Sanfilippo
2016-11-07 11:00               ` David Laight
2016-11-04 17:05       ` Coding Style: Reverse XMAS tree declarations ? (was Re: [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet) Randy Dunlap
2016-11-04 19:48         ` David VomLehn
2016-11-07  8:05       ` Michael Ellerman
2016-11-07 15:43     ` [PATCH net-next v6 02/10] dpaa_eth: add support for DPAA Ethernet Madalin-Cristian Bucur
2016-11-07 15:55       ` David Miller
2016-11-07 16:32         ` Madalin-Cristian Bucur
2016-11-07 16:39           ` David Miller
2016-11-07 16:59             ` Madalin-Cristian Bucur
2016-11-09 17:16     ` Madalin-Cristian Bucur
2016-11-09 17:18       ` David Miller
2016-11-07 16:25   ` Joakim Tjernlund
2016-11-02 20:17 ` [PATCH net-next v6 03/10] dpaa_eth: add option to use one buffer pool set Madalin Bucur
2016-11-02 20:17 ` [PATCH net-next v6 04/10] dpaa_eth: add ethtool functionality Madalin Bucur
2016-11-02 20:17 ` [PATCH net-next v6 05/10] dpaa_eth: add ethtool statistics Madalin Bucur
2016-11-02 20:17 ` [PATCH net-next v6 06/10] dpaa_eth: add sysfs exports Madalin Bucur
2016-11-02 20:17 ` [PATCH net-next v6 07/10] dpaa_eth: add trace points Madalin Bucur
2016-11-02 20:17 ` [PATCH net-next v6 08/10] arch/powerpc: Enable FSL_PAMU Madalin Bucur
2016-11-02 20:17 ` [PATCH net-next v6 09/10] arch/powerpc: Enable FSL_FMAN Madalin Bucur
2016-11-02 20:17 ` [PATCH net-next v6 10/10] arch/powerpc: Enable dpaa_eth Madalin Bucur

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).