* [PATCH net-next v2 00/10] Support for the Broadcom GENET driver
@ 2014-02-13  5:29 ` Florian Fainelli
  0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

Hi all,

This patchset adds support for the Broadcom GENET Gigabit Ethernet MAC
controller, versions 1 to 4. This controller is found on the Broadcom
BCM7xxx Set Top Box System-on-a-Chip hardware.

Florian Fainelli (10):
  net: phy: add "internal" PHY mode
  net: phy: add MoCA PHY type
  net: phy: update port type for MoCA PHYs
  net: phy: add Broadcom BCM7xxx internal PHY driver
  net: bcmgenet: add driver definitions and private structure
  net: bcmgenet: add main driver file
  net: bcmgenet: add MDIO routines
  net: bcmgenet: hook into the build system
  Documentation: add Device tree bindings for Broadcom GENET
  MAINTAINERS: add entry for the Broadcom GENET driver

 .../devicetree/bindings/net/broadcom-bcmgenet.txt  |  111 +
 MAINTAINERS                                        |    6 +
 drivers/net/ethernet/broadcom/Kconfig              |   10 +
 drivers/net/ethernet/broadcom/Makefile             |    1 +
 drivers/net/ethernet/broadcom/genet/Makefile       |    2 +
 drivers/net/ethernet/broadcom/genet/bcmgenet.c     | 2664 ++++++++++++++++++++
 drivers/net/ethernet/broadcom/genet/bcmgenet.h     |  624 +++++
 drivers/net/ethernet/broadcom/genet/bcmmii.c       |  481 ++++
 drivers/net/phy/Kconfig                            |    6 +
 drivers/net/phy/Makefile                           |    1 +
 drivers/net/phy/bcm7xxx.c                          |  322 +++
 drivers/net/phy/phy.c                              |    5 +-
 include/linux/brcmphy.h                            |    9 +
 include/linux/phy.h                                |    9 +-
 14 files changed, 4249 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt
 create mode 100644 drivers/net/ethernet/broadcom/genet/Makefile
 create mode 100644 drivers/net/ethernet/broadcom/genet/bcmgenet.c
 create mode 100644 drivers/net/ethernet/broadcom/genet/bcmgenet.h
 create mode 100644 drivers/net/ethernet/broadcom/genet/bcmmii.c
 create mode 100644 drivers/net/phy/bcm7xxx.c

-- 
1.8.3.2

* [PATCH net-next v2 01/10] net: phy: add "internal" PHY mode
@ 2014-02-13  5:29     ` Florian Fainelli
  1 sibling, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

On some systems, the PHY can be internal, that is, in the same package
as the Ethernet MAC, while still responding to a specific address on the
MDIO bus. In that case, the Ethernet MAC might need to know about it in
order to properly configure a port multiplexer to switch between an
internal and an external PHY. Add a new PHY interface mode for this and
update the Device Tree of_get_phy_mode() function to look for it.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- rebased against the latest net-next master branch

 include/linux/phy.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/linux/phy.h b/include/linux/phy.h
index 42f1bc7..743c90e 100644
--- a/include/linux/phy.h
+++ b/include/linux/phy.h
@@ -74,6 +74,7 @@ typedef enum {
 	PHY_INTERFACE_MODE_RTBI,
 	PHY_INTERFACE_MODE_SMII,
 	PHY_INTERFACE_MODE_XGMII,
+	PHY_INTERFACE_MODE_INTERNAL,
 	PHY_INTERFACE_MODE_MAX,
 } phy_interface_t;
 
@@ -113,6 +114,8 @@ static inline const char *phy_modes(phy_interface_t interface)
 		return "smii";
 	case PHY_INTERFACE_MODE_XGMII:
 		return "xgmii";
+	case PHY_INTERFACE_MODE_INTERNAL:
+		return "internal";
 	default:
 		return "unknown";
 	}
@@ -599,7 +602,8 @@ static inline bool phy_interrupt_is_valid(struct phy_device *phydev)
  */
 static inline bool phy_is_internal(struct phy_device *phydev)
 {
-	return phydev->is_internal;
+	return phydev->is_internal ||
+		phydev->interface == PHY_INTERFACE_MODE_INTERNAL;
 }
 
 /**
-- 
1.8.3.2

* [PATCH net-next v2 02/10] net: phy: add MoCA PHY type
  2014-02-13  5:29 ` Florian Fainelli
@ 2014-02-13  5:29   ` Florian Fainelli
  -1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

Some Ethernet MACs are connected to a MoCA PHY, which handles the
low-level job of sending Ethernet frames over the coaxial cable; these
Ethernet MACs need to know about it in order to be properly configured.
Add a new PHY mode "moca" and update the Device Tree parsing logic to
look for it.
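
For illustration, once the new mode is parsed, a MAC driver can report
it through the existing phy_modes() helper; a small, hypothetical sketch
(the function name is made up):

    #include <linux/netdevice.h>
    #include <linux/phy.h>

    /* Hypothetical example: log the PHY interface mode at probe time.
     * After this patch, phy_modes(PHY_INTERFACE_MODE_MOCA) returns
     * "moca" instead of "unknown".
     */
    static void example_mac_log_phy_mode(struct net_device *dev,
                                         phy_interface_t mode)
    {
            netdev_info(dev, "PHY interface mode: %s\n", phy_modes(mode));
    }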

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- rebased against the latest net-next master branch

 include/linux/phy.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/phy.h b/include/linux/phy.h
index 743c90e..950d2c9 100644
--- a/include/linux/phy.h
+++ b/include/linux/phy.h
@@ -75,6 +75,7 @@ typedef enum {
 	PHY_INTERFACE_MODE_SMII,
 	PHY_INTERFACE_MODE_XGMII,
 	PHY_INTERFACE_MODE_INTERNAL,
+	PHY_INTERFACE_MODE_MOCA,
 	PHY_INTERFACE_MODE_MAX,
 } phy_interface_t;
 
@@ -116,6 +117,8 @@ static inline const char *phy_modes(phy_interface_t interface)
 		return "xgmii";
 	case PHY_INTERFACE_MODE_INTERNAL:
 		return "internal";
+	case PHY_INTERFACE_MODE_MOCA:
+		return "moca";
 	default:
 		return "unknown";
 	}
-- 
1.8.3.2

* [PATCH net-next v2 03/10] net: phy: update port type for MoCA PHYs
  2014-02-13  5:29 ` Florian Fainelli
@ 2014-02-13  5:29   ` Florian Fainelli
  -1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

MoCA PHYs use coaxial (BNC-like) connectors, so update the transceiver
port type accordingly when replying to ethtool.
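
For context, MAC drivers that forward their ethtool get_settings
operation to phylib automatically pick up the new reporting; a minimal,
hypothetical sketch (the wrapper name is made up):

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>
    #include <linux/phy.h>

    /* Hypothetical example: delegate ETHTOOL_GSET to phylib, which now
     * reports PORT_BNC instead of PORT_MII for MoCA PHYs.
     */
    static int example_mac_get_settings(struct net_device *dev,
                                        struct ethtool_cmd *cmd)
    {
            if (!dev->phydev)
                    return -ENODEV;

            return phy_ethtool_gset(dev->phydev, cmd);
    }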

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- rebased

 drivers/net/phy/phy.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index fc918b6..643b5d6 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -305,7 +305,10 @@ int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd)
 
 	ethtool_cmd_speed_set(cmd, phydev->speed);
 	cmd->duplex = phydev->duplex;
-	cmd->port = PORT_MII;
+	if (phydev->interface == PHY_INTERFACE_MODE_MOCA)
+		cmd->port = PORT_BNC;
+	else
+		cmd->port = PORT_MII;
 	cmd->phy_address = phydev->addr;
 	cmd->transceiver = phy_is_internal(phydev) ?
 		XCVR_INTERNAL : XCVR_EXTERNAL;
-- 
1.8.3.2

* [PATCH net-next v2 04/10] net: phy: add Broadcom BCM7xxx internal PHY driver
@ 2014-02-13  5:29     ` Florian Fainelli
  1 sibling, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

This patch adds support for the internal PHYs found in Broadcom BCM7xxx
Set Top Box SoCs. The driver supports the following generations of SoCs:

- BCM7366, BCM7439, BCM7445 (28nm process)
- all 40nm and 65nm (older MIPS-based SoCs)

The PHYs on these SoCs require a number of workarounds to operate
correctly, both at configuration time and at suspend/resume time; the
driver handles these for us.
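
As an illustration, the PHY_BRCM_100MBPS_WAR flag introduced below is
meant to be passed in by the Ethernet MAC driver when it connects to its
PHY; here is a hypothetical sketch using of_phy_connect() (the wrapper
name is made up, only the flag and interface mode come from this series):

    #include <linux/brcmphy.h>
    #include <linux/netdevice.h>
    #include <linux/of_mdio.h>
    #include <linux/phy.h>

    /* Hypothetical example: connect to the internal PHY and request the
     * 100Mbits/sec workaround; the flag ends up in phydev->dev_flags and
     * is checked by bcm7xxx_config_init() below.
     */
    static struct phy_device *
    example_mac_connect_phy(struct net_device *dev,
                            struct device_node *phy_dn,
                            void (*adjust_link)(struct net_device *))
    {
            return of_phy_connect(dev, phy_dn, adjust_link,
                                  PHY_BRCM_100MBPS_WAR,
                                  PHY_INTERFACE_MODE_INTERNAL);
    }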

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- update the 28nm AFE workaround to clear the EEE LPI
  timer (the value was previously 0xeb17, it is now 0xeb19)

 drivers/net/phy/Kconfig   |   6 +
 drivers/net/phy/Makefile  |   1 +
 drivers/net/phy/bcm7xxx.c | 322 ++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/brcmphy.h   |   9 ++
 4 files changed, 338 insertions(+)
 create mode 100644 drivers/net/phy/bcm7xxx.c

diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
index 9b5d46c..6a17f92 100644
--- a/drivers/net/phy/Kconfig
+++ b/drivers/net/phy/Kconfig
@@ -71,6 +71,12 @@ config BCM63XX_PHY
 	---help---
 	  Currently supports the 6348 and 6358 PHYs.
 
+config BCM7XXX_PHY
+	tristate "Drivers for Broadcom 7xxx SOCs internal PHYs"
+	---help---
+	  Currently supports the BCM7366, BCM7439, BCM7445, and
+	  40nm and 65nm generation of BCM7xxx Set Top Box SoCs.
+
 config BCM87XX_PHY
 	tristate "Driver for Broadcom BCM8706 and BCM8727 PHYs"
 	help
diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile
index 9013dfa..07d2402 100644
--- a/drivers/net/phy/Makefile
+++ b/drivers/net/phy/Makefile
@@ -12,6 +12,7 @@ obj-$(CONFIG_SMSC_PHY)		+= smsc.o
 obj-$(CONFIG_VITESSE_PHY)	+= vitesse.o
 obj-$(CONFIG_BROADCOM_PHY)	+= broadcom.o
 obj-$(CONFIG_BCM63XX_PHY)	+= bcm63xx.o
+obj-$(CONFIG_BCM7XXX_PHY)	+= bcm7xxx.o
 obj-$(CONFIG_BCM87XX_PHY)	+= bcm87xx.o
 obj-$(CONFIG_ICPLUS_PHY)	+= icplus.o
 obj-$(CONFIG_REALTEK_PHY)	+= realtek.o
diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c
new file mode 100644
index 0000000..6aea6e2
--- /dev/null
+++ b/drivers/net/phy/bcm7xxx.c
@@ -0,0 +1,322 @@
+/*
+ * Broadcom BCM7xxx internal transceivers support.
+ *
+ * Copyright (C) 2014, Broadcom Corporation
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/phy.h>
+#include <linux/delay.h>
+#include <linux/brcmphy.h>
+
+static int bcm7445_config_init(struct phy_device *phydev)
+{
+	int ret;
+	const struct bcm7445_regs {
+		int reg;
+		u16 value;
+	} bcm7445_regs_cfg[] = {
+		/* increases ADC latency by 24ns */
+		{ 0x17, 0x0038 },
+		{ 0x15, 0xAB95 },
+		/* increases internal 1V LDO voltage by 5% */
+		{ 0x17, 0x2038 },
+		{ 0x15, 0xBB22 },
+		/* reduce RX low pass filter corner frequency */
+		{ 0x17, 0x6038 },
+		{ 0x15, 0xFFC5 },
+		/* reduce RX high pass filter corner frequency */
+		{ 0x17, 0x003a },
+		{ 0x15, 0x2002 },
+	};
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(bcm7445_regs_cfg); i++) {
+		ret = phy_write(phydev,
+				bcm7445_regs_cfg[i].reg,
+				bcm7445_regs_cfg[i].value);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static void phy_write_exp(struct phy_device *phydev,
+					u16 reg, u16 value)
+{
+	phy_write(phydev, 0x17, 0xf00 | reg);
+	phy_write(phydev, 0x15, value);
+}
+
+static void phy_write_misc(struct phy_device *phydev,
+					u16 reg, u16 chl, u16 value)
+{
+	int tmp;
+
+	phy_write(phydev, 0x18, 0x7);
+
+	tmp = phy_read(phydev, 0x18);
+	tmp |= 0x800;
+	phy_write(phydev, 0x18, tmp);
+
+	tmp = (chl * 0x2000) | reg;
+	phy_write(phydev, 0x17, tmp);
+
+	phy_write(phydev, 0x15, value);
+}
+
+static int bcm7xxx_28nm_afe_config_init(struct phy_device *phydev)
+{
+	/* write AFE_RXCONFIG_0 */
+	phy_write_misc(phydev, 0x38, 0x0000, 0xeb19);
+
+	/* write AFE_RXCONFIG_1 */
+	phy_write_misc(phydev, 0x38, 0x0001, 0x9a3f);
+
+	/* write AFE_RX_LP_COUNTER */
+	phy_write_misc(phydev, 0x38, 0x0003, 0x7fc7);
+
+	/* write AFE_HPF_TRIM_OTHERS */
+	phy_write_misc(phydev, 0x3A, 0x0000, 0x000b);
+
+	/* write AFTE_TX_CONFIG */
+	phy_write_misc(phydev, 0x39, 0x0000, 0x0800);
+
+	/* Increase VCO range to prevent unlocking problem of PLL at low
+	 * temp
+	 */
+	phy_write_misc(phydev, 0x0032, 0x0001, 0x0048);
+
+	/* Change Ki to 011 */
+	phy_write_misc(phydev, 0x0032, 0x0002, 0x021b);
+
+	/* Disable loading of TVCO buffer to bandgap, set bandgap trim
+	 * to 111
+	 */
+	phy_write_misc(phydev, 0x0033, 0x0000, 0x0e20);
+
+	/* Adjust bias current trim by -3 */
+	phy_write_misc(phydev, 0x000a, 0x0000, 0x690b);
+
+	/* Switch to CORE_BASE1E */
+	phy_write(phydev, 0x1e, 0xd);
+
+	/* Reset R_CAL/RC_CAL Engine */
+	phy_write_exp(phydev, 0x00b0, 0x0010);
+
+	/* Disable Reset R_CAL/RC_CAL Engine */
+	phy_write_exp(phydev, 0x00b0, 0x0000);
+
+	return 0;
+}
+
+static int bcm7xxx_28nm_config_init(struct phy_device *phydev)
+{
+	int ret;
+
+	ret = bcm7445_config_init(phydev);
+	if (ret)
+		return ret;
+
+	return bcm7xxx_28nm_afe_config_init(phydev);
+}
+
+static int phy_set_clr_bits(struct phy_device *dev, int location,
+					int set_mask, int clr_mask)
+{
+	int v, ret;
+
+	v = phy_read(dev, location);
+	if (v < 0)
+		return v;
+
+	v &= ~clr_mask;
+	v |= set_mask;
+
+	ret = phy_write(dev, location, v);
+	if (ret < 0)
+		return ret;
+
+	return v;
+}
+
+static int bcm7xxx_config_init(struct phy_device *phydev)
+{
+	/* Enable 64 clock MDIO */
+	phy_write(phydev, 0x1d, 0x1000);
+	phy_read(phydev, 0x1d);
+
+	/* Workaround only required for 100Mbits/sec */
+	if (!(phydev->dev_flags & PHY_BRCM_100MBPS_WAR))
+		return 0;
+
+	/* set shadow mode 2 */
+	phy_set_clr_bits(phydev, 0x1f, 0x0004, 0x0004);
+
+	/* set iddq_clkbias */
+	phy_write(phydev, 0x14, 0x0F00);
+	udelay(10);
+
+	/* reset iddq_clkbias */
+	phy_write(phydev, 0x14, 0x0C00);
+
+	phy_write(phydev, 0x13, 0x7555);
+
+	/* reset shadow mode 2 */
+	phy_set_clr_bits(phydev, 0x1f, 0x0004, 0);
+
+	return 0;
+}
+
+/* Workaround for putting the PHY in IDDQ mode, required
+ * for all BCM7XXX PHYs
+ */
+static int bcm7xxx_suspend(struct phy_device *phydev)
+{
+	int ret;
+	const struct bcm7xxx_regs {
+		int reg;
+		u16 value;
+	} bcm7xxx_suspend_cfg[] = {
+		{ 0x1f, 0x008b },
+		{ 0x10, 0x01c0 },
+		{ 0x14, 0x7000 },
+		{ 0x1f, 0x000f },
+		{ 0x10, 0x20d0 },
+		{ 0x1f, 0x000b },
+	};
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(bcm7xxx_suspend_cfg); i++) {
+		ret = phy_write(phydev,
+				bcm7xxx_suspend_cfg[i].reg,
+				bcm7xxx_suspend_cfg[i].value);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int bcm7xxx_dummy_config_init(struct phy_device *phydev)
+{
+	return 0;
+}
+
+static struct phy_driver bcm7xxx_driver[] = {
+{
+	.phy_id		= PHY_ID_BCM7366,
+	.phy_id_mask	= 0xfffffff0,
+	.name		= "Broadcom BCM7366",
+	.features	= PHY_GBIT_FEATURES |
+			  SUPPORTED_Pause | SUPPORTED_Asym_Pause,
+	.flags		= PHY_IS_INTERNAL,
+	.config_init	= bcm7xxx_28nm_afe_config_init,
+	.config_aneg	= genphy_config_aneg,
+	.read_status	= genphy_read_status,
+	.suspend	= bcm7xxx_suspend,
+	.resume		= bcm7xxx_28nm_afe_config_init,
+	.driver		= { .owner = THIS_MODULE },
+}, {
+	.phy_id		= PHY_ID_BCM7439,
+	.phy_id_mask	= 0xfffffff0,
+	.name		= "Broadcom BCM7439",
+	.features	= PHY_GBIT_FEATURES |
+			  SUPPORTED_Pause | SUPPORTED_Asym_Pause,
+	.flags		= PHY_IS_INTERNAL,
+	.config_init	= bcm7xxx_28nm_afe_config_init,
+	.config_aneg	= genphy_config_aneg,
+	.read_status	= genphy_read_status,
+	.suspend	= bcm7xxx_suspend,
+	.resume		= bcm7xxx_28nm_afe_config_init,
+	.driver		= { .owner = THIS_MODULE },
+}, {
+	.phy_id		= PHY_ID_BCM7445,
+	.phy_id_mask	= 0xfffffff0,
+	.name		= "Broadcom BCM7445",
+	.features	= PHY_GBIT_FEATURES |
+			  SUPPORTED_Pause | SUPPORTED_Asym_Pause,
+	.flags		= PHY_IS_INTERNAL,
+	.config_init	= bcm7xxx_28nm_config_init,
+	.config_aneg	= genphy_config_aneg,
+	.read_status	= genphy_read_status,
+	.suspend	= bcm7xxx_suspend,
+	.resume		= bcm7xxx_28nm_config_init,
+	.driver		= { .owner = THIS_MODULE },
+}, {
+	.name		= "Broadcom BCM7XXX 28nm",
+	.phy_id		= PHY_ID_BCM7XXX_28,
+	.phy_id_mask	= PHY_BCM_OUI_MASK,
+	.features	= PHY_GBIT_FEATURES |
+			  SUPPORTED_Pause | SUPPORTED_Asym_Pause,
+	.flags		= PHY_IS_INTERNAL,
+	.config_init	= bcm7xxx_28nm_config_init,
+	.config_aneg	= genphy_config_aneg,
+	.read_status	= genphy_read_status,
+	.suspend	= bcm7xxx_suspend,
+	.resume		= bcm7xxx_28nm_config_init,
+	.driver		= { .owner = THIS_MODULE },
+}, {
+	.phy_id		= PHY_BCM_OUI_4,
+	.phy_id_mask	= 0xffff0000,
+	.name		= "Broadcom BCM7XXX 40nm",
+	.features	= PHY_GBIT_FEATURES |
+			  SUPPORTED_Pause | SUPPORTED_Asym_Pause,
+	.flags		= PHY_IS_INTERNAL,
+	.config_init	= bcm7xxx_config_init,
+	.config_aneg	= genphy_config_aneg,
+	.read_status	= genphy_read_status,
+	.suspend	= bcm7xxx_suspend,
+	.resume		= bcm7xxx_config_init,
+	.driver		= { .owner = THIS_MODULE },
+}, {
+	.phy_id		= PHY_BCM_OUI_5,
+	.phy_id_mask	= 0xffffff00,
+	.name		= "Broadcom BCM7XXX 65nm",
+	.features	= PHY_BASIC_FEATURES |
+			  SUPPORTED_Pause | SUPPORTED_Asym_Pause,
+	.flags		= PHY_IS_INTERNAL,
+	.config_init	= bcm7xxx_dummy_config_init,
+	.config_aneg	= genphy_config_aneg,
+	.read_status	= genphy_read_status,
+	.suspend	= bcm7xxx_suspend,
+	.resume		= bcm7xxx_config_init,
+	.driver		= { .owner = THIS_MODULE },
+} };
+
+static struct mdio_device_id __maybe_unused bcm7xxx_tbl[] = {
+	{ PHY_ID_BCM7366, 0xfffffff0, },
+	{ PHY_ID_BCM7439, 0xfffffff0, },
+	{ PHY_ID_BCM7445, 0xfffffff0, },
+	{ PHY_ID_BCM7XXX_28, 0xfffffc00 },
+	{ PHY_BCM_OUI_4, 0xffff0000 },
+	{ PHY_BCM_OUI_5, 0xffffff00 },
+	{ }
+};
+
+static int __init bcm7xxx_phy_init(void)
+{
+	return phy_drivers_register(bcm7xxx_driver,
+			ARRAY_SIZE(bcm7xxx_driver));
+}
+
+static void __exit bcm7xxx_phy_exit(void)
+{
+	phy_drivers_unregister(bcm7xxx_driver,
+			ARRAY_SIZE(bcm7xxx_driver));
+}
+
+module_init(bcm7xxx_phy_init);
+module_exit(bcm7xxx_phy_exit);
+
+MODULE_DEVICE_TABLE(mdio, bcm7xxx_tbl);
+
+MODULE_DESCRIPTION("Broadcom BCM7xxx internal PHY driver");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Broadcom Corporation");
diff --git a/include/linux/brcmphy.h b/include/linux/brcmphy.h
index 677b4f0..e9fc98d 100644
--- a/include/linux/brcmphy.h
+++ b/include/linux/brcmphy.h
@@ -13,10 +13,17 @@
 #define PHY_ID_BCM5461			0x002060c0
 #define PHY_ID_BCM57780			0x03625d90
 
+#define PHY_ID_BCM7366			0x600d8490
+#define PHY_ID_BCM7439			0x600d8480
+#define PHY_ID_BCM7445			0x600d8510
+#define PHY_ID_BCM7XXX_28		0x600d8400
+
 #define PHY_BCM_OUI_MASK		0xfffffc00
 #define PHY_BCM_OUI_1			0x00206000
 #define PHY_BCM_OUI_2			0x0143bc00
 #define PHY_BCM_OUI_3			0x03625c00
+#define PHY_BCM_OUI_4			0x600d0000
+#define PHY_BCM_OUI_5			0x03625e00
 
 
 #define PHY_BCM_FLAGS_MODE_COPPER	0x00000001
@@ -31,6 +38,8 @@
 #define PHY_BRCM_EXT_IBND_TX_ENABLE	0x00002000
 #define PHY_BRCM_CLEAR_RGMII_MODE	0x00004000
 #define PHY_BRCM_DIS_TXCRXC_NOENRGY	0x00008000
+/* Broadcom BCM7xxx specific workarounds */
+#define PHY_BRCM_100MBPS_WAR		0x00010000
 #define PHY_BCM_FLAGS_VALID		0x80000000
 
 #endif /* _LINUX_BRCMPHY_H */
-- 
1.8.3.2

* [PATCH net-next v2 05/10] net: bcmgenet: add driver definitions and private structure
  2014-02-13  5:29 ` Florian Fainelli
@ 2014-02-13  5:29   ` Florian Fainelli
  -1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

This patch adds the bcmgenet.h header file, which contains all the
hardware definitions for the GENETv1 to GENETv4 hardware blocks, as well
as the driver private structure and MIB counters.
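
For reference, the GENET_IO_MACRO() at the end of the header expands
into per-block 32-bit register accessors; a minimal, hypothetical usage
sketch (the helper name is made up, the accessors, registers and bit
definitions are taken from the header below):

    /* Hypothetical example: enable the UniMAC transmitter and receiver
     * using the accessors generated by GENET_IO_MACRO(umac, ...).
     */
    static void example_umac_enable(struct bcmgenet_priv *priv)
    {
            u32 reg;

            reg = bcmgenet_umac_readl(priv, UMAC_CMD);
            reg |= CMD_TX_EN | CMD_RX_EN;
            bcmgenet_umac_writel(priv, reg, UMAC_CMD);
    }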

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- removed priv->phy_type usage in favor of priv->phy_interface
- remove BRCM_PHY_TYPE_{INT,MOCA} defines, now replaced by their
  PHY_INTERFACE_MODE counterparts
- small line suppression cleanups

 drivers/net/ethernet/broadcom/genet/bcmgenet.h | 624 +++++++++++++++++++++++++
 1 file changed, 624 insertions(+)
 create mode 100644 drivers/net/ethernet/broadcom/genet/bcmgenet.h

diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
new file mode 100644
index 0000000..879fb30
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -0,0 +1,624 @@
+/*
+ * Copyright (c) 2014 Broadcom Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ *
+*/
+#ifndef __BCMGENET_H__
+#define __BCMGENET_H__
+
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/spinlock.h>
+#include <linux/clk.h>
+#include <linux/mii.h>
+#include <linux/if_vlan.h>
+#include <linux/phy.h>
+
+/* total number of Buffer Descriptors, same for Rx/Tx */
+#define TOTAL_DESC				256
+
+/* which ring is descriptor based */
+#define DESC_INDEX				16
+
+/* Body(1500) + EH_SIZE(14) + VLANTAG(4) + BRCMTAG(6) + FCS(4) = 1528.
+ * 1536 is multiple of 256 bytes
+ */
+#define ENET_BRCM_TAG_LEN	6
+#define ENET_PAD		8
+#define ENET_MAX_MTU_SIZE	(ETH_DATA_LEN + ETH_HLEN + VLAN_HLEN + \
+				 ENET_BRCM_TAG_LEN + ETH_FCS_LEN + ENET_PAD)
+#define DMA_MAX_BURST_LENGTH    0x10
+
+/* misc. configuration */
+#define CLEAR_ALL_HFB			0xFF
+#define DMA_FC_THRESH_HI		(TOTAL_DESC >> 4)
+#define DMA_FC_THRESH_LO		5
+
+/* 64B receive/transmit status block */
+struct status_64 {
+	u32	length_status;		/* length and peripheral status */
+	u32	ext_status;		/* Extended status*/
+	u32	rx_csum;		/* partial rx checksum */
+	u32	unused1[9];		/* unused */
+	u32	tx_csum_info;		/* Tx checksum info. */
+	u32	unused2[3];		/* unused */
+};
+
+/* Rx status bits */
+#define STATUS_RX_EXT_MASK		0x1FFFFF
+#define STATUS_RX_CSUM_MASK		0xFFFF
+#define STATUS_RX_CSUM_OK		0x10000
+#define STATUS_RX_CSUM_FR		0x20000
+#define STATUS_RX_PROTO_TCP		0
+#define STATUS_RX_PROTO_UDP		1
+#define STATUS_RX_PROTO_ICMP		2
+#define STATUS_RX_PROTO_OTHER		3
+#define STATUS_RX_PROTO_MASK		3
+#define STATUS_RX_PROTO_SHIFT		18
+#define STATUS_FILTER_INDEX_MASK	0xFFFF
+/* Tx status bits */
+#define STATUS_TX_CSUM_START_MASK	0X7FFF
+#define STATUS_TX_CSUM_START_SHIFT	16
+#define STATUS_TX_CSUM_PROTO_UDP	0x8000
+#define STATUS_TX_CSUM_OFFSET_MASK	0x7FFF
+#define STATUS_TX_CSUM_LV		0x80000000
+
+/* DMA Descriptor */
+#define DMA_DESC_LENGTH_STATUS	0x00	/* in bytes of data in buffer */
+#define DMA_DESC_ADDRESS_LO	0x04	/* lower bits of PA */
+#define DMA_DESC_ADDRESS_HI	0x08	/* upper 32 bits of PA, GENETv4+ */
+
+/* Rx/Tx common counter group */
+struct bcmgenet_pkt_counters {
+	u32	cnt_64;		/* RO Received/Transmited 64 bytes packet */
+	u32	cnt_127;	/* RO Rx/Tx 127 bytes packet */
+	u32	cnt_255;	/* RO Rx/Tx 65-255 bytes packet */
+	u32	cnt_511;	/* RO Rx/Tx 256-511 bytes packet */
+	u32	cnt_1023;	/* RO Rx/Tx 512-1023 bytes packet */
+	u32	cnt_1518;	/* RO Rx/Tx 1024-1518 bytes packet */
+	u32	cnt_mgv;	/* RO Rx/Tx 1519-1522 good VLAN packet */
+	u32	cnt_2047;	/* RO Rx/Tx 1522-2047 bytes packet*/
+	u32	cnt_4095;	/* RO Rx/Tx 2048-4095 bytes packet*/
+	u32	cnt_9216;	/* RO Rx/Tx 4096-9216 bytes packet*/
+};
+
+/* RSV, Receive Status Vector */
+struct bcmgenet_rx_counters {
+	struct  bcmgenet_pkt_counters pkt_cnt;
+	u32	pkt;		/* RO (0x428) Received pkt count*/
+	u32	bytes;		/* RO Received byte count */
+	u32	mca;		/* RO # of Received multicast pkt */
+	u32	bca;		/* RO # of Receive broadcast pkt */
+	u32	fcs;		/* RO # of Received FCS error  */
+	u32	cf;		/* RO # of Received control frame pkt*/
+	u32	pf;		/* RO # of Received pause frame pkt */
+	u32	uo;		/* RO # of unknown op code pkt */
+	u32	aln;		/* RO # of alignment error count */
+	u32	flr;		/* RO # of frame length out of range count */
+	u32	cde;		/* RO # of code error pkt */
+	u32	fcr;		/* RO # of carrier sense error pkt */
+	u32	ovr;		/* RO # of oversize pkt*/
+	u32	jbr;		/* RO # of jabber count */
+	u32	mtue;		/* RO # of MTU error pkt*/
+	u32	pok;		/* RO # of Received good pkt */
+	u32	uc;		/* RO # of unicast pkt */
+	u32	ppp;		/* RO # of PPP pkt */
+	u32	rcrc;		/* RO (0x470),# of CRC match pkt */
+};
+
+/* TSV, Transmit Status Vector */
+struct bcmgenet_tx_counters {
+	struct bcmgenet_pkt_counters pkt_cnt;
+	u32	pkts;		/* RO (0x4a8) Transmited pkt */
+	u32	mca;		/* RO # of xmited multicast pkt */
+	u32	bca;		/* RO # of xmited broadcast pkt */
+	u32	pf;		/* RO # of xmited pause frame count */
+	u32	cf;		/* RO # of xmited control frame count */
+	u32	fcs;		/* RO # of xmited FCS error count */
+	u32	ovr;		/* RO # of xmited oversize pkt */
+	u32	drf;		/* RO # of xmited deferral pkt */
+	u32	edf;		/* RO # of xmited Excessive deferral pkt*/
+	u32	scl;		/* RO # of xmited single collision pkt */
+	u32	mcl;		/* RO # of xmited multiple collision pkt*/
+	u32	lcl;		/* RO # of xmited late collision pkt */
+	u32	ecl;		/* RO # of xmited excessive collision pkt*/
+	u32	frg;		/* RO # of xmited fragments pkt*/
+	u32	ncl;		/* RO # of xmited total collision count */
+	u32	jbr;		/* RO # of xmited jabber count*/
+	u32	bytes;		/* RO # of xmited byte count */
+	u32	pok;		/* RO # of xmited good pkt */
+	u32	uc;		/* RO (0x0x4f0)# of xmited unitcast pkt */
+};
+
+struct bcmgenet_mib_counters {
+	struct bcmgenet_rx_counters rx;
+	struct bcmgenet_tx_counters tx;
+	u32	rx_runt_cnt;
+	u32	rx_runt_fcs;
+	u32	rx_runt_fcs_align;
+	u32	rx_runt_bytes;
+	u32	rbuf_ovflow_cnt;
+	u32	rbuf_err_cnt;
+	u32	mdf_err_cnt;
+};
+
+#define UMAC_HD_BKP_CTRL		0x004
+#define	 HD_FC_EN			(1 << 0)
+#define  HD_FC_BKOFF_OK			(1 << 1)
+#define  IPG_CONFIG_RX_SHIFT		2
+#define  IPG_CONFIG_RX_MASK		0x1F
+
+#define UMAC_CMD			0x008
+#define  CMD_TX_EN			(1 << 0)
+#define  CMD_RX_EN			(1 << 1)
+#define  UMAC_SPEED_10			0
+#define  UMAC_SPEED_100			1
+#define  UMAC_SPEED_1000		2
+#define  UMAC_SPEED_2500		3
+#define  CMD_SPEED_SHIFT		2
+#define  CMD_SPEED_MASK			3
+#define  CMD_PROMISC			(1 << 4)
+#define  CMD_PAD_EN			(1 << 5)
+#define  CMD_CRC_FWD			(1 << 6)
+#define  CMD_PAUSE_FWD			(1 << 7)
+#define  CMD_RX_PAUSE_IGNORE		(1 << 8)
+#define  CMD_TX_ADDR_INS		(1 << 9)
+#define  CMD_HD_EN			(1 << 10)
+#define  CMD_SW_RESET			(1 << 13)
+#define  CMD_LCL_LOOP_EN		(1 << 15)
+#define  CMD_AUTO_CONFIG		(1 << 22)
+#define  CMD_CNTL_FRM_EN		(1 << 23)
+#define  CMD_NO_LEN_CHK			(1 << 24)
+#define  CMD_RMT_LOOP_EN		(1 << 25)
+#define  CMD_PRBL_EN			(1 << 27)
+#define  CMD_TX_PAUSE_IGNORE		(1 << 28)
+#define  CMD_TX_RX_EN			(1 << 29)
+#define  CMD_RUNT_FILTER_DIS		(1 << 30)
+
+#define UMAC_MAC0			0x00C
+#define UMAC_MAC1			0x010
+#define UMAC_MAX_FRAME_LEN		0x014
+
+#define UMAC_TX_FLUSH			0x334
+
+#define UMAC_MIB_START			0x400
+
+#define UMAC_MDIO_CMD			0x614
+#define  MDIO_START_BUSY		(1 << 29)
+#define  MDIO_READ_FAIL			(1 << 28)
+#define  MDIO_RD			(2 << 26)
+#define  MDIO_WR			(1 << 26)
+#define  MDIO_PMD_SHIFT			21
+#define  MDIO_PMD_MASK			0x1F
+#define  MDIO_REG_SHIFT			16
+#define  MDIO_REG_MASK			0x1F
+
+#define UMAC_RBUF_OVFL_CNT		0x61C
+
+#define UMAC_MPD_CTRL			0x620
+#define  MPD_EN				(1 << 0)
+#define  MPD_PW_EN			(1 << 27)
+#define  MPD_MSEQ_LEN_SHIFT		16
+#define  MPD_MSEQ_LEN_MASK		0xFF
+
+#define UMAC_MPD_PW_MS			0x624
+#define UMAC_MPD_PW_LS			0x628
+#define UMAC_RBUF_ERR_CNT		0x634
+#define UMAC_MDF_ERR_CNT		0x638
+#define UMAC_MDF_CTRL			0x650
+#define UMAC_MDF_ADDR			0x654
+#define UMAC_MIB_CTRL			0x580
+#define  MIB_RESET_RX			(1 << 0)
+#define  MIB_RESET_RUNT			(1 << 1)
+#define  MIB_RESET_TX			(1 << 2)
+
+#define RBUF_CTRL			0x00
+#define  RBUF_64B_EN			(1 << 0)
+#define  RBUF_ALIGN_2B			(1 << 1)
+#define  RBUF_BAD_DIS			(1 << 2)
+
+#define RBUF_STATUS			0x0C
+#define  RBUF_STATUS_WOL		(1 << 0)
+#define  RBUF_STATUS_MPD_INTR_ACTIVE	(1 << 1)
+#define  RBUF_STATUS_ACPI_INTR_ACTIVE	(1 << 2)
+
+#define RBUF_CHK_CTRL			0x14
+#define  RBUF_RXCHK_EN			(1 << 0)
+#define  RBUF_SKIP_FCS			(1 << 4)
+
+#define RBUF_TBUF_SIZE_CTRL		0xb4
+
+#define RBUF_HFB_CTRL_V1		0x38
+#define  RBUF_HFB_FILTER_EN_SHIFT	16
+#define  RBUF_HFB_FILTER_EN_MASK	0xffff0000
+#define  RBUF_HFB_EN			(1 << 0)
+#define  RBUF_HFB_256B			(1 << 1)
+#define  RBUF_ACPI_EN			(1 << 2)
+
+#define RBUF_HFB_LEN_V1			0x3C
+#define  RBUF_FLTR_LEN_MASK		0xFF
+#define  RBUF_FLTR_LEN_SHIFT		8
+
+#define TBUF_CTRL			0x00
+#define TBUF_BP_MC			0x0C
+
+#define TBUF_CTRL_V1			0x80
+#define TBUF_BP_MC_V1			0xA0
+
+#define HFB_CTRL			0x00
+#define HFB_FLT_ENABLE_V3PLUS		0x04
+#define HFB_FLT_LEN_V2			0x04
+#define HFB_FLT_LEN_V3PLUS		0x1C
+
+/* uniMac intrl2 registers */
+#define INTRL2_CPU_STAT			0x00
+#define INTRL2_CPU_SET			0x04
+#define INTRL2_CPU_CLEAR		0x08
+#define INTRL2_CPU_MASK_STATUS		0x0C
+#define INTRL2_CPU_MASK_SET		0x10
+#define INTRL2_CPU_MASK_CLEAR		0x14
+
+/* INTRL2 instance 0 definitions */
+#define UMAC_IRQ_SCB			(1 << 0)
+#define UMAC_IRQ_EPHY			(1 << 1)
+#define UMAC_IRQ_PHY_DET_R		(1 << 2)
+#define UMAC_IRQ_PHY_DET_F		(1 << 3)
+#define UMAC_IRQ_LINK_UP		(1 << 4)
+#define UMAC_IRQ_LINK_DOWN		(1 << 5)
+#define UMAC_IRQ_UMAC			(1 << 6)
+#define UMAC_IRQ_UMAC_TSV		(1 << 7)
+#define UMAC_IRQ_TBUF_UNDERRUN		(1 << 8)
+#define UMAC_IRQ_RBUF_OVERFLOW		(1 << 9)
+#define UMAC_IRQ_HFB_SM			(1 << 10)
+#define UMAC_IRQ_HFB_MM			(1 << 11)
+#define UMAC_IRQ_MPD_R			(1 << 12)
+#define UMAC_IRQ_RXDMA_MBDONE		(1 << 13)
+#define UMAC_IRQ_RXDMA_PDONE		(1 << 14)
+#define UMAC_IRQ_RXDMA_BDONE		(1 << 15)
+#define UMAC_IRQ_TXDMA_MBDONE		(1 << 16)
+#define UMAC_IRQ_TXDMA_PDONE		(1 << 17)
+#define UMAC_IRQ_TXDMA_BDONE		(1 << 18)
+/* Only valid for GENETv3+ */
+#define UMAC_IRQ_MDIO_DONE		(1 << 23)
+#define UMAC_IRQ_MDIO_ERROR		(1 << 24)
+
+/* Register block offsets */
+#define GENET_SYS_OFF			0x0000
+#define GENET_GR_BRIDGE_OFF		0x0040
+#define GENET_EXT_OFF			0x0080
+#define GENET_INTRL2_0_OFF		0x0200
+#define GENET_INTRL2_1_OFF		0x0240
+#define GENET_RBUF_OFF			0x0300
+#define GENET_UMAC_OFF			0x0800
+
+/* SYS block offsets and register definitions */
+#define SYS_REV_CTRL			0x00
+#define SYS_PORT_CTRL			0x04
+#define  PORT_MODE_INT_EPHY		0
+#define  PORT_MODE_INT_GPHY		1
+#define  PORT_MODE_EXT_EPHY		2
+#define  PORT_MODE_EXT_GPHY		3
+#define  PORT_MODE_EXT_RVMII_25		(4 | BIT(4))
+#define  PORT_MODE_EXT_RVMII_50		4
+#define  LED_ACT_SOURCE_MAC		(1 << 9)
+
+#define SYS_RBUF_FLUSH_CTRL		0x08
+#define SYS_TBUF_FLUSH_CTRL		0x0C
+#define RBUF_FLUSH_CTRL_V1		0x04
+
+/* Ext block register offsets and definitions */
+#define EXT_EXT_PWR_MGMT		0x00
+#define  EXT_PWR_DOWN_BIAS		(1 << 0)
+#define  EXT_PWR_DOWN_DLL		(1 << 1)
+#define  EXT_PWR_DOWN_PHY		(1 << 2)
+#define  EXT_PWR_DN_EN_LD		(1 << 3)
+#define  EXT_ENERGY_DET			(1 << 4)
+#define  EXT_IDDQ_FROM_PHY		(1 << 5)
+#define  EXT_PHY_RESET			(1 << 8)
+#define  EXT_ENERGY_DET_MASK		(1 << 12)
+
+#define EXT_RGMII_OOB_CTRL		0x0C
+#define  RGMII_MODE_EN			(1 << 0)
+#define  RGMII_LINK			(1 << 4)
+#define  OOB_DISABLE			(1 << 5)
+#define  ID_MODE_DIS			(1 << 16)
+
+#define EXT_GPHY_CTRL			0x1C
+#define  EXT_CFG_IDDQ_BIAS		(1 << 0)
+#define  EXT_CFG_PWR_DOWN		(1 << 1)
+#define  EXT_GPHY_RESET			(1 << 5)
+
+/* DMA rings size */
+#define DMA_RING_SIZE			(0x40)
+#define DMA_RINGS_SIZE			(DMA_RING_SIZE * (DESC_INDEX + 1))
+
+/* DMA registers common definitions */
+#define DMA_RW_POINTER_MASK		0x1FF
+#define DMA_P_INDEX_DISCARD_CNT_MASK	0xFFFF
+#define DMA_P_INDEX_DISCARD_CNT_SHIFT	16
+#define DMA_BUFFER_DONE_CNT_MASK	0xFFFF
+#define DMA_BUFFER_DONE_CNT_SHIFT	16
+#define DMA_P_INDEX_MASK		0xFFFF
+#define DMA_C_INDEX_MASK		0xFFFF
+
+/* DMA ring size register */
+#define DMA_RING_SIZE_MASK		0xFFFF
+#define DMA_RING_SIZE_SHIFT		16
+#define DMA_RING_BUFFER_SIZE_MASK	0xFFFF
+
+/* DMA interrupt threshold register */
+#define DMA_INTR_THRESHOLD_MASK		0x00FF
+
+/* DMA XON/XOFF register */
+#define DMA_XON_THREHOLD_MASK		0xFFFF
+#define DMA_XOFF_THRESHOLD_MASK		0xFFFF
+#define DMA_XOFF_THRESHOLD_SHIFT	16
+
+/* DMA flow period register */
+#define DMA_FLOW_PERIOD_MASK		0xFFFF
+#define DMA_MAX_PKT_SIZE_MASK		0xFFFF
+#define DMA_MAX_PKT_SIZE_SHIFT		16
+
+
+/* DMA control register */
+#define DMA_EN				(1 << 0)
+#define DMA_RING_BUF_EN_SHIFT		0x01
+#define DMA_RING_BUF_EN_MASK		0xFFFF
+#define DMA_TSB_SWAP_EN			(1 << 20)
+
+/* DMA status register */
+#define DMA_DISABLED			(1 << 0)
+#define DMA_DESC_RAM_INIT_BUSY		(1 << 1)
+
+/* DMA SCB burst size register */
+#define DMA_SCB_BURST_SIZE_MASK		0x1F
+
+/* DMA activity vector register */
+#define DMA_ACTIVITY_VECTOR_MASK	0x1FFFF
+
+/* DMA backpressure mask register */
+#define DMA_BACKPRESSURE_MASK		0x1FFFF
+#define DMA_PFC_ENABLE			(1 << 31)
+
+/* DMA backpressure status register */
+#define DMA_BACKPRESSURE_STATUS_MASK	0x1FFFF
+
+/* DMA override register */
+#define DMA_LITTLE_ENDIAN_MODE		(1 << 0)
+#define DMA_REGISTER_MODE		(1 << 1)
+
+/* DMA timeout register */
+#define DMA_TIMEOUT_MASK		0xFFFF
+
+/* TDMA rate limiting control register */
+#define DMA_RATE_LIMIT_EN_MASK		0xFFFF
+
+/* TDMA arbitration control register */
+#define DMA_ARBITER_MODE_MASK		0x03
+#define DMA_RING_BUF_PRIORITY_MASK	0x1F
+#define DMA_RING_BUF_PRIORITY_SHIFT	5
+#define DMA_RATE_ADJ_MASK		0xFF
+
+/* Tx/Rx Dma Descriptor common bits*/
+#define DMA_BUFLENGTH_MASK		0x0fff
+#define DMA_BUFLENGTH_SHIFT		16
+#define DMA_OWN				0x8000
+#define DMA_EOP				0x4000
+#define DMA_SOP				0x2000
+#define DMA_WRAP			0x1000
+/* Tx specific Dma descriptor bits */
+#define DMA_TX_UNDERRUN			0x0200
+#define DMA_TX_APPEND_CRC		0x0040
+#define DMA_TX_OW_CRC			0x0020
+#define DMA_TX_DO_CSUM			0x0010
+#define DMA_TX_QTAG_SHIFT		7
+
+/* Rx Specific Dma descriptor bits */
+#define DMA_RX_CHK_V3PLUS		0x8000
+#define DMA_RX_CHK_V12			0x1000
+#define DMA_RX_BRDCAST			0x0040
+#define DMA_RX_MULT			0x0020
+#define DMA_RX_LG			0x0010
+#define DMA_RX_NO			0x0008
+#define DMA_RX_RXER			0x0004
+#define DMA_RX_CRC_ERROR		0x0002
+#define DMA_RX_OV			0x0001
+#define DMA_RX_FI_MASK			0x001F
+#define DMA_RX_FI_SHIFT			0x0007
+#define DMA_DESC_ALLOC_MASK		0x00FF
+
+#define DMA_ARBITER_RR			0x00
+#define DMA_ARBITER_WRR			0x01
+#define DMA_ARBITER_SP			0x02
+
+struct enet_cb {
+	struct sk_buff      *skb;
+	void __iomem *bd_addr;
+	DEFINE_DMA_UNMAP_ADDR(dma_addr);
+	DEFINE_DMA_UNMAP_LEN(dma_len);
+};
+
+/* power management mode */
+enum bcmgenet_power_mode {
+	GENET_POWER_CABLE_SENSE = 0,
+	GENET_POWER_WOL_MAGIC,
+	GENET_POWER_WOL_ACPI,
+	GENET_POWER_PASSIVE,
+};
+
+struct bcmgenet_priv;
+
+/* We support both runtime GENET detection and compile-time
+ * to optimize code-paths for a given hardware
+ */
+enum bcmgenet_version {
+	GENET_V1 = 1,
+	GENET_V2,
+	GENET_V3,
+	GENET_V4
+};
+
+#define GENET_IS_V1(p)	((p)->version == GENET_V1)
+#define GENET_IS_V2(p)	((p)->version == GENET_V2)
+#define GENET_IS_V3(p)	((p)->version == GENET_V3)
+#define GENET_IS_V4(p)	((p)->version == GENET_V4)
+
+/* Hardware flags */
+#define GENET_HAS_40BITS	(1 << 0)
+#define GENET_HAS_EXT		(1 << 1)
+#define GENET_HAS_MDIO_INTR	(1 << 2)
+
+/* BCMGENET hardware parameters, keep this structure nicely aligned
+ * since it is going to be used in hot paths
+ */
+struct bcmgenet_hw_params {
+	u8		tx_queues;
+	u8		rx_queues;
+	u8		bds_cnt;
+	u8		bp_in_en_shift;
+	u32		bp_in_mask;
+	u8		hfb_filter_cnt;
+	u8		qtag_mask;
+	u16		tbuf_offset;
+	u32		hfb_offset;
+	u32		hfb_reg_offset;
+	u32		rdma_offset;
+	u32		tdma_offset;
+	u32		words_per_bd;
+	u32		flags;
+};
+
+struct bcmgenet_tx_ring {
+	spinlock_t	lock;		/* ring lock */
+	unsigned int	index;		/* ring index */
+	unsigned int	queue;		/* queue index */
+	struct enet_cb	*cbs;		/* tx ring buffer control block*/
+	unsigned int	size;		/* size of each tx ring */
+	unsigned int	c_index;	/* last consumer index of each ring*/
+	unsigned int	free_bds;	/* # of free bds for each ring */
+	unsigned int	write_ptr;	/* Tx ring write pointer SW copy */
+	unsigned int	prod_index;	/* Tx ring producer index SW copy */
+	unsigned int	cb_ptr;		/* Tx ring initial CB ptr */
+	unsigned int	end_ptr;	/* Tx ring end CB ptr */
+	void (*int_enable)(struct bcmgenet_priv *priv,
+				struct bcmgenet_tx_ring *);
+	void (*int_disable)(struct bcmgenet_priv *priv,
+				struct bcmgenet_tx_ring *);
+};
+
+/* device context */
+struct bcmgenet_priv {
+	void __iomem *base;
+	enum bcmgenet_version version;
+	struct net_device *dev;
+	spinlock_t lock;
+	spinlock_t bh_lock;
+	u32 int0_mask;
+	u32 int1_mask;
+
+	/* NAPI for descriptor based rx */
+	struct napi_struct napi ____cacheline_aligned;
+
+	/* transmit variables */
+	void __iomem *tx_bds;
+	struct enet_cb *tx_cbs;
+	unsigned int num_tx_bds;
+
+	struct bcmgenet_tx_ring tx_rings[DESC_INDEX + 1];
+
+	/* receive variables */
+	void __iomem *rx_bds;
+	void __iomem *rx_bd_assign_ptr;
+	int rx_bd_assign_index;
+	struct enet_cb *rx_cbs;
+	unsigned int num_rx_bds;
+	unsigned int rx_buf_len;
+	unsigned int rx_read_ptr;
+	unsigned int rx_c_index;
+
+	/* other misc variables */
+	struct bcmgenet_hw_params *hw_params;
+	wait_queue_head_t wq;
+	struct phy_device *phydev;
+	struct device_node *phy_dn;
+	struct mii_bus *mii_bus;
+	int old_duplex;
+	int old_link;
+	int old_pause;
+	phy_interface_t phy_interface;
+	u32 phy_supported;
+	int irq0;
+	int irq1;
+	int phy_addr;
+	int phy_speed;
+	int ext_phy;
+	unsigned int irq0_stat;
+	unsigned int irq1_stat;
+	unsigned int desc_64b_en;
+	unsigned int desc_rxchk_en;
+	unsigned int dma_rx_chk_bit;
+	unsigned int crc_fwd_en;
+	u32 msg_enable;
+
+	struct work_struct bcmgenet_irq_work;
+	struct clk *clk;
+	struct platform_device *pdev;
+
+	/* WOL */
+	unsigned long wol_enabled;
+	struct clk *clk_wol;
+	u32 wolopts;
+
+	struct mutex mib_mutex;
+	struct bcmgenet_mib_counters mib;
+};
+
+#define GENET_IO_MACRO(name, offset)					\
+static inline u32 bcmgenet_##name##_readl(struct bcmgenet_priv *priv,	\
+					u32 off)			\
+{									\
+	return __raw_readl(priv->base + offset + off);			\
+}									\
+static inline void bcmgenet_##name##_writel(struct bcmgenet_priv *priv,	\
+					u32 val, u32 off)		\
+{									\
+	__raw_writel(val, priv->base + offset + off);			\
+}
+
+GENET_IO_MACRO(ext, GENET_EXT_OFF);
+GENET_IO_MACRO(umac, GENET_UMAC_OFF);
+GENET_IO_MACRO(sys, GENET_SYS_OFF);
+
+/* interrupt l2 registers accessors */
+GENET_IO_MACRO(intrl2_0, GENET_INTRL2_0_OFF);
+GENET_IO_MACRO(intrl2_1, GENET_INTRL2_1_OFF);
+
+/* HFB register accessors  */
+GENET_IO_MACRO(hfb, priv->hw_params->hfb_offset);
+
+/* GENET v2+ HFB control and filter len helpers */
+GENET_IO_MACRO(hfb_reg, priv->hw_params->hfb_reg_offset);
+
+/* RBUF register accessors */
+GENET_IO_MACRO(rbuf, GENET_RBUF_OFF);
+
+/* MDIO routines */
+int bcmgenet_mii_init(struct net_device *dev);
+int bcmgenet_mii_config(struct net_device *dev);
+void bcmgenet_mii_exit(struct net_device *dev);
+void bcmgenet_mii_reset(struct net_device *dev);
+
+#endif /* __BCMGENET_H__ */
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 33+ messages in thread
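
The GENET_IO_MACRO() block at the end of the header above is what every
register access in the rest of the series goes through. As a rough
illustration (not part of the patch itself), the umac instance expands to
the pair of accessors below; the caller function and its name are made up
here purely as a usage sketch:

/* Expansion of GENET_IO_MACRO(umac, GENET_UMAC_OFF), for illustration only */
static inline u32 bcmgenet_umac_readl(struct bcmgenet_priv *priv, u32 off)
{
	return __raw_readl(priv->base + GENET_UMAC_OFF + off);
}

static inline void bcmgenet_umac_writel(struct bcmgenet_priv *priv,
					u32 val, u32 off)
{
	__raw_writel(val, priv->base + GENET_UMAC_OFF + off);
}

/* Hypothetical caller: set the UniMAC RX/TX enable bits in UMAC_CMD */
static void example_umac_enable(struct bcmgenet_priv *priv)
{
	u32 reg = bcmgenet_umac_readl(priv, UMAC_CMD);

	reg |= CMD_TX_EN | CMD_RX_EN;
	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
}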

* [PATCH net-next v2 06/10] net: bcmgenet: add main driver file
  2014-02-13  5:29 ` Florian Fainelli
@ 2014-02-13  5:29   ` Florian Fainelli
  -1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

This patch adds the BCMGENET main driver file which supports the
following:

- GENET hardware from V1 to V4
- support for reading the UniMAC MIB counter statistics
- support for the 5 transmit queues
- support for RX/TX checksum offload and SG

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- use module_platform_driver boilerplate macro
- renamed bcmgenet_plat_drv to bcmgenet_driver
- renamed bcmgenet_drv_{probe,remove} to bcmgenet_{probe,remove}
- removed priv->phy_type usage and used priv->phy_interface, which
  contains the exact same value
- removed debug module parameters, unused
- added MODULE_{AUTHOR,ALIAS,DESCRIPTION} macros
- remove hardcoded queue index check in bcmgenet_xmit

 drivers/net/ethernet/broadcom/genet/bcmgenet.c | 2664 ++++++++++++++++++++++++
 1 file changed, 2664 insertions(+)
 create mode 100644 drivers/net/ethernet/broadcom/genet/bcmgenet.c

diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
new file mode 100644
index 0000000..ab71e81
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -0,0 +1,2664 @@
+/*
+ * Broadcom GENET (Gigabit Ethernet) controller driver
+ *
+ * Copyright (c) 2014 Broadcom Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+
+#define pr_fmt(fmt)				"bcmgenet: " fmt
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/types.h>
+#include <linux/fcntl.h>
+#include <linux/interrupt.h>
+#include <linux/string.h>
+#include <linux/if_ether.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/pm.h>
+#include <linux/clk.h>
+#include <linux/version.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_net.h>
+#include <linux/of_platform.h>
+#include <net/arp.h>
+
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+#include <linux/netdevice.h>
+#include <linux/inetdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/phy.h>
+
+#include <asm/unaligned.h>
+
+#include "bcmgenet.h"
+
+/* Maximum number of hardware queues, downsized if needed */
+#define GENET_MAX_MQ_CNT	4
+
+/* Default highest priority queue for multi queue support */
+#define GENET_Q0_PRIORITY	0
+
+#define GENET_DEFAULT_BD_CNT	\
+	(TOTAL_DESC - priv->hw_params->tx_queues * priv->hw_params->bds_cnt)
+
+#define RX_BUF_LENGTH		2048
+#define SKB_ALIGNMENT		32
+
+/* Tx/Rx DMA register offset, skip 256 descriptors */
+#define WORDS_PER_BD(p)		(p->hw_params->words_per_bd)
+#define DMA_DESC_SIZE		(WORDS_PER_BD(priv) * sizeof(u32))
+
+#define GENET_TDMA_REG_OFF	(priv->hw_params->tdma_offset + \
+				TOTAL_DESC * DMA_DESC_SIZE)
+
+#define GENET_RDMA_REG_OFF	(priv->hw_params->rdma_offset + \
+				TOTAL_DESC * DMA_DESC_SIZE)
+
+static inline void dmadesc_set_length_status(struct bcmgenet_priv *priv,
+						void __iomem *d, u32 value)
+{
+	__raw_writel(value, d + DMA_DESC_LENGTH_STATUS);
+}
+
+static inline u32 dmadesc_get_length_status(struct bcmgenet_priv *priv,
+						void __iomem *d)
+{
+	return __raw_readl(d + DMA_DESC_LENGTH_STATUS);
+}
+
+static inline void dmadesc_set_addr(struct bcmgenet_priv *priv,
+				    void __iomem *d,
+				    dma_addr_t addr)
+{
+	__raw_writel(lower_32_bits(addr), d + DMA_DESC_ADDRESS_LO);
+
+	/* Register writes to the GISB bus can take a couple hundred
+	 * nanoseconds and are done for each packet, so save these expensive
+	 * writes unless the platform is explicitly configured for 64-bit/LPAE.
+	 */
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+	if (priv->hw_params->flags & GENET_HAS_40BITS)
+		__raw_writel(upper_32_bits(addr), d + DMA_DESC_ADDRESS_HI);
+#endif
+}
+
+/* Combined address + length/status setter */
+static inline void dmadesc_set(struct bcmgenet_priv *priv,
+				void __iomem *d, dma_addr_t addr, u32 val)
+{
+	dmadesc_set_length_status(priv, d, val);
+	dmadesc_set_addr(priv, d, addr);
+}
+
+static inline dma_addr_t dmadesc_get_addr(struct bcmgenet_priv *priv,
+					  void __iomem *d)
+{
+	dma_addr_t addr;
+
+	addr = __raw_readl(d + DMA_DESC_ADDRESS_LO);
+
+	/* Register writes to the GISB bus can take a couple hundred
+	 * nanoseconds and are done for each packet, so save these expensive
+	 * writes unless the platform is explicitly configured for 64-bit/LPAE.
+	 */
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+	if (priv->hw_params->flags & GENET_HAS_40BITS)
+		addr |= (u64)__raw_readl(d + DMA_DESC_ADDRESS_HI) << 32;
+#endif
+	return addr;
+}
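
A minimal sketch of what the two helpers above do with a 40-bit bus address
on hardware that sets GENET_HAS_40BITS and is built with
CONFIG_PHYS_ADDR_T_64BIT; the helper name and the example address are made
up for illustration:

/* Split a 40-bit DMA address across the two descriptor words, mirroring
 * dmadesc_set_addr()/dmadesc_get_addr() above.
 */
static void example_split_dma_addr(dma_addr_t addr, u32 *lo, u32 *hi)
{
	*lo = lower_32_bits(addr);	/* goes into DMA_DESC_ADDRESS_LO */
	*hi = upper_32_bits(addr);	/* goes into DMA_DESC_ADDRESS_HI (GENETv4+) */
}

/* e.g. addr = 0x1234567890 -> lo = 0x34567890, hi = 0x12 */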
+
+#define GENET_VER_FMT	"%1d.%1d EPHY: 0x%04x"
+
+#define GENET_MSG_DEFAULT	(NETIF_MSG_DRV | NETIF_MSG_PROBE | \
+				NETIF_MSG_LINK)
+
+static inline u32 bcmgenet_rbuf_ctrl_get(struct bcmgenet_priv *priv)
+{
+	if (GENET_IS_V1(priv))
+		return bcmgenet_rbuf_readl(priv, RBUF_FLUSH_CTRL_V1);
+	else
+		return bcmgenet_sys_readl(priv, SYS_RBUF_FLUSH_CTRL);
+}
+
+static inline void bcmgenet_rbuf_ctrl_set(struct bcmgenet_priv *priv, u32 val)
+{
+	if (GENET_IS_V1(priv))
+		bcmgenet_rbuf_writel(priv, val, RBUF_FLUSH_CTRL_V1);
+	else
+		bcmgenet_sys_writel(priv, val, SYS_RBUF_FLUSH_CTRL);
+}
+
+/* These macros are defined to deal with the register map change
+ * between GENET1.1 and GENET2. Only those currently being used
+ * by the driver are defined.
+ */
+static inline u32 bcmgenet_tbuf_ctrl_get(struct bcmgenet_priv *priv)
+{
+	if (GENET_IS_V1(priv))
+		return bcmgenet_rbuf_readl(priv, TBUF_CTRL_V1);
+	else
+		return __raw_readl(priv->base +
+				priv->hw_params->tbuf_offset + TBUF_CTRL);
+}
+
+static inline void bcmgenet_tbuf_ctrl_set(struct bcmgenet_priv *priv, u32 val)
+{
+	if (GENET_IS_V1(priv))
+		bcmgenet_rbuf_writel(priv, val, TBUF_CTRL_V1);
+	else
+		__raw_writel(val, priv->base +
+				priv->hw_params->tbuf_offset + TBUF_CTRL);
+}
+
+static inline u32 bcmgenet_bp_mc_get(struct bcmgenet_priv *priv)
+{
+	if (GENET_IS_V1(priv))
+		return bcmgenet_rbuf_readl(priv, TBUF_BP_MC_V1);
+	else
+		return __raw_readl(priv->base +
+				priv->hw_params->tbuf_offset + TBUF_BP_MC);
+}
+
+static inline void bcmgenet_bp_mc_set(struct bcmgenet_priv *priv, u32 val)
+{
+	if (GENET_IS_V1(priv))
+		bcmgenet_rbuf_writel(priv, val, TBUF_BP_MC_V1);
+	else
+		__raw_writel(val, priv->base +
+				priv->hw_params->tbuf_offset + TBUF_BP_MC);
+}
+
+/* RX/TX DMA register accessors */
+enum dma_reg {
+	DMA_RING_CFG = 0,
+	DMA_CTRL,
+	DMA_STATUS,
+	DMA_SCB_BURST_SIZE,
+	DMA_ARB_CTRL,
+	DMA_PRIORITY,
+	DMA_RING_PRIORITY,
+};
+
+static const u8 bcmgenet_dma_regs_v3plus[] = {
+	[DMA_RING_CFG]		= 0x00,
+	[DMA_CTRL]		= 0x04,
+	[DMA_STATUS]		= 0x08,
+	[DMA_SCB_BURST_SIZE]	= 0x0C,
+	[DMA_ARB_CTRL]		= 0x2C,
+	[DMA_PRIORITY]		= 0x30,
+	[DMA_RING_PRIORITY]	= 0x38,
+};
+
+static const u8 bcmgenet_dma_regs_v2[] = {
+	[DMA_RING_CFG]		= 0x00,
+	[DMA_CTRL]		= 0x04,
+	[DMA_STATUS]		= 0x08,
+	[DMA_SCB_BURST_SIZE]	= 0x0C,
+	[DMA_ARB_CTRL]		= 0x30,
+	[DMA_PRIORITY]		= 0x34,
+	[DMA_RING_PRIORITY]	= 0x3C,
+};
+
+static const u8 bcmgenet_dma_regs_v1[] = {
+	[DMA_CTRL]		= 0x00,
+	[DMA_STATUS]		= 0x04,
+	[DMA_SCB_BURST_SIZE]	= 0x0C,
+	[DMA_ARB_CTRL]		= 0x30,
+	[DMA_PRIORITY]		= 0x34,
+	[DMA_RING_PRIORITY]	= 0x3C,
+};
+
+/* Set at runtime once bcmgenet version is known */
+static const u8 *bcmgenet_dma_regs;
+
+static inline struct bcmgenet_priv *dev_to_priv(struct device *dev)
+{
+	return netdev_priv(dev_get_drvdata(dev));
+}
+
+static inline u32 bcmgenet_tdma_readl(struct bcmgenet_priv *priv,
+					enum dma_reg r)
+{
+	return __raw_readl(priv->base + GENET_TDMA_REG_OFF +
+			DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
+}
+
+static inline void bcmgenet_tdma_writel(struct bcmgenet_priv *priv,
+					u32 val, enum dma_reg r)
+{
+	__raw_writel(val, priv->base + GENET_TDMA_REG_OFF +
+			DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
+}
+
+static inline u32 bcmgenet_rdma_readl(struct bcmgenet_priv *priv,
+					enum dma_reg r)
+{
+	return __raw_readl(priv->base + GENET_RDMA_REG_OFF +
+			DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
+}
+
+static inline void bcmgenet_rdma_writel(struct bcmgenet_priv *priv,
+					u32 val, enum dma_reg r)
+{
+	__raw_writel(val, priv->base + GENET_RDMA_REG_OFF +
+			DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
+}
+
+/* RDMA/TDMA ring registers and accessors
+ * we merge the common fields and just prefix with T/D the registers
+ * that have a different meaning depending on the direction
+ */
+enum dma_ring_reg {
+	TDMA_READ_PTR = 0,
+	RDMA_WRITE_PTR = TDMA_READ_PTR,
+	TDMA_READ_PTR_HI,
+	RDMA_WRITE_PTR_HI = TDMA_READ_PTR_HI,
+	TDMA_CONS_INDEX,
+	RDMA_PROD_INDEX = TDMA_CONS_INDEX,
+	TDMA_PROD_INDEX,
+	RDMA_CONS_INDEX = TDMA_PROD_INDEX,
+	DMA_RING_BUF_SIZE,
+	DMA_START_ADDR,
+	DMA_START_ADDR_HI,
+	DMA_END_ADDR,
+	DMA_END_ADDR_HI,
+	DMA_MBUF_DONE_THRESH,
+	TDMA_FLOW_PERIOD,
+	RDMA_XON_XOFF_THRESH = TDMA_FLOW_PERIOD,
+	TDMA_WRITE_PTR,
+	RDMA_READ_PTR = TDMA_WRITE_PTR,
+	TDMA_WRITE_PTR_HI,
+	RDMA_READ_PTR_HI = TDMA_WRITE_PTR_HI
+};
+
+/* GENET v4 supports 40-bit pointer addressing.
+ * For obvious reasons the LO and HI word parts
+ * are contiguous, but this offsets the other
+ * registers.
+ */
+static const u8 genet_dma_ring_regs_v4[] = {
+	[TDMA_READ_PTR]			= 0x00,
+	[TDMA_READ_PTR_HI]		= 0x04,
+	[TDMA_CONS_INDEX]		= 0x08,
+	[TDMA_PROD_INDEX]		= 0x0C,
+	[DMA_RING_BUF_SIZE]		= 0x10,
+	[DMA_START_ADDR]		= 0x14,
+	[DMA_START_ADDR_HI]		= 0x18,
+	[DMA_END_ADDR]			= 0x1C,
+	[DMA_END_ADDR_HI]		= 0x20,
+	[DMA_MBUF_DONE_THRESH]		= 0x24,
+	[TDMA_FLOW_PERIOD]		= 0x28,
+	[TDMA_WRITE_PTR]		= 0x2C,
+	[TDMA_WRITE_PTR_HI]		= 0x30,
+};
+
+static const u8 genet_dma_ring_regs_v123[] = {
+	[TDMA_READ_PTR]			= 0x00,
+	[TDMA_CONS_INDEX]		= 0x04,
+	[TDMA_PROD_INDEX]		= 0x08,
+	[DMA_RING_BUF_SIZE]		= 0x0C,
+	[DMA_START_ADDR]		= 0x10,
+	[DMA_END_ADDR]			= 0x14,
+	[DMA_MBUF_DONE_THRESH]		= 0x18,
+	[TDMA_FLOW_PERIOD]		= 0x1C,
+	[TDMA_WRITE_PTR]		= 0x20,
+};
+
+/* Set at runtime once GENET version is known */
+static const u8 *genet_dma_ring_regs;
+
+static inline u32 bcmgenet_tdma_ring_readl(struct bcmgenet_priv *priv,
+						unsigned int ring,
+						enum dma_ring_reg r)
+{
+	return __raw_readl(priv->base + GENET_TDMA_REG_OFF +
+			(DMA_RING_SIZE * ring) +
+			genet_dma_ring_regs[r]);
+}
+
+static inline void bcmgenet_tdma_ring_writel(struct bcmgenet_priv *priv,
+						unsigned int ring,
+						u32 val,
+						enum dma_ring_reg r)
+{
+	__raw_writel(val, priv->base + GENET_TDMA_REG_OFF +
+			(DMA_RING_SIZE * ring) +
+			genet_dma_ring_regs[r]);
+}
+
+static inline u32 bcmgenet_rdma_ring_readl(struct bcmgenet_priv *priv,
+						unsigned int ring,
+						enum dma_ring_reg r)
+{
+	return __raw_readl(priv->base + GENET_RDMA_REG_OFF +
+			(DMA_RING_SIZE * ring) +
+			genet_dma_ring_regs[r]);
+}
+
+static inline void bcmgenet_rdma_ring_writel(struct bcmgenet_priv *priv,
+						unsigned int ring,
+						u32 val,
+						enum dma_ring_reg r)
+{
+	__raw_writel(val, priv->base + GENET_RDMA_REG_OFF +
+			(DMA_RING_SIZE * ring) +
+			genet_dma_ring_regs[r]);
+}
+
+static int bcmgenet_get_settings(struct net_device *dev,
+		struct ethtool_cmd *cmd)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	if (!netif_running(dev))
+		return -EINVAL;
+
+	if (!priv->phydev)
+		return -ENODEV;
+
+	return phy_ethtool_gset(priv->phydev, cmd);
+}
+
+static int bcmgenet_set_settings(struct net_device *dev,
+		struct ethtool_cmd *cmd)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	if (!netif_running(dev))
+		return -EINVAL;
+
+	if (!priv->phydev)
+		return -ENODEV;
+
+	return phy_ethtool_sset(priv->phydev, cmd);
+}
+
+static int bcmgenet_set_rx_csum(struct net_device *dev,
+				netdev_features_t wanted)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	u32 rbuf_chk_ctrl;
+	int rx_csum_en;
+
+	rx_csum_en = !!(wanted & NETIF_F_RXCSUM);
+
+	spin_lock_bh(&priv->bh_lock);
+	rbuf_chk_ctrl = bcmgenet_rbuf_readl(priv, RBUF_CHK_CTRL);
+
+	/* enable or disable RX checksum offload */
+	if (!rx_csum_en)
+		rbuf_chk_ctrl &= ~RBUF_RXCHK_EN;
+	else
+		rbuf_chk_ctrl |= RBUF_RXCHK_EN;
+	priv->desc_rxchk_en = rx_csum_en;
+	bcmgenet_rbuf_writel(priv, rbuf_chk_ctrl, RBUF_CHK_CTRL);
+
+	spin_unlock_bh(&priv->bh_lock);
+
+	return 0;
+}
+static int bcmgenet_set_tx_csum(struct net_device *dev,
+				netdev_features_t wanted)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int desc_64b_en;
+	u32 tbuf_ctrl, rbuf_ctrl;
+
+	spin_lock_bh(&priv->bh_lock);
+	tbuf_ctrl = bcmgenet_tbuf_ctrl_get(priv);
+	rbuf_ctrl = bcmgenet_rbuf_readl(priv, RBUF_CTRL);
+
+	desc_64b_en = !!(wanted & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
+
+	/* enable 64bytes descriptor in both directions (RBUF and TBUF) */
+	if (!desc_64b_en) {
+		tbuf_ctrl &= ~RBUF_64B_EN;
+		rbuf_ctrl &= ~RBUF_64B_EN;
+	} else {
+		tbuf_ctrl |= RBUF_64B_EN;
+		rbuf_ctrl |= RBUF_64B_EN;
+	}
+	priv->desc_64b_en = desc_64b_en;
+
+	bcmgenet_tbuf_ctrl_set(priv, tbuf_ctrl);
+	bcmgenet_rbuf_writel(priv, rbuf_ctrl, RBUF_CTRL);
+	spin_unlock_bh(&priv->bh_lock);
+	return 0;
+}
+
+static int bcmgenet_set_features(struct net_device *dev,
+		netdev_features_t features)
+{
+	netdev_features_t changed = features ^ dev->features;
+	netdev_features_t wanted = dev->wanted_features;
+	int ret = 0;
+
+	if (changed & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM))
+		ret = bcmgenet_set_tx_csum(dev, wanted);
+	if (changed & (NETIF_F_RXCSUM))
+		ret = bcmgenet_set_rx_csum(dev, wanted);
+
+	return ret;
+}
+
+static u32 bcmgenet_get_msglevel(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	return priv->msg_enable;
+}
+
+static void bcmgenet_set_msglevel(struct net_device *dev, u32 level)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	priv->msg_enable = level;
+}
+
+/* standard ethtool support functions. */
+enum bcmgenet_stat_type {
+	BCMGENET_STAT_NETDEV = -1,
+	BCMGENET_STAT_MIB_RX,
+	BCMGENET_STAT_MIB_TX,
+	BCMGENET_STAT_RUNT,
+	BCMGENET_STAT_MISC,
+};
+
+struct bcmgenet_stats {
+	char stat_string[ETH_GSTRING_LEN];
+	int stat_sizeof;
+	int stat_offset;
+	enum bcmgenet_stat_type type;
+	/* reg offset from UMAC base for misc counters */
+	u16 reg_offset;
+};
+
+#define STAT_NETDEV(m) { \
+	.stat_string = __stringify(m), \
+	.stat_sizeof = sizeof(((struct net_device_stats *)0)->m), \
+	.stat_offset = offsetof(struct net_device_stats, m), \
+	.type = BCMGENET_STAT_NETDEV, \
+}
+
+#define STAT_GENET_MIB(str, m, _type) { \
+	.stat_string = str, \
+	.stat_sizeof = sizeof(((struct bcmgenet_priv *)0)->m), \
+	.stat_offset = offsetof(struct bcmgenet_priv, m), \
+	.type = _type, \
+}
+
+#define STAT_GENET_MIB_RX(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_MIB_RX)
+#define STAT_GENET_MIB_TX(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_MIB_TX)
+#define STAT_GENET_RUNT(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_RUNT)
+
+#define STAT_GENET_MISC(str, m, offset) { \
+	.stat_string = str, \
+	.stat_sizeof = sizeof(((struct bcmgenet_priv *)0)->m), \
+	.stat_offset = offsetof(struct bcmgenet_priv, m), \
+	.type = BCMGENET_STAT_MISC, \
+	.reg_offset = offset, \
+}
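
To make the macros above concrete: the STAT_GENET_MIB_RX("rx_pkts", mib.rx.pkt)
entry used in the table below expands, roughly, to the initializer shown here
(illustration only, not part of the patch):

{
	.stat_string = "rx_pkts",
	.stat_sizeof = sizeof(((struct bcmgenet_priv *)0)->mib.rx.pkt),
	.stat_offset = offsetof(struct bcmgenet_priv, mib.rx.pkt),
	.type        = BCMGENET_STAT_MIB_RX,
},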
+
+
+/* There is a 0xC gap between the end of RX and beginning of TX stats and then
+ * between the end of TX stats and the beginning of the RX RUNT
+ */
+#define BCMGENET_STAT_OFFSET	0xc
+
+/* Hardware counters must be kept in sync because the order/offset
+ * is important here (order in structure declaration = order in hardware)
+ */
+static const struct bcmgenet_stats bcmgenet_gstrings_stats[] = {
+	/* general stats */
+	STAT_NETDEV(rx_packets),
+	STAT_NETDEV(tx_packets),
+	STAT_NETDEV(rx_bytes),
+	STAT_NETDEV(tx_bytes),
+	STAT_NETDEV(rx_errors),
+	STAT_NETDEV(tx_errors),
+	STAT_NETDEV(rx_dropped),
+	STAT_NETDEV(tx_dropped),
+	STAT_NETDEV(multicast),
+	/* UniMAC RSV counters */
+	STAT_GENET_MIB_RX("rx_64_octets", mib.rx.pkt_cnt.cnt_64),
+	STAT_GENET_MIB_RX("rx_65_127_oct", mib.rx.pkt_cnt.cnt_127),
+	STAT_GENET_MIB_RX("rx_128_255_oct", mib.rx.pkt_cnt.cnt_255),
+	STAT_GENET_MIB_RX("rx_256_511_oct", mib.rx.pkt_cnt.cnt_511),
+	STAT_GENET_MIB_RX("rx_512_1023_oct", mib.rx.pkt_cnt.cnt_1023),
+	STAT_GENET_MIB_RX("rx_1024_1518_oct", mib.rx.pkt_cnt.cnt_1518),
+	STAT_GENET_MIB_RX("rx_vlan_1519_1522_oct", mib.rx.pkt_cnt.cnt_mgv),
+	STAT_GENET_MIB_RX("rx_1522_2047_oct", mib.rx.pkt_cnt.cnt_2047),
+	STAT_GENET_MIB_RX("rx_2048_4095_oct", mib.rx.pkt_cnt.cnt_4095),
+	STAT_GENET_MIB_RX("rx_4096_9216_oct", mib.rx.pkt_cnt.cnt_9216),
+	STAT_GENET_MIB_RX("rx_pkts", mib.rx.pkt),
+	STAT_GENET_MIB_RX("rx_bytes", mib.rx.bytes),
+	STAT_GENET_MIB_RX("rx_multicast", mib.rx.mca),
+	STAT_GENET_MIB_RX("rx_broadcast", mib.rx.bca),
+	STAT_GENET_MIB_RX("rx_fcs", mib.rx.fcs),
+	STAT_GENET_MIB_RX("rx_control", mib.rx.cf),
+	STAT_GENET_MIB_RX("rx_pause", mib.rx.pf),
+	STAT_GENET_MIB_RX("rx_unknown", mib.rx.uo),
+	STAT_GENET_MIB_RX("rx_align", mib.rx.aln),
+	STAT_GENET_MIB_RX("rx_outrange", mib.rx.flr),
+	STAT_GENET_MIB_RX("rx_code", mib.rx.cde),
+	STAT_GENET_MIB_RX("rx_carrier", mib.rx.fcr),
+	STAT_GENET_MIB_RX("rx_oversize", mib.rx.ovr),
+	STAT_GENET_MIB_RX("rx_jabber", mib.rx.jbr),
+	STAT_GENET_MIB_RX("rx_mtu_err", mib.rx.mtue),
+	STAT_GENET_MIB_RX("rx_good_pkts", mib.rx.pok),
+	STAT_GENET_MIB_RX("rx_unicast", mib.rx.uc),
+	STAT_GENET_MIB_RX("rx_ppp", mib.rx.ppp),
+	STAT_GENET_MIB_RX("rx_crc", mib.rx.rcrc),
+	/* UniMAC TSV counters */
+	STAT_GENET_MIB_TX("tx_64_octets", mib.tx.pkt_cnt.cnt_64),
+	STAT_GENET_MIB_TX("tx_65_127_oct", mib.tx.pkt_cnt.cnt_127),
+	STAT_GENET_MIB_TX("tx_128_255_oct", mib.tx.pkt_cnt.cnt_255),
+	STAT_GENET_MIB_TX("tx_256_511_oct", mib.tx.pkt_cnt.cnt_511),
+	STAT_GENET_MIB_TX("tx_512_1023_oct", mib.tx.pkt_cnt.cnt_1023),
+	STAT_GENET_MIB_TX("tx_1024_1518_oct", mib.tx.pkt_cnt.cnt_1518),
+	STAT_GENET_MIB_TX("tx_vlan_1519_1522_oct", mib.tx.pkt_cnt.cnt_mgv),
+	STAT_GENET_MIB_TX("tx_1522_2047_oct", mib.tx.pkt_cnt.cnt_2047),
+	STAT_GENET_MIB_TX("tx_2048_4095_oct", mib.tx.pkt_cnt.cnt_4095),
+	STAT_GENET_MIB_TX("tx_4096_9216_oct", mib.tx.pkt_cnt.cnt_9216),
+	STAT_GENET_MIB_TX("tx_pkts", mib.tx.pkts),
+	STAT_GENET_MIB_TX("tx_multicast", mib.tx.mca),
+	STAT_GENET_MIB_TX("tx_broadcast", mib.tx.bca),
+	STAT_GENET_MIB_TX("tx_pause", mib.tx.pf),
+	STAT_GENET_MIB_TX("tx_control", mib.tx.cf),
+	STAT_GENET_MIB_TX("tx_fcs_err", mib.tx.fcs),
+	STAT_GENET_MIB_TX("tx_oversize", mib.tx.ovr),
+	STAT_GENET_MIB_TX("tx_defer", mib.tx.drf),
+	STAT_GENET_MIB_TX("tx_excess_defer", mib.tx.edf),
+	STAT_GENET_MIB_TX("tx_single_col", mib.tx.scl),
+	STAT_GENET_MIB_TX("tx_multi_col", mib.tx.mcl),
+	STAT_GENET_MIB_TX("tx_late_col", mib.tx.lcl),
+	STAT_GENET_MIB_TX("tx_excess_col", mib.tx.ecl),
+	STAT_GENET_MIB_TX("tx_frags", mib.tx.frg),
+	STAT_GENET_MIB_TX("tx_total_col", mib.tx.ncl),
+	STAT_GENET_MIB_TX("tx_jabber", mib.tx.jbr),
+	STAT_GENET_MIB_TX("tx_bytes", mib.tx.bytes),
+	STAT_GENET_MIB_TX("tx_good_pkts", mib.tx.pok),
+	STAT_GENET_MIB_TX("tx_unicast", mib.tx.uc),
+	/* UniMAC RUNT counters */
+	STAT_GENET_RUNT("rx_runt_pkts", mib.rx_runt_cnt),
+	STAT_GENET_RUNT("rx_runt_valid_fcs", mib.rx_runt_fcs),
+	STAT_GENET_RUNT("rx_runt_inval_fcs_align", mib.rx_runt_fcs_align),
+	STAT_GENET_RUNT("rx_runt_bytes", mib.rx_runt_bytes),
+	/* Misc UniMAC counters */
+	STAT_GENET_MISC("rbuf_ovflow_cnt", mib.rbuf_ovflow_cnt,
+			UMAC_RBUF_OVFL_CNT),
+	STAT_GENET_MISC("rbuf_err_cnt", mib.rbuf_err_cnt, UMAC_RBUF_ERR_CNT),
+	STAT_GENET_MISC("mdf_err_cnt", mib.mdf_err_cnt, UMAC_MDF_ERR_CNT),
+};
+
+#define BCMGENET_STATS_LEN	ARRAY_SIZE(bcmgenet_gstrings_stats)
+
+static void bcmgenet_get_drvinfo(struct net_device *dev,
+		struct ethtool_drvinfo *info)
+{
+	strlcpy(info->driver, "bcmgenet", sizeof(info->driver));
+	strlcpy(info->version, "v2.0", sizeof(info->version));
+	info->n_stats = BCMGENET_STATS_LEN;
+
+}
+
+static int bcmgenet_get_sset_count(struct net_device *dev, int string_set)
+{
+	switch (string_set) {
+	case ETH_SS_STATS:
+		return BCMGENET_STATS_LEN;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static void bcmgenet_get_strings(struct net_device *dev,
+				u32 stringset, u8 *data)
+{
+	int i;
+
+	switch (stringset) {
+	case ETH_SS_STATS:
+		for (i = 0; i < BCMGENET_STATS_LEN; i++) {
+			memcpy(data + i * ETH_GSTRING_LEN,
+				bcmgenet_gstrings_stats[i].stat_string,
+				ETH_GSTRING_LEN);
+		}
+		break;
+	}
+}
+
+static void bcmgenet_update_mib_counters(struct bcmgenet_priv *priv)
+{
+	int i, j = 0;
+
+	for (i = 0; i < BCMGENET_STATS_LEN; i++) {
+		const struct bcmgenet_stats *s;
+		u32 val = 0;
+		char *p;
+		u8 offset = 0;
+
+		s = &bcmgenet_gstrings_stats[i];
+		switch (s->type) {
+		case BCMGENET_STAT_NETDEV:
+			continue;
+		case BCMGENET_STAT_MIB_RX:
+		case BCMGENET_STAT_MIB_TX:
+		case BCMGENET_STAT_RUNT:
+			if (s->type != BCMGENET_STAT_MIB_RX)
+				offset = BCMGENET_STAT_OFFSET;
+			val = bcmgenet_umac_readl(priv, UMAC_MIB_START +
+								j + offset);
+			break;
+		case BCMGENET_STAT_MISC:
+			val = bcmgenet_umac_readl(priv, s->reg_offset);
+			/* clear if overflowed */
+			if (val == ~0)
+				bcmgenet_umac_writel(priv, 0, s->reg_offset);
+			break;
+		}
+
+		j += s->stat_sizeof;
+		p = (char *)priv + s->stat_offset;
+		*(u32 *)p = val;
+	}
+}
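
A worked example of the offset arithmetic in bcmgenet_update_mib_counters(),
derived from the counter structures in bcmgenet.h; the absolute offsets are
only as accurate as those structure layouts:

/* struct bcmgenet_rx_counters is 29 u32s, i.e. 0x74 bytes, so once all RX
 * RSV stats have been accumulated j == 0x74.  The first TX stat,
 * "tx_64_octets", is then read from
 *
 *	UMAC_MIB_START + j + BCMGENET_STAT_OFFSET = 0x400 + 0x74 + 0xC = 0x480
 *
 * which lines up with the 0x4a8 offset documented for tx.pkts
 * (0x480 + 10 * sizeof(u32) for the preceding packet-size counters).
 */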
+
+static void bcmgenet_get_ethtool_stats(struct net_device *dev,
+					struct ethtool_stats *stats,
+					u64 *data)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int i;
+
+	mutex_lock(&priv->mib_mutex);
+	if (netif_running(dev))
+		bcmgenet_update_mib_counters(priv);
+
+	for (i = 0; i < BCMGENET_STATS_LEN; i++) {
+		const struct bcmgenet_stats *s;
+		char *p;
+
+		s = &bcmgenet_gstrings_stats[i];
+		if (s->type == BCMGENET_STAT_NETDEV)
+			p = (char *)&dev->stats;
+		else
+			p = (char *)priv;
+		p += s->stat_offset;
+		data[i] = *(u32 *)p;
+	}
+	mutex_unlock(&priv->mib_mutex);
+}
+
+/* standard ethtool support functions. */
+static struct ethtool_ops bcmgenet_ethtool_ops = {
+	.get_strings		= bcmgenet_get_strings,
+	.get_sset_count		= bcmgenet_get_sset_count,
+	.get_ethtool_stats	= bcmgenet_get_ethtool_stats,
+	.get_settings		= bcmgenet_get_settings,
+	.set_settings		= bcmgenet_set_settings,
+	.get_drvinfo		= bcmgenet_get_drvinfo,
+	.get_link		= ethtool_op_get_link,
+	.get_msglevel		= bcmgenet_get_msglevel,
+	.set_msglevel		= bcmgenet_set_msglevel,
+};
+
+/* Power down the unimac, based on mode. */
+static void bcmgenet_power_down(struct bcmgenet_priv *priv,
+				enum bcmgenet_power_mode mode)
+{
+	u32 reg;
+
+	switch (mode) {
+	case GENET_POWER_CABLE_SENSE:
+		if (priv->phydev)
+			phy_detach(priv->phydev);
+		break;
+
+	case GENET_POWER_PASSIVE:
+		/* Power down LED */
+		bcmgenet_mii_reset(priv->dev);
+		if (priv->hw_params->flags & GENET_HAS_EXT) {
+			reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+			reg |= (EXT_PWR_DOWN_PHY |
+				EXT_PWR_DOWN_DLL | EXT_PWR_DOWN_BIAS);
+			bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+		}
+		break;
+	default:
+		break;
+	}
+
+}
+
+static void bcmgenet_power_up(struct bcmgenet_priv *priv,
+				enum bcmgenet_power_mode mode)
+{
+	u32 reg;
+
+	switch (mode) {
+	case GENET_POWER_CABLE_SENSE:
+		/* enable APD */
+		if (priv->hw_params->flags & GENET_HAS_EXT) {
+			reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+			reg |= EXT_PWR_DN_EN_LD;
+			bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+			bcmgenet_mii_reset(priv->dev);
+		}
+		break;
+
+	case GENET_POWER_PASSIVE:
+		if (priv->hw_params->flags & GENET_HAS_EXT) {
+			reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+			reg &= ~EXT_PWR_DOWN_DLL;
+			reg &= ~EXT_PWR_DOWN_PHY;
+			reg &= ~EXT_PWR_DOWN_BIAS;
+			/* enable APD */
+			reg |= EXT_PWR_DN_EN_LD;
+			bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+			bcmgenet_mii_reset(priv->dev);
+		}
+		break;
+	default:
+		break;
+	}
+}
+
+/* ioctl handler for special commands that are not present in ethtool. */
+static int bcmgenet_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int val = 0;
+
+	if (!netif_running(dev))
+		return -EINVAL;
+
+	switch (cmd) {
+	case SIOCGMIIPHY:
+	case SIOCGMIIREG:
+	case SIOCSMIIREG:
+		if (!priv->phydev)
+			val = -ENODEV;
+		else
+			val = phy_mii_ioctl(priv->phydev, rq, cmd);
+		break;
+
+	default:
+		val = -EINVAL;
+		break;
+	}
+
+	return val;
+}
+
+static struct enet_cb *bcmgenet_get_txcb(struct bcmgenet_priv *priv,
+					 struct bcmgenet_tx_ring *ring)
+{
+	struct enet_cb *tx_cb_ptr;
+
+	tx_cb_ptr = ring->cbs;
+	tx_cb_ptr += ring->write_ptr - ring->cb_ptr;
+	tx_cb_ptr->bd_addr = priv->tx_bds + ring->write_ptr * DMA_DESC_SIZE;
+	/* Advancing local write pointer */
+	if (ring->write_ptr == ring->end_ptr)
+		ring->write_ptr = ring->cb_ptr;
+	else
+		ring->write_ptr++;
+
+	return tx_cb_ptr;
+}
+
+/* Simple helper to free a control block's resources */
+static void bcmgenet_free_cb(struct enet_cb *cb)
+{
+	dev_kfree_skb_any(cb->skb);
+	cb->skb = NULL;
+	dma_unmap_addr_set(cb, dma_addr, 0);
+}
+
+static inline void bcmgenet_tx_ring16_int_disable(struct bcmgenet_priv *priv,
+						  struct bcmgenet_tx_ring *ring)
+{
+	bcmgenet_intrl2_0_writel(priv,
+			UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE,
+			INTRL2_CPU_MASK_SET);
+}
+
+static inline void bcmgenet_tx_ring16_int_enable(struct bcmgenet_priv *priv,
+						 struct bcmgenet_tx_ring *ring)
+{
+	bcmgenet_intrl2_0_writel(priv,
+			UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE,
+			INTRL2_CPU_MASK_CLEAR);
+}
+
+static inline void bcmgenet_tx_ring_int_enable(struct bcmgenet_priv *priv,
+						struct bcmgenet_tx_ring *ring)
+{
+	bcmgenet_intrl2_1_writel(priv,
+			(1 << ring->index), INTRL2_CPU_MASK_CLEAR);
+	priv->int1_mask &= ~(1 << ring->index);
+}
+
+static inline void bcmgenet_tx_ring_int_disable(struct bcmgenet_priv *priv,
+						struct bcmgenet_tx_ring *ring)
+{
+	bcmgenet_intrl2_1_writel(priv,
+			(1 << ring->index), INTRL2_CPU_MASK_SET);
+	priv->int1_mask |= (1 << ring->index);
+}
+
+/* Unlocked version of the reclaim routine */
+static void __bcmgenet_tx_reclaim(struct net_device *dev,
+				struct bcmgenet_tx_ring *ring)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int last_tx_cn, last_c_index, num_tx_bds;
+	struct enet_cb *tx_cb_ptr;
+	unsigned int c_index;
+
+	/* Compute how many buffers were transmitted since the last xmit call */
+	c_index = bcmgenet_tdma_ring_readl(priv, ring->index, TDMA_CONS_INDEX);
+
+	last_c_index = ring->c_index;
+	num_tx_bds = ring->size;
+
+	c_index &= (num_tx_bds - 1);
+
+	if (c_index >= last_c_index)
+		last_tx_cn = c_index - last_c_index;
+	else
+		last_tx_cn = num_tx_bds - last_c_index + c_index;
+
+	netif_dbg(priv, tx_done, dev,
+			"%s ring=%d index=%d last_tx_cn=%d last_index=%d\n",
+			__func__, ring->index,
+			c_index, last_tx_cn, last_c_index);
+
+	/* Reclaim transmitted buffers */
+	while (last_tx_cn-- > 0) {
+		tx_cb_ptr = ring->cbs + last_c_index;
+		if (tx_cb_ptr->skb) {
+			dev->stats.tx_bytes += tx_cb_ptr->skb->len;
+			dma_unmap_single(&dev->dev,
+					dma_unmap_addr(tx_cb_ptr, dma_addr),
+					tx_cb_ptr->skb->len,
+					DMA_TO_DEVICE);
+			bcmgenet_free_cb(tx_cb_ptr);
+		} else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) {
+			dev->stats.tx_bytes +=
+				dma_unmap_len(tx_cb_ptr, dma_len);
+			dma_unmap_page(&dev->dev,
+					dma_unmap_addr(tx_cb_ptr, dma_addr),
+					dma_unmap_len(tx_cb_ptr, dma_len),
+					DMA_TO_DEVICE);
+			dma_unmap_addr_set(tx_cb_ptr, dma_addr, 0);
+		}
+		dev->stats.tx_packets++;
+		ring->free_bds += 1;
+
+		last_c_index++;
+		last_c_index &= (num_tx_bds - 1);
+	}
+
+	if (ring->free_bds > (MAX_SKB_FRAGS + 1))
+		ring->int_disable(priv, ring);
+
+	if (__netif_subqueue_stopped(dev, ring->queue))
+		netif_wake_subqueue(dev, ring->queue);
+
+	ring->c_index = c_index;
+}
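
The consumer-index delta above wraps modulo the ring size; a small worked
example with made-up numbers:

/* With num_tx_bds = 256, last_c_index = 250 and a (masked) hardware
 * consumer index of 4, the ring has wrapped, so:
 *
 *	last_tx_cn = num_tx_bds - last_c_index + c_index = 256 - 250 + 4 = 10
 *
 * i.e. ten descriptors completed since the previous reclaim.
 */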
+
+static void bcmgenet_tx_reclaim(struct net_device *dev,
+		struct bcmgenet_tx_ring *ring)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ring->lock, flags);
+	__bcmgenet_tx_reclaim(dev, ring);
+	spin_unlock_irqrestore(&ring->lock, flags);
+}
+
+static void bcmgenet_tx_reclaim_all(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int i;
+
+	if (netif_is_multiqueue(dev)) {
+		for (i = 0; i < priv->hw_params->tx_queues; i++)
+			bcmgenet_tx_reclaim(dev, &priv->tx_rings[i]);
+	}
+
+	bcmgenet_tx_reclaim(dev, &priv->tx_rings[DESC_INDEX]);
+}
+
+/* Transmits a single SKB (either the head of a fragmented SKB or an
+ * unfragmented SKB); the caller must hold priv->lock
+ */
+static int bcmgenet_xmit_single(struct net_device *dev,
+				struct sk_buff *skb,
+				u16 dma_desc_flags,
+				struct bcmgenet_tx_ring *ring)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct device *kdev = &priv->pdev->dev;
+	struct enet_cb *tx_cb_ptr;
+	unsigned int skb_len;
+	dma_addr_t mapping;
+	u32 length_status;
+	int ret;
+
+	tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
+
+	if (unlikely(!tx_cb_ptr))
+		BUG();
+
+	tx_cb_ptr->skb = skb;
+
+	skb_len = skb_headlen(skb) < ETH_ZLEN ? ETH_ZLEN : skb_headlen(skb);
+
+	mapping = dma_map_single(kdev, skb->data, skb_len, DMA_TO_DEVICE);
+	ret = dma_mapping_error(kdev, mapping);
+	if (ret) {
+		netif_err(priv, tx_err, dev, "Tx DMA map failed\n");
+		dev_kfree_skb(skb);
+		return ret;
+	}
+
+	dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
+	dma_unmap_len_set(tx_cb_ptr, dma_len, skb->len);
+	length_status = (skb_len << DMA_BUFLENGTH_SHIFT) | dma_desc_flags |
+			(priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT) |
+			DMA_TX_APPEND_CRC;
+
+	if (skb->ip_summed == CHECKSUM_PARTIAL)
+		length_status |= DMA_TX_DO_CSUM;
+
+	dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping, length_status);
+
+	/* Decrement total BD count and advance our write pointer */
+	ring->free_bds -= 1;
+	ring->prod_index += 1;
+	ring->prod_index &= DMA_P_INDEX_MASK;
+
+	return 0;
+}
+
+/* Transmit an SKB fragment */
+static int bcmgenet_xmit_frag(struct net_device *dev,
+				skb_frag_t *frag,
+				u16 dma_desc_flags,
+				struct bcmgenet_tx_ring *ring)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct device *kdev = &priv->pdev->dev;
+	struct enet_cb *tx_cb_ptr;
+	dma_addr_t mapping;
+	int ret;
+
+	tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
+
+	if (unlikely(!tx_cb_ptr))
+		BUG();
+	tx_cb_ptr->skb = NULL;
+
+	mapping = skb_frag_dma_map(kdev, frag, 0,
+		skb_frag_size(frag), DMA_TO_DEVICE);
+	ret = dma_mapping_error(kdev, mapping);
+	if (ret) {
+		netif_err(priv, tx_err, dev, "%s: Tx DMA map failed\n",
+				__func__);
+		return ret;
+	}
+
+	dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
+	dma_unmap_len_set(tx_cb_ptr, dma_len, frag->size);
+
+	dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping,
+			(frag->size << DMA_BUFLENGTH_SHIFT) | dma_desc_flags |
+			(priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT));
+
+
+	ring->free_bds -= 1;
+	ring->prod_index += 1;
+	ring->prod_index &= DMA_P_INDEX_MASK;
+
+	return 0;
+}
+
+/* Reallocate the SKB to put enough headroom in front of it and insert
+ * the transmit checksum offsets in the descriptors
+ */
+static int bcmgenet_put_tx_csum(struct net_device *dev, struct sk_buff *skb)
+{
+	struct status_64 *status = NULL;
+	struct sk_buff *new_skb;
+	u16 offset;
+	u8 ip_proto;
+	u16 ip_ver;
+	u32 tx_csum_info;
+
+	if (unlikely(skb_headroom(skb) < sizeof(*status))) {
+		/* If the 64-byte status block is enabled, we must make sure
+		 * the skb has enough headroom for us to insert it.
+		 */
+		new_skb = skb_realloc_headroom(skb, sizeof(*status));
+		dev_kfree_skb(skb);
+		if (!new_skb) {
+			dev->stats.tx_errors++;
+			dev->stats.tx_dropped++;
+			return -ENOMEM;
+		}
+		skb = new_skb;
+	}
+
+	skb_push(skb, sizeof(*status));
+	status = (struct status_64 *)skb->data;
+
+	if (skb->ip_summed  == CHECKSUM_PARTIAL) {
+		ip_ver = htons(skb->protocol);
+		switch (ip_ver) {
+		case ETH_P_IP:
+			ip_proto = ip_hdr(skb)->protocol;
+			break;
+		case ETH_P_IPV6:
+			ip_proto = ipv6_hdr(skb)->nexthdr;
+			break;
+		default:
+			return 0;
+		}
+
+		offset = skb_checksum_start_offset(skb) - sizeof(*status);
+		tx_csum_info = (offset << STATUS_TX_CSUM_START_SHIFT) |
+				(offset + skb->csum_offset);
+
+		/* Set the length valid bit for TCP and UDP and just set
+		 * the special UDP flag for IPv4, else just set to 0.
+		 */
+		if (ip_proto == IPPROTO_TCP || ip_proto == IPPROTO_UDP) {
+			tx_csum_info |= STATUS_TX_CSUM_LV;
+			if (ip_proto == IPPROTO_UDP && ip_ver == ETH_P_IP)
+				tx_csum_info |= STATUS_TX_CSUM_PROTO_UDP;
+		} else
+			tx_csum_info = 0;
+
+		status->tx_csum_info = tx_csum_info;
+	}
+
+	return 0;
+}
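
For a hypothetical UDP over IPv4 frame with no VLAN tag (14-byte Ethernet
header plus 20-byte IP header), the packing above works out roughly as
follows; treat the numbers as illustrative only:

/* After the 64-byte skb_push(), skb_checksum_start_offset() returns 64 + 34,
 * so offset = 34 (0x22); UDP's csum_offset is 6, so the low half is 40 (0x28):
 *
 *	tx_csum_info = (0x22 << STATUS_TX_CSUM_START_SHIFT) | 0x28
 *		     | STATUS_TX_CSUM_PROTO_UDP | STATUS_TX_CSUM_LV
 *		     = 0x80228028
 */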
+
+static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct bcmgenet_tx_ring *ring = NULL;
+	unsigned long flags = 0;
+	int nr_frags, index;
+	u16 dma_desc_flags;
+	int ret;
+	int i;
+
+	index = skb_get_queue_mapping(skb);
+	/* Mapping strategy:
+	 * queue_mapping = 0, unclassified, packet transmitted through ring 16
+	 * queue_mapping = 1, goes to ring 0 (highest priority queue).
+	 * queue_mapping = 2, goes to ring 1.
+	 * queue_mapping = 3, goes to ring 2.
+	 * queue_mapping = 4, goes to ring 3.
+	 */
+	if (index == 0)
+		index = DESC_INDEX;
+	else
+		index -= 1;
+
+	if ((index != DESC_INDEX) && (index > priv->hw_params->tx_queues - 1)) {
+		netdev_err(dev, "%s: queue_mapping %d is invalid\n",
+				__func__, skb_get_queue_mapping(skb));
+		dev->stats.tx_errors++;
+		dev->stats.tx_dropped++;
+		ret = NETDEV_TX_OK;
+		goto out;
+	}
+	nr_frags = skb_shinfo(skb)->nr_frags;
+	ring = &priv->tx_rings[index];
+
+	spin_lock_irqsave(&ring->lock, flags);
+	if (ring->free_bds <= nr_frags + 1) {
+		netif_stop_subqueue(dev, ring->queue);
+		netdev_err(dev, "%s: tx ring %d full when queue %d awake\n",
+				__func__, index, ring->queue);
+		ret = NETDEV_TX_BUSY;
+		goto out;
+	}
+
+	/* reclaim transmitted skbs every 8 packets. */
+	/*if (ring->free_bds < ring->size - 8)*/
+		/*__bcmgenet_tx_reclaim(dev, ring);*/
+
+	/* set the SKB transmit checksum */
+	if (priv->desc_64b_en) {
+		ret = bcmgenet_put_tx_csum(dev, skb);
+		if (ret) {
+			ret = NETDEV_TX_OK;
+			goto out;
+		}
+	}
+
+	dma_desc_flags = DMA_SOP;
+	if (nr_frags == 0)
+		dma_desc_flags |= DMA_EOP;
+
+	/* Transmit single SKB or head of fragment list */
+	ret = bcmgenet_xmit_single(dev, skb, dma_desc_flags, ring);
+	if (ret) {
+		ret = NETDEV_TX_OK;
+		goto out;
+	}
+
+	/* xmit fragment */
+	for (i = 0; i < nr_frags; i++) {
+		ret = bcmgenet_xmit_frag(dev,
+				&skb_shinfo(skb)->frags[i],
+				(i == nr_frags - 1) ? DMA_EOP : 0, ring);
+		if (ret) {
+			ret = NETDEV_TX_OK;
+			goto out;
+		}
+	}
+
+	/* we kept a software copy of how much we should advance the TDMA
+	 * producer index, now write it down to the hardware
+	 */
+	bcmgenet_tdma_ring_writel(priv, ring->index,
+			ring->prod_index, TDMA_PROD_INDEX);
+
+	if (ring->free_bds <= (MAX_SKB_FRAGS + 1)) {
+		netif_stop_subqueue(dev, ring->queue);
+		ring->int_enable(priv, ring);
+	}
+
+out:
+	spin_unlock_irqrestore(&ring->lock, flags);
+
+	return ret;
+}
+
+
+static int bcmgenet_rx_refill(struct bcmgenet_priv *priv,
+				struct enet_cb *cb)
+{
+	struct device *kdev = &priv->pdev->dev;
+	struct sk_buff *skb;
+	dma_addr_t mapping;
+	int ret;
+
+	skb = netdev_alloc_skb(priv->dev,
+				priv->rx_buf_len + SKB_ALIGNMENT);
+	if (!skb)
+		return -ENOMEM;
+
+	/* a caller did not release this control block */
+	WARN_ON(cb->skb != NULL);
+	cb->skb = skb;
+	mapping = dma_map_single(kdev, skb->data,
+			priv->rx_buf_len, DMA_FROM_DEVICE);
+	ret = dma_mapping_error(kdev, mapping);
+	if (ret) {
+		bcmgenet_free_cb(cb);
+		netif_err(priv, rx_err, priv->dev,
+				"%s DMA map failed\n", __func__);
+		return ret;
+	}
+
+	dma_unmap_addr_set(cb, dma_addr, mapping);
+	/* assign packet, prepare descriptor, and advance pointer */
+
+	dmadesc_set_addr(priv, priv->rx_bd_assign_ptr, mapping);
+
+	/* turn on the newly assigned BD for DMA to use */
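+	/* num_rx_bds is TOTAL_DESC, a power of two, so the masking below
+	 * wraps the assign index around the ring
+	 */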
+	priv->rx_bd_assign_index++;
+	priv->rx_bd_assign_index &= (priv->num_rx_bds - 1);
+
+	priv->rx_bd_assign_ptr = priv->rx_bds +
+		(priv->rx_bd_assign_index * DMA_DESC_SIZE);
+
+	return 0;
+}
+
+/* bcmgenet_desc_rx - descriptor based rx process.
+ * this could be called from bottom half, or from NAPI polling method.
+ */
+static unsigned int bcmgenet_desc_rx(struct bcmgenet_priv *priv,
+				     unsigned int budget)
+{
+	struct net_device *dev = priv->dev;
+	struct enet_cb *cb;
+	struct sk_buff *skb;
+	u32 dma_length_status;
+	unsigned long dma_flag;
+	int len, err;
+	unsigned int rxpktprocessed = 0, rxpkttoprocess;
+	unsigned int p_index;
+	unsigned int chksum_ok = 0;
+
+	p_index = bcmgenet_rdma_ring_readl(priv,
+			DESC_INDEX, RDMA_PROD_INDEX);
+	p_index &= DMA_P_INDEX_MASK;
+
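+	/* The producer index is a free-running masked counter, handle the
+	 * case where it has wrapped around below our last consumer index.
+	 */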
+	if (p_index < priv->rx_c_index)
+		rxpkttoprocess = (DMA_C_INDEX_MASK + 1) -
+			priv->rx_c_index + p_index;
+	else
+		rxpkttoprocess = p_index - priv->rx_c_index;
+
+	netif_dbg(priv, rx_status, dev,
+		"RDMA: rxpkttoprocess=%d\n", rxpkttoprocess);
+
+	while ((rxpktprocessed < rxpkttoprocess) &&
+			(rxpktprocessed < budget)) {
+
+		/* Unmap the packet contents such that we can use the
+		 * RSV from the 64 bytes descriptor when enabled and save
+		 * a 32-bits register read
+		 */
+		cb = &priv->rx_cbs[priv->rx_read_ptr];
+		skb = cb->skb;
+		dma_unmap_single(&dev->dev, dma_unmap_addr(cb, dma_addr),
+				priv->rx_buf_len, DMA_FROM_DEVICE);
+
+		if (!priv->desc_64b_en) {
+			dma_length_status = dmadesc_get_length_status(priv,
+							priv->rx_bds +
+							(priv->rx_read_ptr *
+							 DMA_DESC_SIZE));
+		} else {
+			struct status_64 *status;
+			status = (struct status_64 *)skb->data;
+			dma_length_status = status->length_status;
+		}
+
+		/* DMA flags and length are still valid no matter how
+		 * we got the Receive Status Vector (64B RSB or register)
+		 */
+		dma_flag = dma_length_status & 0xffff;
+		len = dma_length_status >> DMA_BUFLENGTH_SHIFT;
+
+		netif_dbg(priv, rx_status, dev,
+			"%s: p_ind=%d c_ind=%d read_ptr=%d len_stat=0x%08x\n",
+			__func__, p_index, priv->rx_c_index, priv->rx_read_ptr,
+			dma_length_status);
+
+		rxpktprocessed++;
+
+		priv->rx_read_ptr++;
+		priv->rx_read_ptr &= (priv->num_rx_bds - 1);
+
+		/* out of memory, just drop packets at the hardware level */
+		if (unlikely(!skb)) {
+			dev->stats.rx_dropped++;
+			dev->stats.rx_errors++;
+			goto refill;
+		}
+
+		if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) {
+			netif_err(priv, rx_status, dev,
+					"Dropping fragmented packet!\n");
+			dev->stats.rx_dropped++;
+			dev->stats.rx_errors++;
+			dev_kfree_skb_any(cb->skb);
+			cb->skb = NULL;
+			goto refill;
+		}
+		/* report errors */
+		if (unlikely(dma_flag & (DMA_RX_CRC_ERROR |
+						DMA_RX_OV |
+						DMA_RX_NO |
+						DMA_RX_LG |
+						DMA_RX_RXER))) {
+			netif_err(priv, rx_status, dev, "dma_flag=0x%x\n",
+						(unsigned int)dma_flag);
+			if (dma_flag & DMA_RX_CRC_ERROR)
+				dev->stats.rx_crc_errors++;
+			if (dma_flag & DMA_RX_OV)
+				dev->stats.rx_over_errors++;
+			if (dma_flag & DMA_RX_NO)
+				dev->stats.rx_frame_errors++;
+			if (dma_flag & DMA_RX_LG)
+				dev->stats.rx_length_errors++;
+			dev->stats.rx_dropped++;
+			dev->stats.rx_errors++;
+
+			/* discard the packet and advance consumer index.*/
+			dev_kfree_skb_any(cb->skb);
+			cb->skb = NULL;
+			goto refill;
+		} /* error packet */
+
+		chksum_ok = (dma_flag & priv->dma_rx_chk_bit) &&
+				priv->desc_rxchk_en;
+
+		skb_put(skb, len);
+		if (priv->desc_64b_en) {
+			skb_pull(skb, 64);
+			len -= 64;
+		}
+
+		if (likely(chksum_ok))
+			skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+		/* remove hardware 2 bytes added for IP alignment */
+		skb_pull(skb, 2);
+		len -= 2;
+
+		if (priv->crc_fwd_en) {
+			skb_trim(skb, len - ETH_FCS_LEN);
+			len -= ETH_FCS_LEN;
+		}
+
+		/* Finish setting up the received SKB and send it to
+		 * the kernel
+		 */
+		skb->protocol = eth_type_trans(skb, priv->dev);
+		dev->stats.rx_packets++;
+		dev->stats.rx_bytes += len;
+		if (dma_flag & DMA_RX_MULT)
+			dev->stats.multicast++;
+
+		/* Notify kernel */
+		napi_gro_receive(&priv->napi, skb);
+		cb->skb = NULL;
+		netif_dbg(priv, rx_status, dev, "pushed up to kernel\n");
+
+		/* refill RX path on the current control block */
+refill:
+		err = bcmgenet_rx_refill(priv, cb);
+		if (err)
+			netif_err(priv, rx_err, dev, "Rx refill failed\n");
+	}
+
+	return rxpktprocessed;
+}
+
+/* Assign skb to RX DMA descriptor. */
+static int bcmgenet_alloc_rx_buffers(struct bcmgenet_priv *priv)
+{
+	struct enet_cb *cb;
+	int ret = 0;
+	int i;
+	u32 reg;
+
+	netif_dbg(priv, hw, priv->dev, "%s:\n", __func__);
+
+	/* This function may be called from irq bottom-half. */
+	spin_lock_bh(&priv->bh_lock);
+
+	/* loop here for each buffer needing assign */
+	for (i = 0; i < priv->num_rx_bds; i++) {
+		cb = &priv->rx_cbs[priv->rx_bd_assign_index];
+		if (cb->skb)
+			continue;
+
+		/* set the DMA descriptor length once and for all
+		 * it will only change if we support dynamically sizing
+		 * priv->rx_buf_len, but we do not
+		 */
+		dmadesc_set_length_status(priv, priv->rx_bd_assign_ptr,
+				priv->rx_buf_len << DMA_BUFLENGTH_SHIFT);
+
+		ret = bcmgenet_rx_refill(priv, cb);
+		if (ret)
+			break;
+
+	}
+
+	/* Enable rx DMA in case it was disabled due to running out of rx BDs */
+	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+	reg |= DMA_EN;
+	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+
+	spin_unlock_bh(&priv->bh_lock);
+
+	return ret;
+}
+
+static void bcmgenet_free_rx_buffers(struct bcmgenet_priv *priv)
+{
+	struct enet_cb *cb;
+	int i;
+
+	for (i = 0; i < priv->num_rx_bds; i++) {
+		cb = &priv->rx_cbs[i];
+
+		if (dma_unmap_addr(cb, dma_addr)) {
+			dma_unmap_single(&priv->dev->dev,
+					dma_unmap_addr(cb, dma_addr),
+					priv->rx_buf_len, DMA_FROM_DEVICE);
+			dma_unmap_addr_set(cb, dma_addr, 0);
+		}
+
+		if (cb->skb)
+			bcmgenet_free_cb(cb);
+	}
+}
+
+static int reset_umac(struct bcmgenet_priv *priv)
+{
+	struct device *kdev = &priv->pdev->dev;
+	unsigned int timeout = 0;
+	u32 reg;
+
+	/* 7358a0/7552a0: bad default in RBUF_FLUSH_CTRL.umac_sw_rst */
+	bcmgenet_rbuf_ctrl_set(priv, 0);
+	udelay(10);
+
+	/* disable MAC while updating its registers */
+	bcmgenet_umac_writel(priv, 0, UMAC_CMD);
+
+	/* issue soft reset, wait for it to complete */
+	bcmgenet_umac_writel(priv, CMD_SW_RESET, UMAC_CMD);
+	while (timeout++ < 1000) {
+		reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+		if (!(reg & CMD_SW_RESET))
+			break;
+		udelay(1);
+	}
+
+	if (timeout == 1000) {
+		dev_err(kdev,
+			"timeout waiting for MAC to come out of reset\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+/* init_umac: Initializes the uniMac controller */
+static int init_umac(struct bcmgenet_priv *priv)
+{
+	struct device *kdev = &priv->pdev->dev;
+	int ret;
+	u32 reg, cpu_mask_clear;
+
+	dev_dbg(&priv->pdev->dev, "bcmgenet: init_umac\n");
+
+	ret = reset_umac(priv);
+	if (ret)
+		return ret;
+
+	bcmgenet_umac_writel(priv, 0, UMAC_CMD);
+	/* clear tx/rx counter */
+	bcmgenet_umac_writel(priv,
+		MIB_RESET_RX | MIB_RESET_TX | MIB_RESET_RUNT, UMAC_MIB_CTRL);
+	bcmgenet_umac_writel(priv, 0, UMAC_MIB_CTRL);
+
+	bcmgenet_umac_writel(priv, ENET_MAX_MTU_SIZE, UMAC_MAX_FRAME_LEN);
+
+	/* init rx registers, enable ip header optimization */
+	reg = bcmgenet_rbuf_readl(priv, RBUF_CTRL);
+	reg |= RBUF_ALIGN_2B;
+	bcmgenet_rbuf_writel(priv, reg, RBUF_CTRL);
+
+	if (!GENET_IS_V1(priv) && !GENET_IS_V2(priv))
+		bcmgenet_rbuf_writel(priv, 1, RBUF_TBUF_SIZE_CTRL);
+
+	/* Mask all interrupts.*/
+	bcmgenet_intrl2_0_writel(priv, 0xFFFFFFFF, INTRL2_CPU_MASK_SET);
+	bcmgenet_intrl2_0_writel(priv, 0xFFFFFFFF, INTRL2_CPU_CLEAR);
+	bcmgenet_intrl2_0_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
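+	/* (MASK_SET disables a source, CPU_CLEAR acks pending status,
+	 * MASK_CLEAR re-enables a source; the sources we care about are
+	 * unmasked below through cpu_mask_clear)
+	 */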
+
+	cpu_mask_clear = UMAC_IRQ_RXDMA_BDONE;
+
+	dev_dbg(kdev, "%s:Enabling RXDMA_BDONE interrupt\n", __func__);
+
+	/* Monitor cable plug/unplug events for internal and external PHYs */
+	if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL ||
+	    priv->ext_phy)
+		cpu_mask_clear |= (UMAC_IRQ_LINK_DOWN | UMAC_IRQ_LINK_UP);
+	else if (priv->phy_interface == PHY_INTERFACE_MODE_MOCA) {
+		reg = bcmgenet_bp_mc_get(priv);
+		reg |= BIT(priv->hw_params->bp_in_en_shift);
+
+		/* bp_mask: back pressure mask */
+		if (netif_is_multiqueue(priv->dev))
+			reg |= priv->hw_params->bp_in_mask;
+		else
+			reg &= ~priv->hw_params->bp_in_mask;
+		bcmgenet_bp_mc_set(priv, reg);
+	}
+
+	/* Enable MDIO interrupts on GENET v3+ */
+	if (priv->hw_params->flags & GENET_HAS_MDIO_INTR)
+		cpu_mask_clear |= UMAC_IRQ_MDIO_DONE | UMAC_IRQ_MDIO_ERROR;
+
+	bcmgenet_intrl2_0_writel(priv, cpu_mask_clear,
+		INTRL2_CPU_MASK_CLEAR);
+
+	/* Enable rx/tx engine.*/
+	dev_dbg(kdev, "done init umac\n");
+
+	return 0;
+}
+
+/* Initialize all house-keeping variables for a TX ring, along
+ * with corresponding hardware registers
+ */
+static void bcmgenet_init_tx_ring(struct bcmgenet_priv *priv,
+				  unsigned int index, unsigned int size,
+				  unsigned int write_ptr, unsigned int end_ptr)
+{
+	struct bcmgenet_tx_ring *ring = &priv->tx_rings[index];
+	u32 words_per_bd = WORDS_PER_BD(priv);
+	u32 flow_period_val = 0;
+	unsigned int first_bd;
+
+	spin_lock_init(&ring->lock);
+	ring->index = index;
+	if (index == DESC_INDEX) {
+		ring->queue = 0;
+		ring->int_enable = bcmgenet_tx_ring16_int_enable;
+		ring->int_disable = bcmgenet_tx_ring16_int_disable;
+	} else {
+		ring->queue = index + 1;
+		ring->int_enable = bcmgenet_tx_ring_int_enable;
+		ring->int_disable = bcmgenet_tx_ring_int_disable;
+	}
+	ring->cbs = priv->tx_cbs + write_ptr;
+	ring->size = size;
+	ring->c_index = 0;
+	ring->free_bds = size;
+	ring->write_ptr = write_ptr;
+	ring->cb_ptr = write_ptr;
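+	/* end_ptr is passed as an exclusive bound, keep the last usable BD */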
+	ring->end_ptr = end_ptr - 1;
+	ring->prod_index = 0;
+
+	/* Set flow period for ring != 16 */
+	if (index != DESC_INDEX)
+		flow_period_val = ENET_MAX_MTU_SIZE << 16;
+
+	bcmgenet_tdma_ring_writel(priv, index, 0, TDMA_PROD_INDEX);
+	bcmgenet_tdma_ring_writel(priv, index, 0, TDMA_CONS_INDEX);
+	bcmgenet_tdma_ring_writel(priv, index, 1, DMA_MBUF_DONE_THRESH);
+	/* Disable rate control for now */
+	bcmgenet_tdma_ring_writel(priv, index, flow_period_val,
+			TDMA_FLOW_PERIOD);
+	/* Unclassified traffic goes to ring 16 */
+	bcmgenet_tdma_ring_writel(priv, index,
+			((size << DMA_RING_SIZE_SHIFT) | RX_BUF_LENGTH),
+			DMA_RING_BUF_SIZE);
+
+	first_bd = write_ptr;
+
+	/* Set start and end address, read and write pointers */
+	bcmgenet_tdma_ring_writel(priv, index, first_bd * words_per_bd,
+			DMA_START_ADDR);
+	bcmgenet_tdma_ring_writel(priv, index, first_bd * words_per_bd,
+			TDMA_READ_PTR);
+	bcmgenet_tdma_ring_writel(priv, index, first_bd,
+			TDMA_WRITE_PTR);
+	bcmgenet_tdma_ring_writel(priv, index, end_ptr * words_per_bd - 1,
+			DMA_END_ADDR);
+}
+
+/* Initialize a RDMA ring */
+static int bcmgenet_init_rx_ring(struct bcmgenet_priv *priv,
+				  unsigned int index, unsigned int size)
+{
+	u32 words_per_bd = WORDS_PER_BD(priv);
+	int ret;
+
+	priv->num_rx_bds = TOTAL_DESC;
+	priv->rx_bds = priv->base + priv->hw_params->rdma_offset;
+	priv->rx_bd_assign_ptr = priv->rx_bds;
+	priv->rx_bd_assign_index = 0;
+	priv->rx_c_index = 0;
+	priv->rx_read_ptr = 0;
+	priv->rx_cbs = kzalloc(priv->num_rx_bds * sizeof(struct enet_cb),
+				GFP_KERNEL);
+	if (!priv->rx_cbs)
+		return -ENOMEM;
+
+	ret = bcmgenet_alloc_rx_buffers(priv);
+	if (ret) {
+		kfree(priv->rx_cbs);
+		return ret;
+	}
+
+	bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_WRITE_PTR);
+	bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_PROD_INDEX);
+	bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_CONS_INDEX);
+	bcmgenet_rdma_ring_writel(priv, index,
+		((size << DMA_RING_SIZE_SHIFT) | RX_BUF_LENGTH),
+		DMA_RING_BUF_SIZE);
+	bcmgenet_rdma_ring_writel(priv, index, 0, DMA_START_ADDR);
+	bcmgenet_rdma_ring_writel(priv, index,
+		words_per_bd * size - 1, DMA_END_ADDR);
+	bcmgenet_rdma_ring_writel(priv, index,
+			(DMA_FC_THRESH_LO << DMA_XOFF_THRESHOLD_SHIFT) |
+			DMA_FC_THRESH_HI, RDMA_XON_XOFF_THRESH);
+	bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_READ_PTR);
+
+	return ret;
+}
+
+/* init multi xmit queues, only available for GENET2+
+ * the queues are partitioned as follows:
+ *
+ * queues 0 - 3 are priority based, each one has 32 descriptors,
+ * with queue 0 being the highest priority queue.
+ *
+ * queue 16 is the default tx queue with GENET_DEFAULT_BD_CNT
+ * descriptors: 256 - (number of tx queues * BDs per queue) = 128
+ * descriptors.
+ *
+ * The transmit control block pool is then partitioned as follows:
+ * - tx_rings[0].cbs points to tx_cbs[0..31]
+ * - tx_rings[1].cbs points to tx_cbs[32..63]
+ * - tx_rings[2].cbs points to tx_cbs[64..95]
+ * - tx_rings[3].cbs points to tx_cbs[96..127]
+ * - tx_cbs[128..255] are left to the default tx queue (ring 16)
+ */
+static void bcmgenet_init_multiq(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	unsigned int i, dma_enable;
+	u32 reg, dma_ctrl, ring_cfg = 0, dma_priority = 0;
+
+	if (!netif_is_multiqueue(dev)) {
+		netdev_warn(dev, "called with non multi queue aware HW\n");
+		return;
+	}
+
+	dma_ctrl = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	dma_enable = dma_ctrl & DMA_EN;
+	dma_ctrl &= ~DMA_EN;
+	bcmgenet_tdma_writel(priv, dma_ctrl, DMA_CTRL);
+
+	/* Enable strict priority arbiter mode */
+	bcmgenet_tdma_writel(priv, DMA_ARBITER_SP, DMA_ARB_CTRL);
+
+	for (i = 0; i < priv->hw_params->tx_queues; i++) {
+		/* ring i claims tx_cbs[i * bds_cnt .. (i + 1) * bds_cnt - 1],
+		 * the remaining control blocks are left to the default
+		 * tx queue (ring 16)
+		 */
+		bcmgenet_init_tx_ring(priv, i, priv->hw_params->bds_cnt,
+					i * priv->hw_params->bds_cnt,
+					(i + 1) * priv->hw_params->bds_cnt);
+
+		/* Configure ring as descriptor ring and set up priority */
+		ring_cfg |= (1 << i);
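+		/* each ring has a (GENET_MAX_MQ_CNT + 1)-bit wide priority
+		 * field, ring i's value lands at bit (GENET_MAX_MQ_CNT + 1) * i
+		 */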
+		dma_priority |= ((GENET_Q0_PRIORITY + i) <<
+				(GENET_MAX_MQ_CNT + 1) * i);
+		dma_ctrl |= (1 << (i + DMA_RING_BUF_EN_SHIFT));
+	}
+
+	/* Enable rings */
+	reg = bcmgenet_tdma_readl(priv, DMA_RING_CFG);
+	reg |= ring_cfg;
+	bcmgenet_tdma_writel(priv, reg, DMA_RING_CFG);
+
+	/* Use configured rings priority and set ring #16 priority */
+	reg = bcmgenet_tdma_readl(priv, DMA_RING_PRIORITY);
+	reg |= ((GENET_Q0_PRIORITY + priv->hw_params->tx_queues) << 20);
+	reg |= dma_priority;
+	bcmgenet_tdma_writel(priv, reg, DMA_PRIORITY);
+
+	/* Configure ring as descriptor ring and re-enable DMA if enabled */
+	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	reg |= dma_ctrl;
+	if (dma_enable)
+		reg |= DMA_EN;
+	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+}
+
+static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
+{
+	int i;
+
+	/* disable DMA */
+	bcmgenet_rdma_writel(priv, 0, DMA_CTRL);
+	bcmgenet_tdma_writel(priv, 0, DMA_CTRL);
+
+	for (i = 0; i < priv->num_tx_bds; i++) {
+		if (priv->tx_cbs[i].skb != NULL) {
+			dev_kfree_skb(priv->tx_cbs[i].skb);
+			priv->tx_cbs[i].skb = NULL;
+		}
+	}
+	bcmgenet_free_rx_buffers(priv);
+	kfree(priv->rx_cbs);
+	kfree(priv->tx_cbs);
+}
+
+/* init_edma: Initialize DMA control register */
+static int bcmgenet_init_dma(struct bcmgenet_priv *priv)
+{
+	int ret;
+
+	netif_dbg(priv, hw, priv->dev, "bcmgenet: init_edma\n");
+
+	/* by default, enable ring 16 (descriptor based) */
+	ret = bcmgenet_init_rx_ring(priv, DESC_INDEX, TOTAL_DESC);
+	if (ret) {
+		netdev_err(priv->dev, "failed to initialize RX ring\n");
+		return ret;
+	}
+
+	/* init rDma */
+	bcmgenet_rdma_writel(priv, DMA_MAX_BURST_LENGTH, DMA_SCB_BURST_SIZE);
+
+	/* Init tDma */
+	bcmgenet_tdma_writel(priv, DMA_MAX_BURST_LENGTH, DMA_SCB_BURST_SIZE);
+
+	/* Initialize common TX ring structures */
+	priv->tx_bds = priv->base + priv->hw_params->tdma_offset;
+	priv->num_tx_bds = TOTAL_DESC;
+	priv->tx_cbs = kzalloc(priv->num_tx_bds * sizeof(struct enet_cb),
+				GFP_KERNEL);
+	if (!priv->tx_cbs) {
+		bcmgenet_fini_dma(priv);
+		return -ENOMEM;
+	}
+
+	/* initialize multi xmit queue */
+	bcmgenet_init_multiq(priv->dev);
+
+	/* initialize special ring 16 */
+	bcmgenet_init_tx_ring(priv, DESC_INDEX, GENET_DEFAULT_BD_CNT,
+			priv->hw_params->tx_queues * priv->hw_params->bds_cnt,
+			TOTAL_DESC);
+
+	return 0;
+}
+
+/* NAPI polling method*/
+static int bcmgenet_poll(struct napi_struct *napi, int budget)
+{
+	struct bcmgenet_priv *priv = container_of(napi,
+			struct bcmgenet_priv, napi);
+	unsigned int work_done;
+
+	work_done = bcmgenet_desc_rx(priv, budget);
+
+	/* tx reclaim */
+	bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]);
+	/* Advancing our consumer index*/
+	priv->rx_c_index += work_done;
+	priv->rx_c_index &= DMA_C_INDEX_MASK;
+	bcmgenet_rdma_ring_writel(priv, DESC_INDEX,
+				priv->rx_c_index, RDMA_CONS_INDEX);
+	if (work_done < budget) {
+		napi_complete(napi);
+		bcmgenet_intrl2_0_writel(priv,
+			UMAC_IRQ_RXDMA_BDONE, INTRL2_CPU_MASK_CLEAR);
+	}
+
+	return work_done;
+}
+
+/* Interrupt bottom half */
+static void bcmgenet_irq_task(struct work_struct *work)
+{
+	struct bcmgenet_priv *priv = container_of(
+			work, struct bcmgenet_priv, bcmgenet_irq_work);
+	struct net_device *dev;
+	u32 reg;
+
+	dev = priv->dev;
+
+	netif_dbg(priv, intr, dev, "%s\n", __func__);
+	/* Cable plugged/unplugged event */
+	if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL) {
+		if (priv->irq0_stat & UMAC_IRQ_PHY_DET_R) {
+			priv->irq0_stat &= ~UMAC_IRQ_PHY_DET_R;
+			netif_crit(priv, link, dev,
+				"cable plugged in, powering up\n");
+			bcmgenet_power_up(priv, GENET_POWER_CABLE_SENSE);
+		} else if (priv->irq0_stat & UMAC_IRQ_PHY_DET_F) {
+			priv->irq0_stat &= ~UMAC_IRQ_PHY_DET_F;
+			netif_crit(priv, link, dev,
+				"cable unplugged, powering down\n");
+			bcmgenet_power_down(priv, GENET_POWER_CABLE_SENSE);
+		}
+	}
+	if (priv->irq0_stat & UMAC_IRQ_MPD_R) {
+		priv->irq0_stat &= ~UMAC_IRQ_MPD_R;
+		netif_crit(priv, wol, dev,
+			"magic packet detected, waking up\n");
+		/* disable mpd interrupt */
+		bcmgenet_intrl2_0_writel(priv,
+			UMAC_IRQ_MPD_R, INTRL2_CPU_MASK_SET);
+		/* disable CRC forward.*/
+		reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+		reg &= ~CMD_CRC_FWD;
+		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+		priv->crc_fwd_en = 0;
+		bcmgenet_power_up(priv, GENET_POWER_WOL_MAGIC);
+
+	} else if (priv->irq0_stat & (UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM)) {
+		priv->irq0_stat &= ~(UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM);
+		netif_crit(priv, wol, dev,
+			"ACPI pattern matched, waking up\n");
+		/* disable HFB match interrupts */
+		bcmgenet_intrl2_0_writel(priv,
+			UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM, INTRL2_CPU_MASK_SET);
+		bcmgenet_power_up(priv, GENET_POWER_WOL_ACPI);
+	}
+
+	/* Link UP/DOWN event */
+	if ((priv->hw_params->flags & GENET_HAS_MDIO_INTR) &&
+		(priv->irq0_stat & (UMAC_IRQ_LINK_UP|UMAC_IRQ_LINK_DOWN))) {
+		if (priv->phydev)
+			phy_mac_interrupt(priv->phydev,
+				(priv->irq0_stat & UMAC_IRQ_LINK_UP));
+		priv->irq0_stat &= ~(UMAC_IRQ_LINK_UP|UMAC_IRQ_LINK_DOWN);
+	}
+}
+
+/* bcmgenet_isr1: interrupt handler for ring buffer. */
+static irqreturn_t bcmgenet_isr1(int irq, void *dev_id)
+{
+	struct bcmgenet_priv *priv = dev_id;
+	unsigned int index;
+
+	/* Save irq status for bottom-half processing. */
+	priv->irq1_stat =
+		bcmgenet_intrl2_1_readl(priv, INTRL2_CPU_STAT) &
+		~priv->int1_mask;
+	/* clear interrupts */
+	bcmgenet_intrl2_1_writel(priv, priv->irq1_stat, INTRL2_CPU_CLEAR);
+
+	netif_dbg(priv, intr, priv->dev,
+		"%s: IRQ=0x%x\n", __func__, priv->irq1_stat);
+	/* Check the MBDONE interrupts.
+	 * packet is done, reclaim descriptors
+	 */
+	if (priv->irq1_stat & 0x0000ffff) {
+		for (index = 0; index < 16; index++) {
+			if (priv->irq1_stat & (1 << index))
+				bcmgenet_tx_reclaim(priv->dev,
+						&priv->tx_rings[index]);
+		}
+	}
+	return IRQ_HANDLED;
+}
+
+/* bcmgenet_isr0: Handle various interrupts. */
+static irqreturn_t bcmgenet_isr0(int irq, void *dev_id)
+{
+	struct bcmgenet_priv *priv = dev_id;
+
+	/* Save irq status for bottom-half processing. */
+	priv->irq0_stat =
+		bcmgenet_intrl2_0_readl(priv, INTRL2_CPU_STAT) &
+		~bcmgenet_intrl2_0_readl(priv, INTRL2_CPU_MASK_STATUS);
+	/* clear interrupts */
+	bcmgenet_intrl2_0_writel(priv, priv->irq0_stat, INTRL2_CPU_CLEAR);
+
+	netif_dbg(priv, intr, priv->dev,
+		"IRQ=0x%x\n", priv->irq0_stat);
+
+	if (priv->irq0_stat & (UMAC_IRQ_RXDMA_BDONE | UMAC_IRQ_RXDMA_PDONE)) {
+		/* We use NAPI (software interrupt throttling) if
+		 * Rx Descriptor throttling is not used.
+		 * Disable the interrupt, it will be re-enabled in the
+		 * poll method.
+		 */
+		if (likely(napi_schedule_prep(&priv->napi))) {
+			bcmgenet_intrl2_0_writel(priv,
+				UMAC_IRQ_RXDMA_BDONE, INTRL2_CPU_MASK_SET);
+			__napi_schedule(&priv->napi);
+		}
+	}
+	if (priv->irq0_stat &
+			(UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE)) {
+		/* Tx reclaim */
+		bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]);
+	}
+	if (priv->irq0_stat & (UMAC_IRQ_PHY_DET_R |
+				UMAC_IRQ_PHY_DET_F |
+				UMAC_IRQ_LINK_UP |
+				UMAC_IRQ_LINK_DOWN |
+				UMAC_IRQ_HFB_SM |
+				UMAC_IRQ_HFB_MM |
+				UMAC_IRQ_MPD_R)) {
+		/* all other interested interrupts handled in bottom half */
+		schedule_work(&priv->bcmgenet_irq_work);
+	}
+
+	if ((priv->hw_params->flags & GENET_HAS_MDIO_INTR) &&
+		priv->irq0_stat & (UMAC_IRQ_MDIO_DONE | UMAC_IRQ_MDIO_ERROR)) {
+		priv->irq0_stat &= ~(UMAC_IRQ_MDIO_DONE | UMAC_IRQ_MDIO_ERROR);
+		wake_up(&priv->wq);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static void bcmgenet_umac_reset(struct bcmgenet_priv *priv)
+{
+	u32 reg;
+
+	reg = bcmgenet_rbuf_ctrl_get(priv);
+	reg |= BIT(1);
+	bcmgenet_rbuf_ctrl_set(priv, reg);
+	udelay(10);
+
+	reg &= ~BIT(1);
+	bcmgenet_rbuf_ctrl_set(priv, reg);
+	udelay(10);
+}
+
+static void bcmgenet_set_hw_addr(struct bcmgenet_priv *priv,
+				  unsigned char *addr)
+{
+	bcmgenet_umac_writel(priv, (addr[0] << 24) | (addr[1] << 16) |
+			(addr[2] << 8) | addr[3], UMAC_MAC0);
+	bcmgenet_umac_writel(priv, (addr[4] << 8) | addr[5], UMAC_MAC1);
+}
+
+static int bcmgenet_wol_resume(struct bcmgenet_priv *priv)
+{
+	int ret;
+
+	/* From WOL-enabled suspend, switch to regular clock */
+	clk_disable(priv->clk_wol);
+	/* init umac registers to synchronize s/w with h/w */
+	ret = init_umac(priv);
+	if (ret)
+		return ret;
+
+	if (priv->phydev)
+		phy_init_hw(priv->phydev);
+	/* Speed settings must be restored */
+	bcmgenet_mii_config(priv->dev);
+
+	return 0;
+}
+
+/* Returns a reusable dma control register value */
+static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
+{
+	u32 reg;
+	u32 dma_ctrl;
+
+	/* disable DMA */
+	dma_ctrl = 1 << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT) | DMA_EN;
+	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	reg &= ~dma_ctrl;
+	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+
+	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+	reg &= ~dma_ctrl;
+	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+
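+	/* flush the UMAC transmit path by pulsing UMAC_TX_FLUSH */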
+	bcmgenet_umac_writel(priv, 1, UMAC_TX_FLUSH);
+	udelay(10);
+	bcmgenet_umac_writel(priv, 0, UMAC_TX_FLUSH);
+
+	return dma_ctrl;
+}
+
+static void bcmgenet_enable_dma(struct bcmgenet_priv *priv, u32 dma_ctrl)
+{
+	u32 reg;
+
+	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+	reg |= dma_ctrl;
+	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+
+	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	reg |= dma_ctrl;
+	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+}
+
+static int bcmgenet_open(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	unsigned long dma_ctrl;
+	u32 reg;
+	int ret;
+
+	netif_dbg(priv, ifup, dev, "bcmgenet_open\n");
+
+	/* Turn on the clock */
+	if (!IS_ERR(priv->clk))
+		clk_prepare_enable(priv->clk);
+
+	/* take MAC out of reset */
+	bcmgenet_umac_reset(priv);
+
+	ret = init_umac(priv);
+	if (ret)
+		goto err_clk_disable;
+
+	/* disable ethernet MAC while updating its registers */
+	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+	reg &= ~(CMD_TX_EN | CMD_RX_EN);
+	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+
+	bcmgenet_set_hw_addr(priv, dev->dev_addr);
+
+	if (priv->wol_enabled) {
+		ret = bcmgenet_wol_resume(priv);
+		if (ret)
+			return ret;
+	}
+
+	if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL) {
+		reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+		reg |= EXT_ENERGY_DET_MASK;
+		bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+	}
+
+	if (test_and_clear_bit(GENET_POWER_WOL_MAGIC, &priv->wol_enabled))
+		bcmgenet_power_up(priv, GENET_POWER_WOL_MAGIC);
+	if (test_and_clear_bit(GENET_POWER_WOL_ACPI, &priv->wol_enabled))
+		bcmgenet_power_up(priv, GENET_POWER_WOL_ACPI);
+
+	/* Disable RX/TX DMA and flush TX queues */
+	dma_ctrl = bcmgenet_dma_disable(priv);
+
+	/* Reinitialize TDMA and RDMA and SW housekeeping */
+	ret = bcmgenet_init_dma(priv);
+	if (ret) {
+		netdev_err(dev, "failed to initialize DMA\n");
+		goto err_fini_dma;
+	}
+
+	/* Always enable ring 16 - descriptor ring */
+	bcmgenet_enable_dma(priv, dma_ctrl);
+
+	ret = request_irq(priv->irq0, bcmgenet_isr0, IRQF_SHARED,
+			dev->name, priv);
+	if (ret < 0) {
+		netdev_err(dev, "can't request IRQ %d\n", priv->irq0);
+		goto err_fini_dma;
+	}
+
+	ret = request_irq(priv->irq1, bcmgenet_isr1, IRQF_SHARED,
+				dev->name, priv);
+	if (ret < 0) {
+		netdev_err(dev, "can't request IRQ %d\n", priv->irq1);
+		goto err_irq0;
+	}
+
+	/* Start the network engine */
+	napi_enable(&priv->napi);
+
+	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+	reg |= (CMD_TX_EN | CMD_RX_EN);
+	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+
+	/* Make sure we reflect the value of CRC_CMD_FWD */
+	priv->crc_fwd_en = !!(reg & CMD_CRC_FWD);
+
+	device_set_wakeup_capable(&dev->dev, 1);
+
+	if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL)
+		bcmgenet_power_up(priv, GENET_POWER_PASSIVE);
+
+	netif_tx_start_all_queues(dev);
+
+	if (priv->phydev)
+		phy_start(priv->phydev);
+
+	return 0;
+
+err_irq0:
+	free_irq(priv->irq0, priv);
+err_fini_dma:
+	bcmgenet_fini_dma(priv);
+err_clk_disable:
+	if (!IS_ERR(priv->clk))
+		clk_disable_unprepare(priv->clk);
+	return ret;
+}
+
+static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv)
+{
+	int timeout = 0;
+	u32 reg;
+
+	/* Disable TDMA to stop new frames from entering the TX DMA */
+	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	reg &= ~DMA_EN;
+	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+
+	/* Check TDMA status register to confirm TDMA is disabled */
+	while (!(bcmgenet_tdma_readl(priv, DMA_STATUS) & DMA_DISABLED)) {
+		if (timeout++ == 5000) {
+			netdev_warn(priv->dev,
+				"Timed out while disabling TX DMA\n");
+			return -ETIMEDOUT;
+		}
+		udelay(1);
+	}
+
+	/* Wait 10ms for packet drain in both tx and rx dma */
+	usleep_range(10000, 20000);
+
+	/* Disable RDMA */
+	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+	reg &= ~DMA_EN;
+	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+
+	timeout = 0;
+	/* Check RDMA status register to confirm RDMA is disabled */
+	while (!(bcmgenet_rdma_readl(priv, DMA_STATUS) & DMA_DISABLED)) {
+		if (timeout++ == 5000) {
+			netdev_warn(priv->dev,
+				"Timed out while disabling RX DMA\n");
+			return -ETIMEDOUT;
+		}
+		udelay(1);
+	}
+
+	return 0;
+}
+
+static int bcmgenet_close(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int ret;
+	u32 reg;
+
+	netif_dbg(priv, ifdown, dev, "bcmgenet_close\n");
+
+	if (priv->phydev)
+		phy_stop(priv->phydev);
+
+	/* Disable MAC receive */
+	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+	reg &= ~CMD_RX_EN;
+	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+
+	netif_tx_stop_all_queues(dev);
+
+	ret = bcmgenet_dma_teardown(priv);
+	if (ret)
+		return ret;
+
+	/* Disable MAC transmit. TX DMA must be disabled before this */
+	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+	reg &= ~CMD_TX_EN;
+	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+
+	napi_disable(&priv->napi);
+
+	/* tx reclaim */
+	bcmgenet_tx_reclaim_all(dev);
+	bcmgenet_fini_dma(priv);
+
+	free_irq(priv->irq0, priv);
+	free_irq(priv->irq1, priv);
+
+	/* Wait for pending work items to complete - we are stopping
+	 * the clock now. Since interrupts are disabled, no new work
+	 * will be scheduled.
+	 */
+	cancel_work_sync(&priv->bcmgenet_irq_work);
+
+	if (device_may_wakeup(&dev->dev)) {
+		if (priv->wolopts & (WAKE_MAGIC | WAKE_MAGICSECURE))
+			bcmgenet_power_down(priv, GENET_POWER_WOL_MAGIC);
+		if (priv->wolopts & WAKE_ARP)
+			bcmgenet_power_down(priv, GENET_POWER_WOL_ACPI);
+	} else if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL)
+		bcmgenet_power_down(priv, GENET_POWER_PASSIVE);
+
+	if (priv->wol_enabled)
+		clk_enable(priv->clk_wol);
+
+	if (!IS_ERR(priv->clk))
+		clk_disable_unprepare(priv->clk);
+
+	return 0;
+}
+
+static void bcmgenet_timeout(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	BUG_ON(dev == NULL);
+
+	netif_dbg(priv, tx_err, dev, "bcmgenet_timeout\n");
+
+	dev->trans_start = jiffies;
+
+	dev->stats.tx_errors++;
+
+	netif_tx_wake_all_queues(dev);
+}
+
+#define MAX_MC_COUNT	16
+
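+/* Program one MDF (MAC destination filter) entry: each entry spans two
+ * UMAC_MDF_ADDR words (2 + 4 address bytes) and is enabled by setting
+ * bit (MAX_MC_COUNT - entry index) in UMAC_MDF_CTRL.
+ */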
+static inline void bcmgenet_set_mdf_addr(struct bcmgenet_priv *priv,
+					 unsigned char *addr,
+					 int *i,
+					 int *mc)
+{
+	u32 reg;
+
+	bcmgenet_umac_writel(priv, addr[0] << 8 | addr[1],
+			UMAC_MDF_ADDR + (*i * 4));
+	bcmgenet_umac_writel(priv,
+			addr[2] << 24 | addr[3] << 16 |
+			addr[4] << 8 | addr[5],
+			UMAC_MDF_ADDR + ((*i + 1) * 4));
+	reg = bcmgenet_umac_readl(priv, UMAC_MDF_CTRL);
+	reg |= (1 << (MAX_MC_COUNT - *mc));
+	bcmgenet_umac_writel(priv, reg, UMAC_MDF_CTRL);
+	*i += 2;
+	(*mc)++;
+}
+
+static void bcmgenet_set_rx_mode(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct netdev_hw_addr *ha;
+	int i, mc;
+	u32 reg;
+
+	netif_dbg(priv, hw, dev, "%s: %08X\n", __func__, dev->flags);
+
+	/* Promiscuous mode */
+	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+	if (dev->flags & IFF_PROMISC) {
+		reg |= CMD_PROMISC;
+		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+		bcmgenet_umac_writel(priv, 0, UMAC_MDF_CTRL);
+		return;
+	} else {
+		reg &= ~CMD_PROMISC;
+		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+	}
+
+	/* UniMac doesn't support ALLMULTI */
+	if (dev->flags & IFF_ALLMULTI)
+		return;
+
+	/* update MDF filter */
+	i = 0;
+	mc = 0;
+	/* Broadcast */
+	bcmgenet_set_mdf_addr(priv, dev->broadcast, &i, &mc);
+	/* my own address.*/
+	bcmgenet_set_mdf_addr(priv, dev->dev_addr, &i, &mc);
+	/* Unicast list*/
+	if (netdev_uc_count(dev) > (MAX_MC_COUNT - mc))
+		return;
+
+	if (!netdev_uc_empty(dev))
+		netdev_for_each_uc_addr(ha, dev)
+			bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc);
+	/* Multicast */
+	if (netdev_mc_empty(dev) || netdev_mc_count(dev) >= (MAX_MC_COUNT - mc))
+		return;
+
+	netdev_for_each_mc_addr(ha, dev)
+		bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc);
+}
+
+/* Set the hardware MAC address. */
+static int bcmgenet_set_mac_addr(struct net_device *dev, void *p)
+{
+	struct sockaddr *addr = p;
+
+	if (netif_running(dev))
+		return -EBUSY;
+
+	ether_addr_copy(dev->dev_addr, addr->sa_data);
+
+	return 0;
+}
+
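+/* ndo_select_queue hook, trust the queue_mapping already set on the skb,
+ * bcmgenet_xmit() maps it onto the corresponding hardware ring
+ */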
+static u16 bcmgenet_select_queue(struct net_device *dev,
+		struct sk_buff *skb, void *accel_priv)
+{
+	return netif_is_multiqueue(dev) ? skb->queue_mapping : 0;
+}
+
+static const struct net_device_ops bcmgenet_netdev_ops = {
+	.ndo_open = bcmgenet_open,
+	.ndo_stop = bcmgenet_close,
+	.ndo_start_xmit = bcmgenet_xmit,
+	.ndo_select_queue = bcmgenet_select_queue,
+	.ndo_tx_timeout = bcmgenet_timeout,
+	.ndo_set_rx_mode = bcmgenet_set_rx_mode,
+	.ndo_set_mac_address = bcmgenet_set_mac_addr,
+	.ndo_do_ioctl = bcmgenet_ioctl,
+	.ndo_set_features = bcmgenet_set_features,
+};
+
+/* Array of GENET hardware parameters/characteristics */
+static struct bcmgenet_hw_params bcmgenet_hw_params[] = {
+	[GENET_V1] = {
+		.tx_queues = 0,
+		.rx_queues = 0,
+		.bds_cnt = 0,
+		.bp_in_en_shift = 16,
+		.bp_in_mask = 0xffff,
+		.hfb_filter_cnt = 16,
+		.qtag_mask = 0x1F,
+		.hfb_offset = 0x1000,
+		.rdma_offset = 0x2000,
+		.tdma_offset = 0x3000,
+		.words_per_bd = 2,
+	},
+	[GENET_V2] = {
+		.tx_queues = 4,
+		.rx_queues = 4,
+		.bds_cnt = 32,
+		.bp_in_en_shift = 16,
+		.bp_in_mask = 0xffff,
+		.hfb_filter_cnt = 16,
+		.qtag_mask = 0x1F,
+		.tbuf_offset = 0x0600,
+		.hfb_offset = 0x1000,
+		.hfb_reg_offset = 0x2000,
+		.rdma_offset = 0x3000,
+		.tdma_offset = 0x4000,
+		.words_per_bd = 2,
+		.flags = GENET_HAS_EXT,
+	},
+	[GENET_V3] = {
+		.tx_queues = 4,
+		.rx_queues = 4,
+		.bds_cnt = 32,
+		.bp_in_en_shift = 17,
+		.bp_in_mask = 0x1ffff,
+		.hfb_filter_cnt = 48,
+		.qtag_mask = 0x3F,
+		.tbuf_offset = 0x0600,
+		.hfb_offset = 0x8000,
+		.hfb_reg_offset = 0xfc00,
+		.rdma_offset = 0x10000,
+		.tdma_offset = 0x11000,
+		.words_per_bd = 2,
+		.flags = GENET_HAS_EXT | GENET_HAS_MDIO_INTR,
+	},
+	[GENET_V4] = {
+		.tx_queues = 4,
+		.rx_queues = 4,
+		.bds_cnt = 32,
+		.bp_in_en_shift = 17,
+		.bp_in_mask = 0x1ffff,
+		.hfb_filter_cnt = 48,
+		.qtag_mask = 0x3F,
+		.tbuf_offset = 0x0600,
+		.hfb_offset = 0x8000,
+		.hfb_reg_offset = 0xfc00,
+		.rdma_offset = 0x2000,
+		.tdma_offset = 0x4000,
+		.words_per_bd = 3,
+		.flags = GENET_HAS_40BITS | GENET_HAS_EXT | GENET_HAS_MDIO_INTR,
+	},
+};
+
+/* Infer hardware parameters from the detected GENET version */
+static void bcmgenet_set_hw_params(struct bcmgenet_priv *priv)
+{
+	struct bcmgenet_hw_params *params;
+	u32 reg;
+	u8 major;
+
+	if (GENET_IS_V4(priv)) {
+		bcmgenet_dma_regs = bcmgenet_dma_regs_v3plus;
+		genet_dma_ring_regs = genet_dma_ring_regs_v4;
+		priv->dma_rx_chk_bit = DMA_RX_CHK_V3PLUS;
+		priv->version = GENET_V4;
+	} else if (GENET_IS_V3(priv)) {
+		bcmgenet_dma_regs = bcmgenet_dma_regs_v3plus;
+		genet_dma_ring_regs = genet_dma_ring_regs_v123;
+		priv->dma_rx_chk_bit = DMA_RX_CHK_V3PLUS;
+		priv->version = GENET_V3;
+	} else if (GENET_IS_V2(priv)) {
+		bcmgenet_dma_regs = bcmgenet_dma_regs_v2;
+		genet_dma_ring_regs = genet_dma_ring_regs_v123;
+		priv->dma_rx_chk_bit = DMA_RX_CHK_V12;
+		priv->version = GENET_V2;
+	} else if (GENET_IS_V1(priv)) {
+		bcmgenet_dma_regs = bcmgenet_dma_regs_v1;
+		genet_dma_ring_regs = genet_dma_ring_regs_v123;
+		priv->dma_rx_chk_bit = DMA_RX_CHK_V12;
+		priv->version = GENET_V1;
+	}
+
+	/* enum genet_version starts at 1 */
+	priv->hw_params = &bcmgenet_hw_params[priv->version];
+	params = priv->hw_params;
+
+	/* Read GENET HW version */
+	reg = bcmgenet_sys_readl(priv, SYS_REV_CTRL);
+	major = (reg >> 24 & 0x0f);
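+	/* the raw major revision encodes 5 for GENET_V4 class hardware and
+	 * 0 for GENET_V1, remap it to enum genet_version for the check below
+	 */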
+	if (major == 5)
+		major = 4;
+	else if (major == 0)
+		major = 1;
+	if (major != priv->version) {
+		dev_err(&priv->pdev->dev,
+			"GENET version mismatch, got: %d, configured for: %d\n",
+			major, priv->version);
+	}
+
+	/* Print the GENET core version */
+	dev_info(&priv->pdev->dev, "GENET " GENET_VER_FMT,
+		major, (reg >> 16) & 0x0f, reg & 0xffff);
+
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+	if (!(params->flags & GENET_HAS_40BITS))
+		pr_warn("GENET does not support 40-bits PA\n");
+#endif
+
+	pr_debug("Configuration for version: %d\n"
+		"TXq: %1d, RXq: %1d, BDs: %1d\n"
+		"BP << en: %2d, BP msk: 0x%05x\n"
+		"HFB count: %2d, QTAQ msk: 0x%05x\n"
+		"TBUF: 0x%04x, HFB: 0x%04x, HFBreg: 0x%04x\n"
+		"RDMA: 0x%05x, TDMA: 0x%05x\n"
+		"Words/BD: %d\n",
+		priv->version,
+		params->tx_queues, params->rx_queues, params->bds_cnt,
+		params->bp_in_en_shift, params->bp_in_mask,
+		params->hfb_filter_cnt, params->qtag_mask,
+		params->tbuf_offset, params->hfb_offset,
+		params->hfb_reg_offset,
+		params->rdma_offset, params->tdma_offset,
+		params->words_per_bd);
+}
+
+static int bcmgenet_probe(struct platform_device *pdev)
+{
+	struct device_node *dn = pdev->dev.of_node;
+	struct bcmgenet_priv *priv;
+	struct net_device *dev;
+	const void *macaddr;
+	struct resource *r;
+	int err = -EIO;
+
+	/* Up to GENET_MAX_MQ_CNT + 1 TX queues and a single RX queue */
+	dev = alloc_etherdev_mqs(sizeof(*priv), GENET_MAX_MQ_CNT + 1, 1);
+	if (!dev) {
+		dev_err(&pdev->dev, "can't allocate net device\n");
+		return -ENOMEM;
+	}
+
+	priv = netdev_priv(dev);
+	priv->irq0 = platform_get_irq(pdev, 0);
+	priv->irq1 = platform_get_irq(pdev, 1);
+	if (!priv->irq0 || !priv->irq1) {
+		dev_err(&pdev->dev, "can't find IRQs\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	macaddr = of_get_mac_address(dn);
+	if (!macaddr) {
+		dev_err(&pdev->dev, "can't find MAC address\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	priv->base = devm_request_and_ioremap(&pdev->dev, r);
+	if (!priv->base) {
+		dev_err(&pdev->dev, "can't ioremap\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	dev->base_addr = (unsigned long)priv->base;
+	SET_NETDEV_DEV(dev, &pdev->dev);
+	dev_set_drvdata(&pdev->dev, dev);
+	ether_addr_copy(dev->dev_addr, macaddr);
+	dev->irq = priv->irq0;
+	dev->watchdog_timeo = 2 * HZ;
+	SET_ETHTOOL_OPS(dev, &bcmgenet_ethtool_ops);
+	dev->netdev_ops = &bcmgenet_netdev_ops;
+	netif_napi_add(dev, &priv->napi, bcmgenet_poll, 64);
+
+	priv->msg_enable = netif_msg_init(-1, GENET_MSG_DEFAULT);
+
+	/* Set hardware features */
+	dev->hw_features |= NETIF_F_SG | NETIF_F_IP_CSUM |
+		NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM;
+
+	/* Set the needed headroom to account for any possible
+	 * features being enabled or disabled at runtime
+	 */
+	dev->needed_headroom += 64;
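+	/* 64 bytes matches the status block bcmgenet_put_tx_csum() prepends */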
+
+	netdev_boot_setup_check(dev);
+
+	priv->dev = dev;
+	priv->pdev = pdev;
+
+	if (of_device_is_compatible(dn, "brcm,genet-v4"))
+		priv->version = GENET_V4;
+	else if (of_device_is_compatible(dn, "brcm,genet-v3"))
+		priv->version = GENET_V3;
+	else if (of_device_is_compatible(dn, "brcm,genet-v2"))
+		priv->version = GENET_V2;
+	else if (of_device_is_compatible(dn, "brcm,genet-v1"))
+		priv->version = GENET_V1;
+	else {
+		dev_err(&pdev->dev, "unknown GENET version\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	bcmgenet_set_hw_params(priv);
+
+	spin_lock_init(&priv->lock);
+	spin_lock_init(&priv->bh_lock);
+	mutex_init(&priv->mib_mutex);
+	/* Mii wait queue */
+	init_waitqueue_head(&priv->wq);
+	/* Always use RX_BUF_LENGTH (2KB) buffer for all chips */
+	priv->rx_buf_len = RX_BUF_LENGTH;
+	INIT_WORK(&priv->bcmgenet_irq_work, bcmgenet_irq_task);
+
+	priv->clk = devm_clk_get(&priv->pdev->dev, "enet");
+	if (IS_ERR(priv->clk))
+		dev_warn(&priv->pdev->dev, "failed to get enet clock\n");
+
+	priv->clk_wol = devm_clk_get(&priv->pdev->dev, "enet-wol");
+	if (IS_ERR(priv->clk_wol))
+		dev_warn(&priv->pdev->dev, "failed to get enet-wol clock\n");
+
+	if (!IS_ERR(priv->clk))
+		clk_prepare_enable(priv->clk);
+
+	err = reset_umac(priv);
+	if (err)
+		goto err_clk_disable;
+
+	err = bcmgenet_mii_init(dev);
+	if (err)
+		goto err_clk_disable;
+
+	/* set up the number of real queues + 1 (GENET_V1 has 0 hardware
+	 * queues, just the ring 16 descriptor-based TX queue)
+	 */
+	netif_set_real_num_tx_queues(priv->dev, priv->hw_params->tx_queues + 1);
+	netif_set_real_num_rx_queues(priv->dev, priv->hw_params->rx_queues + 1);
+
+	err = register_netdev(dev);
+	if (err)
+		goto err_clk_disable;
+
+	/* Turn off the clocks */
+	if (!IS_ERR(priv->clk))
+		clk_disable_unprepare(priv->clk);
+
+	return err;
+
+err_clk_disable:
+	if (!IS_ERR(priv->clk))
+		clk_disable_unprepare(priv->clk);
+err:
+	free_netdev(dev);
+	return err;
+}
+
+static int bcmgenet_remove(struct platform_device *pdev)
+{
+	struct bcmgenet_priv *priv = dev_to_priv(&pdev->dev);
+
+	dev_set_drvdata(&pdev->dev, NULL);
+	unregister_netdev(priv->dev);
+	bcmgenet_mii_exit(priv->dev);
+	free_netdev(priv->dev);
+
+	return 0;
+}
+
+static const struct of_device_id bcmgenet_match[] = {
+	{ .compatible = "brcm,genet-v1", },
+	{ .compatible = "brcm,genet-v2", },
+	{ .compatible = "brcm,genet-v3", },
+	{ .compatible = "brcm,genet-v4", },
+	{ },
+};
+
+static struct platform_driver bcmgenet_driver = {
+	.probe	= bcmgenet_probe,
+	.remove	= bcmgenet_remove,
+	.driver	= {
+		.name	= "bcmgenet",
+		.owner	= THIS_MODULE,
+		.of_match_table = bcmgenet_match,
+	},
+};
+module_platform_driver(bcmgenet_driver);
+
+MODULE_AUTHOR("Broadcom Corporation");
+MODULE_DESCRIPTION("Broadcom GENET Ethernet controller driver");
+MODULE_ALIAS("platform:bcmgenet");
+MODULE_LICENSE("GPL");
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH net-next v2 06/10] net: bcmgenet: add main driver file
@ 2014-02-13  5:29   ` Florian Fainelli
  0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

This patch adds the BCMGENET main driver file which supports the
following:

- GENET hardware from V1 to V4
- support for reading the UniMAC MIB counters statistics
- support for the 5 transmit queues
- support for RX/TX checksum offload and SG

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- use module_platform_driver boilerplate macro
- renamed bcmgenet_plat_drv to bcmgenet_driver
- renamed bcmgenet_drv_{probe,remove} to bcmgenet_{probe,remove}
- removed priv->phy_type usage and use priv->phy_interface which
  contains the exact same value
- removed debug module parameters, unused
- added MODULE_{AUTHOR,ALIAS,DESCRIPTION} macros
- remove hardcoded queue index check in bcmgenet_xmit

 drivers/net/ethernet/broadcom/genet/bcmgenet.c | 2664 ++++++++++++++++++++++++
 1 file changed, 2664 insertions(+)
 create mode 100644 drivers/net/ethernet/broadcom/genet/bcmgenet.c

diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
new file mode 100644
index 0000000..ab71e81
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -0,0 +1,2664 @@
+/*
+ * Broadcom GENET (Gigabit Ethernet) controller driver
+ *
+ * Copyright (c) 2014 Broadcom Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+
+#define pr_fmt(fmt)				"bcmgenet: " fmt
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/types.h>
+#include <linux/fcntl.h>
+#include <linux/interrupt.h>
+#include <linux/string.h>
+#include <linux/if_ether.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/pm.h>
+#include <linux/clk.h>
+#include <linux/version.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_net.h>
+#include <linux/of_platform.h>
+#include <net/arp.h>
+
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+#include <linux/netdevice.h>
+#include <linux/inetdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/phy.h>
+
+#include <asm/unaligned.h>
+
+#include "bcmgenet.h"
+
+/* Maximum number of hardware queues, downsized if needed */
+#define GENET_MAX_MQ_CNT	4
+
+/* Default highest priority queue for multi queue support */
+#define GENET_Q0_PRIORITY	0
+
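+/* Descriptors left to the default ring 16 once the priority tx queues have
+ * claimed theirs; note the macro expects a local "priv" in scope.
+ */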
+#define GENET_DEFAULT_BD_CNT	\
+	(TOTAL_DESC - priv->hw_params->tx_queues * priv->hw_params->bds_cnt)
+
+#define RX_BUF_LENGTH		2048
+#define SKB_ALIGNMENT		32
+
+/* Tx/Rx DMA register offset, skip 256 descriptors */
+#define WORDS_PER_BD(p)		(p->hw_params->words_per_bd)
+#define DMA_DESC_SIZE		(WORDS_PER_BD(priv) * sizeof(u32))
+
+#define GENET_TDMA_REG_OFF	(priv->hw_params->tdma_offset + \
+				TOTAL_DESC * DMA_DESC_SIZE)
+
+#define GENET_RDMA_REG_OFF	(priv->hw_params->rdma_offset + \
+				TOTAL_DESC * DMA_DESC_SIZE)
+
+static inline void dmadesc_set_length_status(struct bcmgenet_priv *priv,
+						void __iomem *d, u32 value)
+{
+	__raw_writel(value, d + DMA_DESC_LENGTH_STATUS);
+}
+
+static inline u32 dmadesc_get_length_status(struct bcmgenet_priv *priv,
+						void __iomem *d)
+{
+	return __raw_readl(d + DMA_DESC_LENGTH_STATUS);
+}
+
+static inline void dmadesc_set_addr(struct bcmgenet_priv *priv,
+				    void __iomem *d,
+				    dma_addr_t addr)
+{
+	__raw_writel(lower_32_bits(addr), d + DMA_DESC_ADDRESS_LO);
+
+	/* Register writes to GISB bus can take a couple hundred nanoseconds
+	 * and are done for each packet, save these expensive writes unless
+	 * the platform is explicitly configured for 64-bits/LPAE.
+	 */
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+	if (priv->hw_params->flags & GENET_HAS_40BITS)
+		__raw_writel(upper_32_bits(addr), d + DMA_DESC_ADDRESS_HI);
+#endif
+}
+
+/* Combined address + length/status setter */
+static inline void dmadesc_set(struct bcmgenet_priv *priv,
+				void __iomem *d, dma_addr_t addr, u32 val)
+{
+	dmadesc_set_length_status(priv, d, val);
+	dmadesc_set_addr(priv, d, addr);
+}
+
+static inline dma_addr_t dmadesc_get_addr(struct bcmgenet_priv *priv,
+					  void __iomem *d)
+{
+	dma_addr_t addr;
+
+	addr = __raw_readl(d + DMA_DESC_ADDRESS_LO);
+
+	/* Register writes to GISB bus can take a couple hundred nanoseconds
+	 * and are done for each packet, save these expensive writes unless
+	 * the platform is explicitly configured for 64-bits/LPAE.
+	 */
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+	if (priv->hw_params->flags & GENET_HAS_40BITS)
+		addr |= (u64)__raw_readl(d + DMA_DESC_ADDRESS_HI) << 32;
+#endif
+	return addr;
+}
+
+#define GENET_VER_FMT	"%1d.%1d EPHY: 0x%04x"
+
+#define GENET_MSG_DEFAULT	(NETIF_MSG_DRV | NETIF_MSG_PROBE | \
+				NETIF_MSG_LINK)
+
+static inline u32 bcmgenet_rbuf_ctrl_get(struct bcmgenet_priv *priv)
+{
+	if (GENET_IS_V1(priv))
+		return bcmgenet_rbuf_readl(priv, RBUF_FLUSH_CTRL_V1);
+	else
+		return bcmgenet_sys_readl(priv, SYS_RBUF_FLUSH_CTRL);
+}
+
+static inline void bcmgenet_rbuf_ctrl_set(struct bcmgenet_priv *priv, u32 val)
+{
+	if (GENET_IS_V1(priv))
+		bcmgenet_rbuf_writel(priv, val, RBUF_FLUSH_CTRL_V1);
+	else
+		bcmgenet_sys_writel(priv, val, SYS_RBUF_FLUSH_CTRL);
+}
+
+/* These macros are defined to deal with register map change
+ * between GENET1.1 and GENET2. Only those currently being used
+ * by driver are defined.
+ */
+static inline u32 bcmgenet_tbuf_ctrl_get(struct bcmgenet_priv *priv)
+{
+	if (GENET_IS_V1(priv))
+		return bcmgenet_rbuf_readl(priv, TBUF_CTRL_V1);
+	else
+		return __raw_readl(priv->base +
+				priv->hw_params->tbuf_offset + TBUF_CTRL);
+}
+
+static inline void bcmgenet_tbuf_ctrl_set(struct bcmgenet_priv *priv, u32 val)
+{
+	if (GENET_IS_V1(priv))
+		bcmgenet_rbuf_writel(priv, val, TBUF_CTRL_V1);
+	else
+		__raw_writel(val, priv->base +
+				priv->hw_params->tbuf_offset + TBUF_CTRL);
+}
+
+static inline u32 bcmgenet_bp_mc_get(struct bcmgenet_priv *priv)
+{
+	if (GENET_IS_V1(priv))
+		return bcmgenet_rbuf_readl(priv, TBUF_BP_MC_V1);
+	else
+		return __raw_readl(priv->base +
+				priv->hw_params->tbuf_offset + TBUF_BP_MC);
+}
+
+static inline void bcmgenet_bp_mc_set(struct bcmgenet_priv *priv, u32 val)
+{
+	if (GENET_IS_V1(priv))
+		bcmgenet_rbuf_writel(priv, val, TBUF_BP_MC_V1);
+	else
+		__raw_writel(val, priv->base +
+				priv->hw_params->tbuf_offset + TBUF_BP_MC);
+}
+
+/* RX/TX DMA register accessors */
+enum dma_reg {
+	DMA_RING_CFG = 0,
+	DMA_CTRL,
+	DMA_STATUS,
+	DMA_SCB_BURST_SIZE,
+	DMA_ARB_CTRL,
+	DMA_PRIORITY,
+	DMA_RING_PRIORITY,
+};
+
+static const u8 bcmgenet_dma_regs_v3plus[] = {
+	[DMA_RING_CFG]		= 0x00,
+	[DMA_CTRL]		= 0x04,
+	[DMA_STATUS]		= 0x08,
+	[DMA_SCB_BURST_SIZE]	= 0x0C,
+	[DMA_ARB_CTRL]		= 0x2C,
+	[DMA_PRIORITY]		= 0x30,
+	[DMA_RING_PRIORITY]	= 0x38,
+};
+
+static const u8 bcmgenet_dma_regs_v2[] = {
+	[DMA_RING_CFG]		= 0x00,
+	[DMA_CTRL]		= 0x04,
+	[DMA_STATUS]		= 0x08,
+	[DMA_SCB_BURST_SIZE]	= 0x0C,
+	[DMA_ARB_CTRL]		= 0x30,
+	[DMA_PRIORITY]		= 0x34,
+	[DMA_RING_PRIORITY]	= 0x3C,
+};
+
+static const u8 bcmgenet_dma_regs_v1[] = {
+	[DMA_CTRL]		= 0x00,
+	[DMA_STATUS]		= 0x04,
+	[DMA_SCB_BURST_SIZE]	= 0x0C,
+	[DMA_ARB_CTRL]		= 0x30,
+	[DMA_PRIORITY]		= 0x34,
+	[DMA_RING_PRIORITY]	= 0x3C,
+};
+
+/* Set at runtime once bcmgenet version is known */
+static const u8 *bcmgenet_dma_regs;
+
+static inline struct bcmgenet_priv *dev_to_priv(struct device *dev)
+{
+	return netdev_priv(dev_get_drvdata(dev));
+}
+
+static inline u32 bcmgenet_tdma_readl(struct bcmgenet_priv *priv,
+					enum dma_reg r)
+{
+	return __raw_readl(priv->base + GENET_TDMA_REG_OFF +
+			DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
+}
+
+static inline void bcmgenet_tdma_writel(struct bcmgenet_priv *priv,
+					u32 val, enum dma_reg r)
+{
+	__raw_writel(val, priv->base + GENET_TDMA_REG_OFF +
+			DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
+}
+
+static inline u32 bcmgenet_rdma_readl(struct bcmgenet_priv *priv,
+					enum dma_reg r)
+{
+	return __raw_readl(priv->base + GENET_RDMA_REG_OFF +
+			DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
+}
+
+static inline void bcmgenet_rdma_writel(struct bcmgenet_priv *priv,
+					u32 val, enum dma_reg r)
+{
+	__raw_writel(val, priv->base + GENET_RDMA_REG_OFF +
+			DMA_RINGS_SIZE + bcmgenet_dma_regs[r]);
+}
+
+/* RDMA/TDMA ring registers and accessors
+ * we merge the common fields and just prefix with T/D the registers
+ * having different meaning depending on the direction
+ */
+enum dma_ring_reg {
+	TDMA_READ_PTR = 0,
+	RDMA_WRITE_PTR = TDMA_READ_PTR,
+	TDMA_READ_PTR_HI,
+	RDMA_WRITE_PTR_HI = TDMA_READ_PTR_HI,
+	TDMA_CONS_INDEX,
+	RDMA_PROD_INDEX = TDMA_CONS_INDEX,
+	TDMA_PROD_INDEX,
+	RDMA_CONS_INDEX = TDMA_PROD_INDEX,
+	DMA_RING_BUF_SIZE,
+	DMA_START_ADDR,
+	DMA_START_ADDR_HI,
+	DMA_END_ADDR,
+	DMA_END_ADDR_HI,
+	DMA_MBUF_DONE_THRESH,
+	TDMA_FLOW_PERIOD,
+	RDMA_XON_XOFF_THRESH = TDMA_FLOW_PERIOD,
+	TDMA_WRITE_PTR,
+	RDMA_READ_PTR = TDMA_WRITE_PTR,
+	TDMA_WRITE_PTR_HI,
+	RDMA_READ_PTR_HI = TDMA_WRITE_PTR_HI
+};
+
+/* GENET v4 supports 40-bits pointer addressing
+ * for obvious reasons the LO and HI word parts
+ * are contiguous, but this offsets the other
+ * registers.
+ */
+static const u8 genet_dma_ring_regs_v4[] = {
+	[TDMA_READ_PTR]			= 0x00,
+	[TDMA_READ_PTR_HI]		= 0x04,
+	[TDMA_CONS_INDEX]		= 0x08,
+	[TDMA_PROD_INDEX]		= 0x0C,
+	[DMA_RING_BUF_SIZE]		= 0x10,
+	[DMA_START_ADDR]		= 0x14,
+	[DMA_START_ADDR_HI]		= 0x18,
+	[DMA_END_ADDR]			= 0x1C,
+	[DMA_END_ADDR_HI]		= 0x20,
+	[DMA_MBUF_DONE_THRESH]		= 0x24,
+	[TDMA_FLOW_PERIOD]		= 0x28,
+	[TDMA_WRITE_PTR]		= 0x2C,
+	[TDMA_WRITE_PTR_HI]		= 0x30,
+};
+
+static const u8 genet_dma_ring_regs_v123[] = {
+	[TDMA_READ_PTR]			= 0x00,
+	[TDMA_CONS_INDEX]		= 0x04,
+	[TDMA_PROD_INDEX]		= 0x08,
+	[DMA_RING_BUF_SIZE]		= 0x0C,
+	[DMA_START_ADDR]		= 0x10,
+	[DMA_END_ADDR]			= 0x14,
+	[DMA_MBUF_DONE_THRESH]		= 0x18,
+	[TDMA_FLOW_PERIOD]		= 0x1C,
+	[TDMA_WRITE_PTR]		= 0x20,
+};
+
+/* Set at runtime once GENET version is known */
+static const u8 *genet_dma_ring_regs;
+
+static inline u32 bcmgenet_tdma_ring_readl(struct bcmgenet_priv *priv,
+						unsigned int ring,
+						enum dma_ring_reg r)
+{
+	return __raw_readl(priv->base + GENET_TDMA_REG_OFF +
+			(DMA_RING_SIZE * ring) +
+			genet_dma_ring_regs[r]);
+}
+
+static inline void bcmgenet_tdma_ring_writel(struct bcmgenet_priv *priv,
+						unsigned int ring,
+						u32 val,
+						enum dma_ring_reg r)
+{
+	__raw_writel(val, priv->base + GENET_TDMA_REG_OFF +
+			(DMA_RING_SIZE * ring) +
+			genet_dma_ring_regs[r]);
+}
+
+static inline u32 bcmgenet_rdma_ring_readl(struct bcmgenet_priv *priv,
+						unsigned int ring,
+						enum dma_ring_reg r)
+{
+	return __raw_readl(priv->base + GENET_RDMA_REG_OFF +
+			(DMA_RING_SIZE * ring) +
+			genet_dma_ring_regs[r]);
+}
+
+static inline void bcmgenet_rdma_ring_writel(struct bcmgenet_priv *priv,
+						unsigned int ring,
+						u32 val,
+						enum dma_ring_reg r)
+{
+	__raw_writel(val, priv->base + GENET_RDMA_REG_OFF +
+			(DMA_RING_SIZE * ring) +
+			genet_dma_ring_regs[r]);
+}
+
+static int bcmgenet_get_settings(struct net_device *dev,
+		struct ethtool_cmd *cmd)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	if (!netif_running(dev))
+		return -EINVAL;
+
+	if (!priv->phydev)
+		return -ENODEV;
+
+	return phy_ethtool_gset(priv->phydev, cmd);
+}
+
+static int bcmgenet_set_settings(struct net_device *dev,
+		struct ethtool_cmd *cmd)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	if (!netif_running(dev))
+		return -EINVAL;
+
+	if (!priv->phydev)
+		return -ENODEV;
+
+	return phy_ethtool_sset(priv->phydev, cmd);
+}
+
+static int bcmgenet_set_rx_csum(struct net_device *dev,
+				netdev_features_t wanted)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	u32 rbuf_chk_ctrl;
+	int rx_csum_en;
+
+	rx_csum_en = !!(wanted & NETIF_F_RXCSUM);
+
+	spin_lock_bh(&priv->bh_lock);
+	rbuf_chk_ctrl = bcmgenet_rbuf_readl(priv, RBUF_CHK_CTRL);
+
+	/* enable rx checksumming */
+	if (!rx_csum_en)
+		rbuf_chk_ctrl &= ~RBUF_RXCHK_EN;
+	else
+		rbuf_chk_ctrl |= RBUF_RXCHK_EN;
+	priv->desc_rxchk_en = rx_csum_en;
+	bcmgenet_rbuf_writel(priv, rbuf_chk_ctrl, RBUF_CHK_CTRL);
+
+	spin_unlock_bh(&priv->bh_lock);
+
+	return 0;
+}
+
+static int bcmgenet_set_tx_csum(struct net_device *dev,
+				netdev_features_t wanted)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int desc_64b_en;
+	u32 tbuf_ctrl, rbuf_ctrl;
+
+	spin_lock_bh(&priv->bh_lock);
+	tbuf_ctrl = bcmgenet_tbuf_ctrl_get(priv);
+	rbuf_ctrl = bcmgenet_rbuf_readl(priv, RBUF_CTRL);
+
+	desc_64b_en = !!(wanted & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
+
+	/* enable 64bytes descriptor in both directions (RBUF and TBUF) */
+	if (!desc_64b_en) {
+		tbuf_ctrl &= ~RBUF_64B_EN;
+		rbuf_ctrl &= ~RBUF_64B_EN;
+	} else {
+		tbuf_ctrl |= RBUF_64B_EN;
+		rbuf_ctrl |= RBUF_64B_EN;
+	}
+	priv->desc_64b_en = desc_64b_en;
+
+	bcmgenet_tbuf_ctrl_set(priv, tbuf_ctrl);
+	bcmgenet_rbuf_writel(priv, rbuf_ctrl, RBUF_CTRL);
+	spin_unlock_bh(&priv->bh_lock);
+	return 0;
+}
+
+static int bcmgenet_set_features(struct net_device *dev,
+		netdev_features_t features)
+{
+	netdev_features_t changed = features ^ dev->features;
+	netdev_features_t wanted = dev->wanted_features;
+	int ret = 0;
+
+	if (changed & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM))
+		ret = bcmgenet_set_tx_csum(dev, wanted);
+	if (changed & (NETIF_F_RXCSUM))
+		ret = bcmgenet_set_rx_csum(dev, wanted);
+
+	return ret;
+}
+
+static u32 bcmgenet_get_msglevel(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	return priv->msg_enable;
+}
+
+static void bcmgenet_set_msglevel(struct net_device *dev, u32 level)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	priv->msg_enable = level;
+}
+
+/* standard ethtool support functions. */
+enum bcmgenet_stat_type {
+	BCMGENET_STAT_NETDEV = -1,
+	BCMGENET_STAT_MIB_RX,
+	BCMGENET_STAT_MIB_TX,
+	BCMGENET_STAT_RUNT,
+	BCMGENET_STAT_MISC,
+};
+
+struct bcmgenet_stats {
+	char stat_string[ETH_GSTRING_LEN];
+	int stat_sizeof;
+	int stat_offset;
+	enum bcmgenet_stat_type type;
+	/* reg offset from UMAC base for misc counters */
+	u16 reg_offset;
+};
+
+#define STAT_NETDEV(m) { \
+	.stat_string = __stringify(m), \
+	.stat_sizeof = sizeof(((struct net_device_stats *)0)->m), \
+	.stat_offset = offsetof(struct net_device_stats, m), \
+	.type = BCMGENET_STAT_NETDEV, \
+}
+
+#define STAT_GENET_MIB(str, m, _type) { \
+	.stat_string = str, \
+	.stat_sizeof = sizeof(((struct bcmgenet_priv *)0)->m), \
+	.stat_offset = offsetof(struct bcmgenet_priv, m), \
+	.type = _type, \
+}
+
+#define STAT_GENET_MIB_RX(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_MIB_RX)
+#define STAT_GENET_MIB_TX(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_MIB_TX)
+#define STAT_GENET_RUNT(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_RUNT)
+
+#define STAT_GENET_MISC(str, m, offset) { \
+	.stat_string = str, \
+	.stat_sizeof = sizeof(((struct bcmgenet_priv *)0)->m), \
+	.stat_offset = offsetof(struct bcmgenet_priv, m), \
+	.type = BCMGENET_STAT_MISC, \
+	.reg_offset = offset, \
+}
+
+/* There is a 0xC gap between the end of the RX and the beginning of the TX
+ * stats, and another between the end of the TX stats and the beginning of
+ * the RX RUNT stats
+ */
+#define BCMGENET_STAT_OFFSET	0xc
+
+/* Hardware counters must be kept in sync because the order/offset
+ * is important here (order in structure declaration = order in hardware)
+ */
+static const struct bcmgenet_stats bcmgenet_gstrings_stats[] = {
+	/* general stats */
+	STAT_NETDEV(rx_packets),
+	STAT_NETDEV(tx_packets),
+	STAT_NETDEV(rx_bytes),
+	STAT_NETDEV(tx_bytes),
+	STAT_NETDEV(rx_errors),
+	STAT_NETDEV(tx_errors),
+	STAT_NETDEV(rx_dropped),
+	STAT_NETDEV(tx_dropped),
+	STAT_NETDEV(multicast),
+	/* UniMAC RSV counters */
+	STAT_GENET_MIB_RX("rx_64_octets", mib.rx.pkt_cnt.cnt_64),
+	STAT_GENET_MIB_RX("rx_65_127_oct", mib.rx.pkt_cnt.cnt_127),
+	STAT_GENET_MIB_RX("rx_128_255_oct", mib.rx.pkt_cnt.cnt_255),
+	STAT_GENET_MIB_RX("rx_256_511_oct", mib.rx.pkt_cnt.cnt_511),
+	STAT_GENET_MIB_RX("rx_512_1023_oct", mib.rx.pkt_cnt.cnt_1023),
+	STAT_GENET_MIB_RX("rx_1024_1518_oct", mib.rx.pkt_cnt.cnt_1518),
+	STAT_GENET_MIB_RX("rx_vlan_1519_1522_oct", mib.rx.pkt_cnt.cnt_mgv),
+	STAT_GENET_MIB_RX("rx_1522_2047_oct", mib.rx.pkt_cnt.cnt_2047),
+	STAT_GENET_MIB_RX("rx_2048_4095_oct", mib.rx.pkt_cnt.cnt_4095),
+	STAT_GENET_MIB_RX("rx_4096_9216_oct", mib.rx.pkt_cnt.cnt_9216),
+	STAT_GENET_MIB_RX("rx_pkts", mib.rx.pkt),
+	STAT_GENET_MIB_RX("rx_bytes", mib.rx.bytes),
+	STAT_GENET_MIB_RX("rx_multicast", mib.rx.mca),
+	STAT_GENET_MIB_RX("rx_broadcast", mib.rx.bca),
+	STAT_GENET_MIB_RX("rx_fcs", mib.rx.fcs),
+	STAT_GENET_MIB_RX("rx_control", mib.rx.cf),
+	STAT_GENET_MIB_RX("rx_pause", mib.rx.pf),
+	STAT_GENET_MIB_RX("rx_unknown", mib.rx.uo),
+	STAT_GENET_MIB_RX("rx_align", mib.rx.aln),
+	STAT_GENET_MIB_RX("rx_outrange", mib.rx.flr),
+	STAT_GENET_MIB_RX("rx_code", mib.rx.cde),
+	STAT_GENET_MIB_RX("rx_carrier", mib.rx.fcr),
+	STAT_GENET_MIB_RX("rx_oversize", mib.rx.ovr),
+	STAT_GENET_MIB_RX("rx_jabber", mib.rx.jbr),
+	STAT_GENET_MIB_RX("rx_mtu_err", mib.rx.mtue),
+	STAT_GENET_MIB_RX("rx_good_pkts", mib.rx.pok),
+	STAT_GENET_MIB_RX("rx_unicast", mib.rx.uc),
+	STAT_GENET_MIB_RX("rx_ppp", mib.rx.ppp),
+	STAT_GENET_MIB_RX("rx_crc", mib.rx.rcrc),
+	/* UniMAC TSV counters */
+	STAT_GENET_MIB_TX("tx_64_octets", mib.tx.pkt_cnt.cnt_64),
+	STAT_GENET_MIB_TX("tx_65_127_oct", mib.tx.pkt_cnt.cnt_127),
+	STAT_GENET_MIB_TX("tx_128_255_oct", mib.tx.pkt_cnt.cnt_255),
+	STAT_GENET_MIB_TX("tx_256_511_oct", mib.tx.pkt_cnt.cnt_511),
+	STAT_GENET_MIB_TX("tx_512_1023_oct", mib.tx.pkt_cnt.cnt_1023),
+	STAT_GENET_MIB_TX("tx_1024_1518_oct", mib.tx.pkt_cnt.cnt_1518),
+	STAT_GENET_MIB_TX("tx_vlan_1519_1522_oct", mib.tx.pkt_cnt.cnt_mgv),
+	STAT_GENET_MIB_TX("tx_1522_2047_oct", mib.tx.pkt_cnt.cnt_2047),
+	STAT_GENET_MIB_TX("tx_2048_4095_oct", mib.tx.pkt_cnt.cnt_4095),
+	STAT_GENET_MIB_TX("tx_4096_9216_oct", mib.tx.pkt_cnt.cnt_9216),
+	STAT_GENET_MIB_TX("tx_pkts", mib.tx.pkts),
+	STAT_GENET_MIB_TX("tx_multicast", mib.tx.mca),
+	STAT_GENET_MIB_TX("tx_broadcast", mib.tx.bca),
+	STAT_GENET_MIB_TX("tx_pause", mib.tx.pf),
+	STAT_GENET_MIB_TX("tx_control", mib.tx.cf),
+	STAT_GENET_MIB_TX("tx_fcs_err", mib.tx.fcs),
+	STAT_GENET_MIB_TX("tx_oversize", mib.tx.ovr),
+	STAT_GENET_MIB_TX("tx_defer", mib.tx.drf),
+	STAT_GENET_MIB_TX("tx_excess_defer", mib.tx.edf),
+	STAT_GENET_MIB_TX("tx_single_col", mib.tx.scl),
+	STAT_GENET_MIB_TX("tx_multi_col", mib.tx.mcl),
+	STAT_GENET_MIB_TX("tx_late_col", mib.tx.lcl),
+	STAT_GENET_MIB_TX("tx_excess_col", mib.tx.ecl),
+	STAT_GENET_MIB_TX("tx_frags", mib.tx.frg),
+	STAT_GENET_MIB_TX("tx_total_col", mib.tx.ncl),
+	STAT_GENET_MIB_TX("tx_jabber", mib.tx.jbr),
+	STAT_GENET_MIB_TX("tx_bytes", mib.tx.bytes),
+	STAT_GENET_MIB_TX("tx_good_pkts", mib.tx.pok),
+	STAT_GENET_MIB_TX("tx_unicast", mib.tx.uc),
+	/* UniMAC RUNT counters */
+	STAT_GENET_RUNT("rx_runt_pkts", mib.rx_runt_cnt),
+	STAT_GENET_RUNT("rx_runt_valid_fcs", mib.rx_runt_fcs),
+	STAT_GENET_RUNT("rx_runt_inval_fcs_align", mib.rx_runt_fcs_align),
+	STAT_GENET_RUNT("rx_runt_bytes", mib.rx_runt_bytes),
+	/* Misc UniMAC counters */
+	STAT_GENET_MISC("rbuf_ovflow_cnt", mib.rbuf_ovflow_cnt,
+			UMAC_RBUF_OVFL_CNT),
+	STAT_GENET_MISC("rbuf_err_cnt", mib.rbuf_err_cnt, UMAC_RBUF_ERR_CNT),
+	STAT_GENET_MISC("mdf_err_cnt", mib.mdf_err_cnt, UMAC_MDF_ERR_CNT),
+};
+
+#define BCMGENET_STATS_LEN	ARRAY_SIZE(bcmgenet_gstrings_stats)
+
+static void bcmgenet_get_drvinfo(struct net_device *dev,
+		struct ethtool_drvinfo *info)
+{
+	strlcpy(info->driver, "bcmgenet", sizeof(info->driver));
+	strlcpy(info->version, "v2.0", sizeof(info->version));
+	info->n_stats = BCMGENET_STATS_LEN;
+}
+
+static int bcmgenet_get_sset_count(struct net_device *dev, int string_set)
+{
+	switch (string_set) {
+	case ETH_SS_STATS:
+		return BCMGENET_STATS_LEN;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static void bcmgenet_get_strings(struct net_device *dev,
+				u32 stringset, u8 *data)
+{
+	int i;
+
+	switch (stringset) {
+	case ETH_SS_STATS:
+		for (i = 0; i < BCMGENET_STATS_LEN; i++) {
+			memcpy(data + i * ETH_GSTRING_LEN,
+				bcmgenet_gstrings_stats[i].stat_string,
+				ETH_GSTRING_LEN);
+		}
+		break;
+	}
+}
+
+static void bcmgenet_update_mib_counters(struct bcmgenet_priv *priv)
+{
+	int i, j = 0;
+
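+	/* j accumulates the register offset of each hardware counter
+	 * relative to UMAC_MIB_START; counters are read in the same
+	 * order as they are declared in bcmgenet_gstrings_stats.
+	 */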
+	for (i = 0; i < BCMGENET_STATS_LEN; i++) {
+		const struct bcmgenet_stats *s;
+		u32 val = 0;
+		char *p;
+		u8 offset = 0;
+
+		s = &bcmgenet_gstrings_stats[i];
+		switch (s->type) {
+		case BCMGENET_STAT_NETDEV:
+			continue;
+		case BCMGENET_STAT_MIB_RX:
+		case BCMGENET_STAT_MIB_TX:
+		case BCMGENET_STAT_RUNT:
+			if (s->type != BCMGENET_STAT_MIB_RX)
+				offset = BCMGENET_STAT_OFFSET;
+			val = bcmgenet_umac_readl(priv, UMAC_MIB_START +
+								j + offset);
+			break;
+		case BCMGENET_STAT_MISC:
+			val = bcmgenet_umac_readl(priv, s->reg_offset);
+			/* clear if overflowed */
+			if (val == ~0)
+				bcmgenet_umac_writel(priv, 0, s->reg_offset);
+			break;
+		}
+
+		j += s->stat_sizeof;
+		p = (char *)priv + s->stat_offset;
+		*(u32 *)p = val;
+	}
+}
+
+static void bcmgenet_get_ethtool_stats(struct net_device *dev,
+					struct ethtool_stats *stats,
+					u64 *data)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int i;
+
+	mutex_lock(&priv->mib_mutex);
+	if (netif_running(dev))
+		bcmgenet_update_mib_counters(priv);
+
+	for (i = 0; i < BCMGENET_STATS_LEN; i++) {
+		const struct bcmgenet_stats *s;
+		char *p;
+
+		s = &bcmgenet_gstrings_stats[i];
+		if (s->type == BCMGENET_STAT_NETDEV)
+			p = (char *)&dev->stats;
+		else
+			p = (char *)priv;
+		p += s->stat_offset;
+		data[i] = *(u32 *)p;
+	}
+	mutex_unlock(&priv->mib_mutex);
+}
+
+/* standard ethtool support functions. */
+static struct ethtool_ops bcmgenet_ethtool_ops = {
+	.get_strings		= bcmgenet_get_strings,
+	.get_sset_count		= bcmgenet_get_sset_count,
+	.get_ethtool_stats	= bcmgenet_get_ethtool_stats,
+	.get_settings		= bcmgenet_get_settings,
+	.set_settings		= bcmgenet_set_settings,
+	.get_drvinfo		= bcmgenet_get_drvinfo,
+	.get_link		= ethtool_op_get_link,
+	.get_msglevel		= bcmgenet_get_msglevel,
+	.set_msglevel		= bcmgenet_set_msglevel,
+};
+
+/* Power down the unimac, based on mode. */
+static void bcmgenet_power_down(struct bcmgenet_priv *priv,
+				enum bcmgenet_power_mode mode)
+{
+	u32 reg;
+
+	switch (mode) {
+	case GENET_POWER_CABLE_SENSE:
+		if (priv->phydev)
+			phy_detach(priv->phydev);
+		break;
+
+	case GENET_POWER_PASSIVE:
+		/* Power down LED */
+		bcmgenet_mii_reset(priv->dev);
+		if (priv->hw_params->flags & GENET_HAS_EXT) {
+			reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+			reg |= (EXT_PWR_DOWN_PHY |
+				EXT_PWR_DOWN_DLL | EXT_PWR_DOWN_BIAS);
+			bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+		}
+		break;
+	default:
+		break;
+	}
+
+}
+
+static void bcmgenet_power_up(struct bcmgenet_priv *priv,
+				enum bcmgenet_power_mode mode)
+{
+	u32 reg;
+
+	switch (mode) {
+	case GENET_POWER_CABLE_SENSE:
+		/* enable APD */
+		if (priv->hw_params->flags & GENET_HAS_EXT) {
+			reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+			reg |= EXT_PWR_DN_EN_LD;
+			bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+			bcmgenet_mii_reset(priv->dev);
+		}
+		break;
+
+	case GENET_POWER_PASSIVE:
+		if (priv->hw_params->flags & GENET_HAS_EXT) {
+			reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+			reg &= ~EXT_PWR_DOWN_DLL;
+			reg &= ~EXT_PWR_DOWN_PHY;
+			reg &= ~EXT_PWR_DOWN_BIAS;
+			/* enable APD */
+			reg |= EXT_PWR_DN_EN_LD;
+			bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+			bcmgenet_mii_reset(priv->dev);
+		}
+		break;
+	default:
+		break;
+	}
+}
+
+/* ioctl handles special commands that are not available through ethtool. */
+static int bcmgenet_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int val = 0;
+
+	if (!netif_running(dev))
+		return -EINVAL;
+
+	switch (cmd) {
+	case SIOCGMIIPHY:
+	case SIOCGMIIREG:
+	case SIOCSMIIREG:
+		if (!priv->phydev)
+			val = -ENODEV;
+		else
+			val = phy_mii_ioctl(priv->phydev, rq, cmd);
+		break;
+
+	default:
+		val = -EINVAL;
+		break;
+	}
+
+	return val;
+}
+
+static struct enet_cb *bcmgenet_get_txcb(struct bcmgenet_priv *priv,
+					 struct bcmgenet_tx_ring *ring)
+{
+	struct enet_cb *tx_cb_ptr;
+
+	tx_cb_ptr = ring->cbs;
+	tx_cb_ptr += ring->write_ptr - ring->cb_ptr;
+	tx_cb_ptr->bd_addr = priv->tx_bds + ring->write_ptr * DMA_DESC_SIZE;
+	/* Advancing local write pointer */
+	if (ring->write_ptr == ring->end_ptr)
+		ring->write_ptr = ring->cb_ptr;
+	else
+		ring->write_ptr++;
+
+	return tx_cb_ptr;
+}
+
+/* Simple helper to free a control block's resources */
+static void bcmgenet_free_cb(struct enet_cb *cb)
+{
+	dev_kfree_skb_any(cb->skb);
+	cb->skb = NULL;
+	dma_unmap_addr_set(cb, dma_addr, 0);
+}
+
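+/* The default descriptor ring (ring 16) is signalled through the
+ * TXDMA_BDONE/PDONE bits of INTRL2_0, while each priority ring uses its own
+ * bit in INTRL2_1.
+ */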
+static inline void bcmgenet_tx_ring16_int_disable(struct bcmgenet_priv *priv,
+						  struct bcmgenet_tx_ring *ring)
+{
+	bcmgenet_intrl2_0_writel(priv,
+			UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE,
+			INTRL2_CPU_MASK_SET);
+}
+
+static inline void bcmgenet_tx_ring16_int_enable(struct bcmgenet_priv *priv,
+						 struct bcmgenet_tx_ring *ring)
+{
+	bcmgenet_intrl2_0_writel(priv,
+			UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE,
+			INTRL2_CPU_MASK_CLEAR);
+}
+
+static inline void bcmgenet_tx_ring_int_enable(struct bcmgenet_priv *priv,
+						struct bcmgenet_tx_ring *ring)
+{
+	bcmgenet_intrl2_1_writel(priv,
+			(1 << ring->index), INTRL2_CPU_MASK_CLEAR);
+	priv->int1_mask &= ~(1 << ring->index);
+}
+
+static inline void bcmgenet_tx_ring_int_disable(struct bcmgenet_priv *priv,
+						struct bcmgenet_tx_ring *ring)
+{
+	bcmgenet_intrl2_1_writel(priv,
+			(1 << ring->index), INTRL2_CPU_MASK_SET);
+	priv->int1_mask |= (1 << ring->index);
+}
+
+/* Unlocked version of the reclaim routine */
+static void __bcmgenet_tx_reclaim(struct net_device *dev,
+				struct bcmgenet_tx_ring *ring)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int last_tx_cn, last_c_index, num_tx_bds;
+	struct enet_cb *tx_cb_ptr;
+	unsigned int c_index;
+
+	/* Compute how many buffers have been transmitted since the last call */
+	c_index = bcmgenet_tdma_ring_readl(priv, ring->index, TDMA_CONS_INDEX);
+
+	last_c_index = ring->c_index;
+	num_tx_bds = ring->size;
+
+	c_index &= (num_tx_bds - 1);
+
+	if (c_index >= last_c_index)
+		last_tx_cn = c_index - last_c_index;
+	else
+		last_tx_cn = num_tx_bds - last_c_index + c_index;
+
+	netif_dbg(priv, tx_done, dev,
+			"%s ring=%d index=%d last_tx_cn=%d last_index=%d\n",
+			__func__, ring->index,
+			c_index, last_tx_cn, last_c_index);
+
+	/* Reclaim transmitted buffers */
+	while (last_tx_cn-- > 0) {
+		tx_cb_ptr = ring->cbs + last_c_index;
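+		/* A control block carrying an skb is the head of a
+		 * transmitted frame; fragment control blocks only carry
+		 * a DMA mapping.
+		 */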
+		if (tx_cb_ptr->skb) {
+			dev->stats.tx_bytes += tx_cb_ptr->skb->len;
+			dma_unmap_single(&dev->dev,
+					dma_unmap_addr(tx_cb_ptr, dma_addr),
+					tx_cb_ptr->skb->len,
+					DMA_TO_DEVICE);
+			bcmgenet_free_cb(tx_cb_ptr);
+		} else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) {
+			dev->stats.tx_bytes +=
+				dma_unmap_len(tx_cb_ptr, dma_len);
+			dma_unmap_page(&dev->dev,
+					dma_unmap_addr(tx_cb_ptr, dma_addr),
+					dma_unmap_len(tx_cb_ptr, dma_len),
+					DMA_TO_DEVICE);
+			dma_unmap_addr_set(tx_cb_ptr, dma_addr, 0);
+		}
+		dev->stats.tx_packets++;
+		ring->free_bds += 1;
+
+		last_c_index++;
+		last_c_index &= (num_tx_bds - 1);
+	}
+
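+	/* Once enough descriptors are free again, the TX completion
+	 * interrupt is no longer needed and a stopped queue can be
+	 * restarted.
+	 */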
+	if (ring->free_bds > (MAX_SKB_FRAGS + 1))
+		ring->int_disable(priv, ring);
+
+	if (__netif_subqueue_stopped(dev, ring->queue))
+		netif_wake_subqueue(dev, ring->queue);
+
+	ring->c_index = c_index;
+}
+
+static void bcmgenet_tx_reclaim(struct net_device *dev,
+		struct bcmgenet_tx_ring *ring)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ring->lock, flags);
+	__bcmgenet_tx_reclaim(dev, ring);
+	spin_unlock_irqrestore(&ring->lock, flags);
+}
+
+static void bcmgenet_tx_reclaim_all(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int i;
+
+	if (netif_is_multiqueue(dev)) {
+		for (i = 0; i < priv->hw_params->tx_queues; i++)
+			bcmgenet_tx_reclaim(dev, &priv->tx_rings[i]);
+	}
+
+	bcmgenet_tx_reclaim(dev, &priv->tx_rings[DESC_INDEX]);
+}
+
+/* Transmits a single SKB (either the head of a fragmented SKB or an
+ * unfragmented SKB); the caller must hold the ring lock
+ */
+static int bcmgenet_xmit_single(struct net_device *dev,
+				struct sk_buff *skb,
+				u16 dma_desc_flags,
+				struct bcmgenet_tx_ring *ring)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct device *kdev = &priv->pdev->dev;
+	struct enet_cb *tx_cb_ptr;
+	unsigned int skb_len;
+	dma_addr_t mapping;
+	u32 length_status;
+	int ret;
+
+	tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
+
+	if (unlikely(!tx_cb_ptr))
+		BUG();
+
+	tx_cb_ptr->skb = skb;
+
+	skb_len = skb_headlen(skb) < ETH_ZLEN ? ETH_ZLEN : skb_headlen(skb);
+
+	mapping = dma_map_single(kdev, skb->data, skb_len, DMA_TO_DEVICE);
+	ret = dma_mapping_error(kdev, mapping);
+	if (ret) {
+		netif_err(priv, tx_err, dev, "Tx DMA map failed\n");
+		dev_kfree_skb(skb);
+		return ret;
+	}
+
+	dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
+	dma_unmap_len_set(tx_cb_ptr, dma_len, skb->len);
+	length_status = (skb_len << DMA_BUFLENGTH_SHIFT) | dma_desc_flags |
+			(priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT) |
+			DMA_TX_APPEND_CRC;
+
+	if (skb->ip_summed == CHECKSUM_PARTIAL)
+		length_status |= DMA_TX_DO_CSUM;
+
+	dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping, length_status);
+
+	/* Decrement total BD count and advance our write pointer */
+	ring->free_bds -= 1;
+	ring->prod_index += 1;
+	ring->prod_index &= DMA_P_INDEX_MASK;
+
+	return 0;
+}
+
+/* Transmit an SKB fragment */
+static int bcmgenet_xmit_frag(struct net_device *dev,
+				skb_frag_t *frag,
+				u16 dma_desc_flags,
+				struct bcmgenet_tx_ring *ring)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct device *kdev = &priv->pdev->dev;
+	struct enet_cb *tx_cb_ptr;
+	dma_addr_t mapping;
+	int ret;
+
+	tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
+
+	if (unlikely(!tx_cb_ptr))
+		BUG();
+	tx_cb_ptr->skb = NULL;
+
+	mapping = skb_frag_dma_map(kdev, frag, 0,
+		skb_frag_size(frag), DMA_TO_DEVICE);
+	ret = dma_mapping_error(kdev, mapping);
+	if (ret) {
+		netif_err(priv, tx_err, dev, "%s: Tx DMA map failed\n",
+				__func__);
+		return ret;
+	}
+
+	dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
+	dma_unmap_len_set(tx_cb_ptr, dma_len, frag->size);
+
+	dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping,
+			(frag->size << DMA_BUFLENGTH_SHIFT) | dma_desc_flags |
+			(priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT));
+
+	ring->free_bds -= 1;
+	ring->prod_index += 1;
+	ring->prod_index &= DMA_P_INDEX_MASK;
+
+	return 0;
+}
+
+/* Reallocate the SKB to put enough headroom in front of it and insert
+ * the transmit checksum offsets in the descriptors. Returns the skb to
+ * use (which may have been reallocated) or NULL on allocation failure.
+ */
+static struct sk_buff *bcmgenet_put_tx_csum(struct net_device *dev,
+					    struct sk_buff *skb)
+{
+	struct status_64 *status = NULL;
+	struct sk_buff *new_skb;
+	u16 offset;
+	u8 ip_proto;
+	u16 ip_ver;
+	u32 tx_csum_info;
+
+	if (unlikely(skb_headroom(skb) < sizeof(*status))) {
+		/* If 64 byte status block enabled, must make sure skb has
+		 * enough headroom for us to insert 64B status block.
+		 */
+		new_skb = skb_realloc_headroom(skb, sizeof(*status));
+		dev_kfree_skb(skb);
+		if (!new_skb) {
+			dev->stats.tx_errors++;
+			dev->stats.tx_dropped++;
+			return NULL;
+		}
+		skb = new_skb;
+	}
+
+	skb_push(skb, sizeof(*status));
+	status = (struct status_64 *)skb->data;
+
+	if (skb->ip_summed  == CHECKSUM_PARTIAL) {
+		ip_ver = ntohs(skb->protocol);
+		switch (ip_ver) {
+		case ETH_P_IP:
+			ip_proto = ip_hdr(skb)->protocol;
+			break;
+		case ETH_P_IPV6:
+			ip_proto = ipv6_hdr(skb)->nexthdr;
+			break;
+		default:
+			return skb;
+		}
+
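+		/* skb_checksum_start_offset() now includes the 64 byte
+		 * status block pushed above, so subtract it to get an
+		 * offset relative to the start of the Ethernet frame.
+		 */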
+		offset = skb_checksum_start_offset(skb) - sizeof(*status);
+		tx_csum_info = (offset << STATUS_TX_CSUM_START_SHIFT) |
+				(offset + skb->csum_offset);
+
+		/* Set the length-valid bit for TCP and UDP, and additionally
+		 * set the special UDP flag for IPv4; otherwise leave the
+		 * checksum info cleared.
+		 */
+		if (ip_proto == IPPROTO_TCP || ip_proto == IPPROTO_UDP) {
+			tx_csum_info |= STATUS_TX_CSUM_LV;
+			if (ip_proto == IPPROTO_UDP && ip_ver == ETH_P_IP)
+				tx_csum_info |= STATUS_TX_CSUM_PROTO_UDP;
+		} else
+			tx_csum_info = 0;
+
+		status->tx_csum_info = tx_csum_info;
+	}
+
+	return skb;
+}
+
+static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct bcmgenet_tx_ring *ring = NULL;
+	unsigned long flags = 0;
+	int nr_frags, index;
+	u16 dma_desc_flags;
+	int ret;
+	int i;
+
+	index = skb_get_queue_mapping(skb);
+	/* Mapping strategy:
+	 * queue_mapping = 0, unclassified, packet transmitted through ring 16
+	 * queue_mapping = 1, goes to ring 0 (highest priority queue)
+	 * queue_mapping = 2, goes to ring 1.
+	 * queue_mapping = 3, goes to ring 2.
+	 * queue_mapping = 4, goes to ring 3.
+	 */
+	if (index == 0)
+		index = DESC_INDEX;
+	else
+		index -= 1;
+
+	if ((index != DESC_INDEX) && (index > priv->hw_params->tx_queues - 1)) {
+		netdev_err(dev, "%s: queue_mapping %d is invalid\n",
+				__func__, skb_get_queue_mapping(skb));
+		dev->stats.tx_errors++;
+		dev->stats.tx_dropped++;
+		ret = NETDEV_TX_OK;
+		goto out;
+	}
+	nr_frags = skb_shinfo(skb)->nr_frags;
+	ring = &priv->tx_rings[index];
+
+	spin_lock_irqsave(&ring->lock, flags);
+	if (ring->free_bds <= nr_frags + 1) {
+		netif_stop_subqueue(dev, ring->queue);
+		netdev_err(dev, "%s: tx ring %d full when queue %d awake\n",
+				__func__, index, ring->queue);
+		ret = NETDEV_TX_BUSY;
+		goto out;
+	}
+
+	/* reclaim xmited skb every 8 packets. */
+	/*if (ring->free_bds < ring->size - 8)*/
+		/*__bcmgenet_tx_reclaim(dev, ring);*/
+
+	/* set the SKB transmit checksum */
+	if (priv->desc_64b_en) {
+		/* bcmgenet_put_tx_csum() may reallocate the skb, so use
+		 * its return value from here on
+		 */
+		skb = bcmgenet_put_tx_csum(dev, skb);
+		if (!skb) {
+			ret = NETDEV_TX_OK;
+			goto out;
+		}
+	}
+
+	dma_desc_flags = DMA_SOP;
+	if (nr_frags == 0)
+		dma_desc_flags |= DMA_EOP;
+
+	/* Transmit single SKB or head of fragment list */
+	ret = bcmgenet_xmit_single(dev, skb, dma_desc_flags, ring);
+	if (ret) {
+		ret = NETDEV_TX_OK;
+		goto out;
+	}
+
+	/* xmit fragment */
+	for (i = 0; i < nr_frags; i++) {
+		ret = bcmgenet_xmit_frag(dev,
+				&skb_shinfo(skb)->frags[i],
+				(i == nr_frags - 1) ? DMA_EOP : 0, ring);
+		if (ret) {
+			ret = NETDEV_TX_OK;
+			goto out;
+		}
+	}
+
+	/* we kept a software copy of how much we should advance the TDMA
+	 * producer index, now write it down to the hardware
+	 */
+	bcmgenet_tdma_ring_writel(priv, ring->index,
+			ring->prod_index, TDMA_PROD_INDEX);
+
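+	/* If the ring can no longer hold a worst-case fragmented skb,
+	 * stop the queue and enable the TX completion interrupt so that
+	 * reclaim can wake it up again.
+	 */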
+	if (ring->free_bds <= (MAX_SKB_FRAGS + 1)) {
+		netif_stop_subqueue(dev, ring->queue);
+		ring->int_enable(priv, ring);
+	}
+
+out:
+	spin_unlock_irqrestore(&ring->lock, flags);
+
+	return ret;
+}
+
+static int bcmgenet_rx_refill(struct bcmgenet_priv *priv,
+				struct enet_cb *cb)
+{
+	struct device *kdev = &priv->pdev->dev;
+	struct sk_buff *skb;
+	dma_addr_t mapping;
+	int ret;
+
+	skb = netdev_alloc_skb(priv->dev,
+				priv->rx_buf_len + SKB_ALIGNMENT);
+	if (!skb)
+		return -ENOMEM;
+
+	/* a caller did not release this control block */
+	WARN_ON(cb->skb != NULL);
+	cb->skb = skb;
+	mapping = dma_map_single(kdev, skb->data,
+			priv->rx_buf_len, DMA_FROM_DEVICE);
+	ret = dma_mapping_error(kdev, mapping);
+	if (ret) {
+		bcmgenet_free_cb(cb);
+		netif_err(priv, rx_err, priv->dev,
+				"%s DMA map failed\n", __func__);
+		return ret;
+	}
+
+	dma_unmap_addr_set(cb, dma_addr, mapping);
+	/* assign packet, prepare descriptor, and advance pointer */
+
+	dmadesc_set_addr(priv, priv->rx_bd_assign_ptr, mapping);
+
+	/* turn on the newly assigned BD for DMA to use */
+	priv->rx_bd_assign_index++;
+	priv->rx_bd_assign_index &= (priv->num_rx_bds - 1);
+
+	priv->rx_bd_assign_ptr = priv->rx_bds +
+		(priv->rx_bd_assign_index * DMA_DESC_SIZE);
+
+	return 0;
+}
+
+/* bcmgenet_desc_rx - descriptor based rx process.
+ * this could be called from bottom half, or from NAPI polling method.
+ */
+static unsigned int bcmgenet_desc_rx(struct bcmgenet_priv *priv,
+				     unsigned int budget)
+{
+	struct net_device *dev = priv->dev;
+	struct enet_cb *cb;
+	struct sk_buff *skb;
+	u32 dma_length_status;
+	unsigned long dma_flag;
+	int len, err;
+	unsigned int rxpktprocessed = 0, rxpkttoprocess;
+	unsigned int p_index;
+	unsigned int chksum_ok = 0;
+
+	p_index = bcmgenet_rdma_ring_readl(priv,
+			DESC_INDEX, RDMA_PROD_INDEX);
+	p_index &= DMA_P_INDEX_MASK;
+
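+	/* The producer index is a free-running counter masked with
+	 * DMA_P_INDEX_MASK; account for it wrapping around relative to
+	 * our consumer index.
+	 */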
+	if (p_index < priv->rx_c_index)
+		rxpkttoprocess = (DMA_C_INDEX_MASK + 1) -
+			priv->rx_c_index + p_index;
+	else
+		rxpkttoprocess = p_index - priv->rx_c_index;
+
+	netif_dbg(priv, rx_status, dev,
+		"RDMA: rxpkttoprocess=%d\n", rxpkttoprocess);
+
+	while ((rxpktprocessed < rxpkttoprocess) &&
+			(rxpktprocessed < budget)) {
+
+		/* Unmap the packet contents so that we can use the RSV from
+		 * the 64 byte status block when it is enabled, and save a
+		 * 32-bit register read
+		 */
+		cb = &priv->rx_cbs[priv->rx_read_ptr];
+		skb = cb->skb;
+		dma_unmap_single(&dev->dev, dma_unmap_addr(cb, dma_addr),
+				priv->rx_buf_len, DMA_FROM_DEVICE);
+
+		if (!priv->desc_64b_en) {
+			dma_length_status = dmadesc_get_length_status(priv,
+							priv->rx_bds +
+							(priv->rx_read_ptr *
+							 DMA_DESC_SIZE));
+		} else {
+			struct status_64 *status;
+			status = (struct status_64 *)skb->data;
+			dma_length_status = status->length_status;
+		}
+
+		/* DMA flags and length are still valid no matter how
+		 * we got the Receive Status Vector (64B RSB or register)
+		 */
+		dma_flag = dma_length_status & 0xffff;
+		len = dma_length_status >> DMA_BUFLENGTH_SHIFT;
+
+		netif_dbg(priv, rx_status, dev,
+			"%s: p_ind=%d c_ind=%d read_ptr=%d len_stat=0x%08x\n",
+			__func__, p_index, priv->rx_c_index, priv->rx_read_ptr,
+			dma_length_status);
+
+		rxpktprocessed++;
+
+		priv->rx_read_ptr++;
+		priv->rx_read_ptr &= (priv->num_rx_bds - 1);
+
+		/* out of memory, just drop packets at the hardware level */
+		if (unlikely(!skb)) {
+			dev->stats.rx_dropped++;
+			dev->stats.rx_errors++;
+			goto refill;
+		}
+
+		if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) {
+			netif_err(priv, rx_status, dev,
+					"Dropping fragmented packet!\n");
+			dev->stats.rx_dropped++;
+			dev->stats.rx_errors++;
+			dev_kfree_skb_any(cb->skb);
+			cb->skb = NULL;
+			goto refill;
+		}
+		/* report errors */
+		if (unlikely(dma_flag & (DMA_RX_CRC_ERROR |
+						DMA_RX_OV |
+						DMA_RX_NO |
+						DMA_RX_LG |
+						DMA_RX_RXER))) {
+			netif_err(priv, rx_status, dev, "dma_flag=0x%x\n",
+						(unsigned int)dma_flag);
+			if (dma_flag & DMA_RX_CRC_ERROR)
+				dev->stats.rx_crc_errors++;
+			if (dma_flag & DMA_RX_OV)
+				dev->stats.rx_over_errors++;
+			if (dma_flag & DMA_RX_NO)
+				dev->stats.rx_frame_errors++;
+			if (dma_flag & DMA_RX_LG)
+				dev->stats.rx_length_errors++;
+			dev->stats.rx_dropped++;
+			dev->stats.rx_errors++;
+
+			/* discard the packet and advance consumer index.*/
+			dev_kfree_skb_any(cb->skb);
+			cb->skb = NULL;
+			goto refill;
+		} /* error packet */
+
+		chksum_ok = (dma_flag & priv->dma_rx_chk_bit) &&
+				priv->desc_rxchk_en;
+
+		skb_put(skb, len);
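+		/* When the 64 byte status block is enabled it is prepended
+		 * to the packet data, so strip it before handing the skb up.
+		 */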
+		if (priv->desc_64b_en) {
+			skb_pull(skb, 64);
+			len -= 64;
+		}
+
+		if (likely(chksum_ok))
+			skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+		/* remove the 2 bytes the hardware added for IP alignment */
+		skb_pull(skb, 2);
+		len -= 2;
+
+		if (priv->crc_fwd_en) {
+			skb_trim(skb, len - ETH_FCS_LEN);
+			len -= ETH_FCS_LEN;
+		}
+
+		/*Finish setting up the received SKB and send it to the kernel*/
+		skb->protocol = eth_type_trans(skb, priv->dev);
+		dev->stats.rx_packets++;
+		dev->stats.rx_bytes += len;
+		if (dma_flag & DMA_RX_MULT)
+			dev->stats.multicast++;
+
+		/* Notify kernel */
+		napi_gro_receive(&priv->napi, skb);
+		cb->skb = NULL;
+		netif_dbg(priv, rx_status, dev, "pushed up to kernel\n");
+
+		/* refill RX path on the current control block */
+refill:
+		err = bcmgenet_rx_refill(priv, cb);
+		if (err)
+			netif_err(priv, rx_err, dev, "Rx refill failed\n");
+	}
+
+	return rxpktprocessed;
+}
+
+/* Assign skb to RX DMA descriptor. */
+static int bcmgenet_alloc_rx_buffers(struct bcmgenet_priv *priv)
+{
+	struct enet_cb *cb;
+	int ret = 0;
+	int i;
+	u32 reg;
+
+	netif_dbg(priv, hw, priv->dev, "%s:\n", __func__);
+
+	/* This function may be called from irq bottom-half. */
+	spin_lock_bh(&priv->bh_lock);
+
+	/* loop here for each buffer needing assign */
+	for (i = 0; i < priv->num_rx_bds; i++) {
+		cb = &priv->rx_cbs[priv->rx_bd_assign_index];
+		if (cb->skb)
+			continue;
+
+		/* set the DMA descriptor length once and for all;
+		 * it would only change if we supported dynamically sizing
+		 * priv->rx_buf_len, which we do not
+		 */
+		dmadesc_set_length_status(priv, priv->rx_bd_assign_ptr,
+				priv->rx_buf_len << DMA_BUFLENGTH_SHIFT);
+
+		ret = bcmgenet_rx_refill(priv, cb);
+		if (ret)
+			break;
+
+	}
+
+	/* Enable RX DMA in case it was disabled after running out of RX BDs */
+	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+	reg |= DMA_EN;
+	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+
+	spin_unlock_bh(&priv->bh_lock);
+
+	return ret;
+}
+
+static void bcmgenet_free_rx_buffers(struct bcmgenet_priv *priv)
+{
+	struct enet_cb *cb;
+	int i;
+
+	for (i = 0; i < priv->num_rx_bds; i++) {
+		cb = &priv->rx_cbs[i];
+
+		if (dma_unmap_addr(cb, dma_addr)) {
+			dma_unmap_single(&priv->dev->dev,
+					dma_unmap_addr(cb, dma_addr),
+					priv->rx_buf_len, DMA_FROM_DEVICE);
+			dma_unmap_addr_set(cb, dma_addr, 0);
+		}
+
+		if (cb->skb)
+			bcmgenet_free_cb(cb);
+	}
+}
+
+static int reset_umac(struct bcmgenet_priv *priv)
+{
+	struct device *kdev = &priv->pdev->dev;
+	unsigned int timeout = 0;
+	u32 reg;
+
+	/* 7358a0/7552a0: bad default in RBUF_FLUSH_CTRL.umac_sw_rst */
+	bcmgenet_rbuf_ctrl_set(priv, 0);
+	udelay(10);
+
+	/* disable MAC while updating its registers */
+	bcmgenet_umac_writel(priv, 0, UMAC_CMD);
+
+	/* issue soft reset, wait for it to complete */
+	bcmgenet_umac_writel(priv, CMD_SW_RESET, UMAC_CMD);
+	while (timeout++ < 1000) {
+		reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+		if (!(reg & CMD_SW_RESET))
+			break;
+		udelay(1);
+	}
+
+	if (reg & CMD_SW_RESET) {
+		dev_err(kdev,
+			"timeout waiting for MAC to come out of reset\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+/* init_umac: Initializes the uniMac controller */
+static int init_umac(struct bcmgenet_priv *priv)
+{
+	struct device *kdev = &priv->pdev->dev;
+	int ret;
+	u32 reg, cpu_mask_clear;
+
+	dev_dbg(&priv->pdev->dev, "bcmgenet: init_umac\n");
+
+	ret = reset_umac(priv);
+	if (ret)
+		return ret;
+
+	bcmgenet_umac_writel(priv, 0, UMAC_CMD);
+	/* clear tx/rx counter */
+	bcmgenet_umac_writel(priv,
+		MIB_RESET_RX | MIB_RESET_TX | MIB_RESET_RUNT, UMAC_MIB_CTRL);
+	bcmgenet_umac_writel(priv, 0, UMAC_MIB_CTRL);
+
+	bcmgenet_umac_writel(priv, ENET_MAX_MTU_SIZE, UMAC_MAX_FRAME_LEN);
+
+	/* init rx registers, enable ip header optimization */
+	reg = bcmgenet_rbuf_readl(priv, RBUF_CTRL);
+	reg |= RBUF_ALIGN_2B;
+	bcmgenet_rbuf_writel(priv, reg, RBUF_CTRL);
+
+	if (!GENET_IS_V1(priv) && !GENET_IS_V2(priv))
+		bcmgenet_rbuf_writel(priv, 1, RBUF_TBUF_SIZE_CTRL);
+
+	/* Mask all interrupts.*/
+	bcmgenet_intrl2_0_writel(priv, 0xFFFFFFFF, INTRL2_CPU_MASK_SET);
+	bcmgenet_intrl2_0_writel(priv, 0xFFFFFFFF, INTRL2_CPU_CLEAR);
+	bcmgenet_intrl2_0_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
+
+	cpu_mask_clear = UMAC_IRQ_RXDMA_BDONE;
+
+	dev_dbg(kdev, "%s:Enabling RXDMA_BDONE interrupt\n", __func__);
+
+	/* Monitor cable plug/unplug events for the internal PHY */
+	if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL)
+		cpu_mask_clear |= (UMAC_IRQ_LINK_DOWN | UMAC_IRQ_LINK_UP);
+	else if (priv->ext_phy)
+		cpu_mask_clear |= (UMAC_IRQ_LINK_DOWN | UMAC_IRQ_LINK_UP);
+	else if (priv->phy_interface == PHY_INTERFACE_MODE_MOCA) {
+		reg = bcmgenet_bp_mc_get(priv);
+		reg |= BIT(priv->hw_params->bp_in_en_shift);
+
+		/* bp_mask: back pressure mask */
+		if (netif_is_multiqueue(priv->dev))
+			reg |= priv->hw_params->bp_in_mask;
+		else
+			reg &= ~priv->hw_params->bp_in_mask;
+		bcmgenet_bp_mc_set(priv, reg);
+	}
+
+	/* Enable MDIO interrupts on GENET v3+ */
+	if (priv->hw_params->flags & GENET_HAS_MDIO_INTR)
+		cpu_mask_clear |= UMAC_IRQ_MDIO_DONE | UMAC_IRQ_MDIO_ERROR;
+
+	bcmgenet_intrl2_0_writel(priv, cpu_mask_clear,
+		INTRL2_CPU_MASK_CLEAR);
+
+	/* the RX/TX engines themselves are only enabled in bcmgenet_open() */
+	dev_dbg(kdev, "done init umac\n");
+
+	return 0;
+}
+
+/* Initialize all house-keeping variables for a TX ring, along
+ * with corresponding hardware registers
+ */
+static void bcmgenet_init_tx_ring(struct bcmgenet_priv *priv,
+				  unsigned int index, unsigned int size,
+				  unsigned int write_ptr, unsigned int end_ptr)
+{
+	struct bcmgenet_tx_ring *ring = &priv->tx_rings[index];
+	u32 words_per_bd = WORDS_PER_BD(priv);
+	u32 flow_period_val = 0;
+	unsigned int first_bd;
+
+	spin_lock_init(&ring->lock);
+	ring->index = index;
+	if (index == DESC_INDEX) {
+		ring->queue = 0;
+		ring->int_enable = bcmgenet_tx_ring16_int_enable;
+		ring->int_disable = bcmgenet_tx_ring16_int_disable;
+	} else {
+		ring->queue = index + 1;
+		ring->int_enable = bcmgenet_tx_ring_int_enable;
+		ring->int_disable = bcmgenet_tx_ring_int_disable;
+	}
+	ring->cbs = priv->tx_cbs + write_ptr;
+	ring->size = size;
+	ring->c_index = 0;
+	ring->free_bds = size;
+	ring->write_ptr = write_ptr;
+	ring->cb_ptr = write_ptr;
+	ring->end_ptr = end_ptr - 1;
+	ring->prod_index = 0;
+
+	/* Set flow period for ring != 16 */
+	if (index != DESC_INDEX)
+		flow_period_val = ENET_MAX_MTU_SIZE << 16;
+
+	bcmgenet_tdma_ring_writel(priv, index, 0, TDMA_PROD_INDEX);
+	bcmgenet_tdma_ring_writel(priv, index, 0, TDMA_CONS_INDEX);
+	bcmgenet_tdma_ring_writel(priv, index, 1, DMA_MBUF_DONE_THRESH);
+	/* Disable rate control for now */
+	bcmgenet_tdma_ring_writel(priv, index, flow_period_val,
+			TDMA_FLOW_PERIOD);
+	/* Unclassified traffic goes to ring 16 */
+	bcmgenet_tdma_ring_writel(priv, index,
+			((size << DMA_RING_SIZE_SHIFT) | RX_BUF_LENGTH),
+			DMA_RING_BUF_SIZE);
+
+	first_bd = write_ptr;
+
+	/* Set start and end address, read and write pointers */
+	bcmgenet_tdma_ring_writel(priv, index, first_bd * words_per_bd,
+			DMA_START_ADDR);
+	bcmgenet_tdma_ring_writel(priv, index, first_bd * words_per_bd,
+			TDMA_READ_PTR);
+	bcmgenet_tdma_ring_writel(priv, index, first_bd,
+			TDMA_WRITE_PTR);
+	bcmgenet_tdma_ring_writel(priv, index, end_ptr * words_per_bd - 1,
+			DMA_END_ADDR);
+}
+
+/* Initialize a RDMA ring */
+static int bcmgenet_init_rx_ring(struct bcmgenet_priv *priv,
+				  unsigned int index, unsigned int size)
+{
+	u32 words_per_bd = WORDS_PER_BD(priv);
+	int ret;
+
+	priv->num_rx_bds = TOTAL_DESC;
+	priv->rx_bds = priv->base + priv->hw_params->rdma_offset;
+	priv->rx_bd_assign_ptr = priv->rx_bds;
+	priv->rx_bd_assign_index = 0;
+	priv->rx_c_index = 0;
+	priv->rx_read_ptr = 0;
+	priv->rx_cbs = kzalloc(priv->num_rx_bds * sizeof(struct enet_cb),
+				GFP_KERNEL);
+	if (!priv->rx_cbs)
+		return -ENOMEM;
+
+	ret = bcmgenet_alloc_rx_buffers(priv);
+	if (ret) {
+		kfree(priv->rx_cbs);
+		return ret;
+	}
+
+	bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_WRITE_PTR);
+	bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_PROD_INDEX);
+	bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_CONS_INDEX);
+	bcmgenet_rdma_ring_writel(priv, index,
+		((size << DMA_RING_SIZE_SHIFT) | RX_BUF_LENGTH),
+		DMA_RING_BUF_SIZE);
+	bcmgenet_rdma_ring_writel(priv, index, 0, DMA_START_ADDR);
+	bcmgenet_rdma_ring_writel(priv, index,
+		words_per_bd * size - 1, DMA_END_ADDR);
+	bcmgenet_rdma_ring_writel(priv, index,
+			(DMA_FC_THRESH_LO << DMA_XOFF_THRESHOLD_SHIFT) |
+			DMA_FC_THRESH_HI, RDMA_XON_XOFF_THRESH);
+	bcmgenet_rdma_ring_writel(priv, index, 0, RDMA_READ_PTR);
+
+	return ret;
+}
+
+/* Initialize the multi transmit queues, only available on GENET v2 and later.
+ * The queues are partitioned as follows:
+ *
+ * queues 0 - 3 are priority based, each one has 32 descriptors,
+ * with queue 0 being the highest priority queue.
+ *
+ * queue 16 is the default tx queue with GENET_DEFAULT_BD_CNT
+ * descriptors: 256 - (number of tx queues * BDs per queue) = 128
+ * descriptors.
+ *
+ * The transmit control block pool is then partitioned as follows:
+ * - tx_rings[0].cbs points to tx_cbs[0..31]
+ * - tx_rings[1].cbs points to tx_cbs[32..63]
+ * - tx_rings[2].cbs points to tx_cbs[64..95]
+ * - tx_rings[3].cbs points to tx_cbs[96..127]
+ * - tx_cbs[128..255] are for queue 16
+ */
+static void bcmgenet_init_multiq(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	unsigned int i, dma_enable;
+	u32 reg, dma_ctrl, ring_cfg = 0, dma_priority = 0;
+
+	if (!netif_is_multiqueue(dev)) {
+		netdev_warn(dev, "called with non multi queue aware HW\n");
+		return;
+	}
+
+	dma_ctrl = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	dma_enable = dma_ctrl & DMA_EN;
+	dma_ctrl &= ~DMA_EN;
+	bcmgenet_tdma_writel(priv, dma_ctrl, DMA_CTRL);
+
+	/* Enable strict priority arbiter mode */
+	bcmgenet_tdma_writel(priv, DMA_ARBITER_SP, DMA_ARB_CTRL);
+
+	for (i = 0; i < priv->hw_params->tx_queues; i++) {
+		/* rings 0 to 3 each take bds_cnt control blocks from
+		 * the head of the tx_cbs pool; ring 16 uses the rest
+		 */
+		bcmgenet_init_tx_ring(priv, i, priv->hw_params->bds_cnt,
+					i * priv->hw_params->bds_cnt,
+					(i + 1) * priv->hw_params->bds_cnt);
+
+		/* Configure ring as descriptor ring and set up its priority */
+		ring_cfg |= (1 << i);
+		dma_priority |= ((GENET_Q0_PRIORITY + i) <<
+				(GENET_MAX_MQ_CNT + 1) * i);
+		dma_ctrl |= (1 << (i + DMA_RING_BUF_EN_SHIFT));
+	}
+
+	/* Enable rings */
+	reg = bcmgenet_tdma_readl(priv, DMA_RING_CFG);
+	reg |= ring_cfg;
+	bcmgenet_tdma_writel(priv, reg, DMA_RING_CFG);
+
+	/* Use the configured ring priorities and set ring 16's priority */
+	reg = bcmgenet_tdma_readl(priv, DMA_RING_PRIORITY);
+	reg |= ((GENET_Q0_PRIORITY + priv->hw_params->tx_queues) << 20);
+	reg |= dma_priority;
+	bcmgenet_tdma_writel(priv, reg, DMA_PRIORITY);
+
+	/* Configure ring as descriptor ring and re-enable DMA if enabled */
+	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	reg |= dma_ctrl;
+	if (dma_enable)
+		reg |= DMA_EN;
+	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+}
+
+static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
+{
+	int i;
+
+	/* disable DMA */
+	bcmgenet_rdma_writel(priv, 0, DMA_CTRL);
+	bcmgenet_tdma_writel(priv, 0, DMA_CTRL);
+
+	for (i = 0; i < priv->num_tx_bds; i++) {
+		if (priv->tx_cbs[i].skb != NULL) {
+			dev_kfree_skb(priv->tx_cbs[i].skb);
+			priv->tx_cbs[i].skb = NULL;
+		}
+	}
+	bcmgenet_free_rx_buffers(priv);
+	kfree(priv->rx_cbs);
+	kfree(priv->tx_cbs);
+}
+
+/* init_edma: Initialize DMA control register */
+static int bcmgenet_init_dma(struct bcmgenet_priv *priv)
+{
+	int ret;
+
+	netif_dbg(priv, hw, priv->dev, "bcmgenet: init_edma\n");
+
+	/* by default, enable ring 16 (descriptor based) */
+	ret = bcmgenet_init_rx_ring(priv, DESC_INDEX, TOTAL_DESC);
+	if (ret) {
+		netdev_err(priv->dev, "failed to initialize RX ring\n");
+		return ret;
+	}
+
+	/* init rDma */
+	bcmgenet_rdma_writel(priv, DMA_MAX_BURST_LENGTH, DMA_SCB_BURST_SIZE);
+
+	/* Init tDma */
+	bcmgenet_tdma_writel(priv, DMA_MAX_BURST_LENGTH, DMA_SCB_BURST_SIZE);
+
+	/* Initialize common TX ring structures */
+	priv->tx_bds = priv->base + priv->hw_params->tdma_offset;
+	priv->num_tx_bds = TOTAL_DESC;
+	priv->tx_cbs = kzalloc(priv->num_tx_bds * sizeof(struct enet_cb),
+				GFP_KERNEL);
+	if (!priv->tx_cbs) {
+		bcmgenet_fini_dma(priv);
+		return -ENOMEM;
+	}
+
+	/* initialize multi xmit queue */
+	bcmgenet_init_multiq(priv->dev);
+
+	/* initialize special ring 16 */
+	bcmgenet_init_tx_ring(priv, DESC_INDEX, GENET_DEFAULT_BD_CNT,
+			priv->hw_params->tx_queues * priv->hw_params->bds_cnt,
+			TOTAL_DESC);
+
+	return 0;
+}
+
+/* NAPI polling method*/
+static int bcmgenet_poll(struct napi_struct *napi, int budget)
+{
+	struct bcmgenet_priv *priv = container_of(napi,
+			struct bcmgenet_priv, napi);
+	unsigned int work_done;
+
+	work_done = bcmgenet_desc_rx(priv, budget);
+
+	/* tx reclaim */
+	bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]);
+	/* Advance our consumer index */
+	priv->rx_c_index += work_done;
+	priv->rx_c_index &= DMA_C_INDEX_MASK;
+	bcmgenet_rdma_ring_writel(priv, DESC_INDEX,
+				priv->rx_c_index, RDMA_CONS_INDEX);
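+	/* If we consumed less than our budget, all pending work is done:
+	 * re-enable the RXDMA_BDONE interrupt that was masked in the ISR.
+	 */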
+	if (work_done < budget) {
+		napi_complete(napi);
+		bcmgenet_intrl2_0_writel(priv,
+			UMAC_IRQ_RXDMA_BDONE, INTRL2_CPU_MASK_CLEAR);
+	}
+
+	return work_done;
+}
+
+/* Interrupt bottom half */
+static void bcmgenet_irq_task(struct work_struct *work)
+{
+	struct bcmgenet_priv *priv = container_of(
+			work, struct bcmgenet_priv, bcmgenet_irq_work);
+	struct net_device *dev;
+	u32 reg;
+
+	dev = priv->dev;
+
+	netif_dbg(priv, intr, dev, "%s\n", __func__);
+	/* Cable plugged/unplugged event */
+	if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL) {
+		if (priv->irq0_stat & UMAC_IRQ_PHY_DET_R) {
+			priv->irq0_stat &= ~UMAC_IRQ_PHY_DET_R;
+			netif_crit(priv, link, dev,
+				"cable plugged in, powering up\n");
+			bcmgenet_power_up(priv, GENET_POWER_CABLE_SENSE);
+		} else if (priv->irq0_stat & UMAC_IRQ_PHY_DET_F) {
+			priv->irq0_stat &= ~UMAC_IRQ_PHY_DET_F;
+			netif_crit(priv, link, dev,
+				"cable unplugged, powering down\n");
+			bcmgenet_power_down(priv, GENET_POWER_CABLE_SENSE);
+		}
+	}
+	if (priv->irq0_stat & UMAC_IRQ_MPD_R) {
+		priv->irq0_stat &= ~UMAC_IRQ_MPD_R;
+		netif_crit(priv, wol, dev,
+			"magic packet detected, waking up\n");
+		/* disable mpd interrupt */
+		bcmgenet_intrl2_0_writel(priv,
+			UMAC_IRQ_MPD_R, INTRL2_CPU_MASK_SET);
+		/* disable CRC forward.*/
+		reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+		reg &= ~CMD_CRC_FWD;
+		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+		priv->crc_fwd_en = 0;
+		bcmgenet_power_up(priv, GENET_POWER_WOL_MAGIC);
+
+	} else if (priv->irq0_stat & (UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM)) {
+		priv->irq0_stat &= ~(UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM);
+		netif_crit(priv, wol, dev,
+			"ACPI pattern matched, waking up\n");
+		/* disable HFB match interrupts */
+		bcmgenet_intrl2_0_writel(priv,
+			UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM, INTRL2_CPU_MASK_SET);
+		bcmgenet_power_up(priv, GENET_POWER_WOL_ACPI);
+	}
+
+	/* Link UP/DOWN event */
+	if ((priv->hw_params->flags & GENET_HAS_MDIO_INTR) &&
+		(priv->irq0_stat & (UMAC_IRQ_LINK_UP|UMAC_IRQ_LINK_DOWN))) {
+		if (priv->phydev)
+			phy_mac_interrupt(priv->phydev,
+				(priv->irq0_stat & UMAC_IRQ_LINK_UP));
+		priv->irq0_stat &= ~(UMAC_IRQ_LINK_UP|UMAC_IRQ_LINK_DOWN);
+	}
+}
+
+/* bcmgenet_isr1: interrupt handler for ring buffer. */
+static irqreturn_t bcmgenet_isr1(int irq, void *dev_id)
+{
+	struct bcmgenet_priv *priv = dev_id;
+	unsigned int index;
+
+	/* Save irq status for bottom-half processing. */
+	priv->irq1_stat =
+		bcmgenet_intrl2_1_readl(priv, INTRL2_CPU_STAT) &
+		~priv->int1_mask;
+	/* clear interrupts */
+	bcmgenet_intrl2_1_writel(priv, priv->irq1_stat, INTRL2_CPU_CLEAR);
+
+	netif_dbg(priv, intr, priv->dev,
+		"%s: IRQ=0x%x\n", __func__, priv->irq1_stat);
+	/* Check the MBDONE interrupts:
+	 * packets are done, reclaim the descriptors
+	 */
+	if (priv->irq1_stat & 0x0000ffff) {
+		for (index = 0; index < 16; index++) {
+			if (priv->irq1_stat & (1 << index))
+				bcmgenet_tx_reclaim(priv->dev,
+						&priv->tx_rings[index]);
+		}
+	}
+	return IRQ_HANDLED;
+}
+
+/* bcmgenet_isr0: Handle various interrupts. */
+static irqreturn_t bcmgenet_isr0(int irq, void *dev_id)
+{
+	struct bcmgenet_priv *priv = dev_id;
+
+	/* Save irq status for bottom-half processing. */
+	priv->irq0_stat =
+		bcmgenet_intrl2_0_readl(priv, INTRL2_CPU_STAT) &
+		~bcmgenet_intrl2_0_readl(priv, INTRL2_CPU_MASK_STATUS);
+	/* clear interrupts */
+	bcmgenet_intrl2_0_writel(priv, priv->irq0_stat, INTRL2_CPU_CLEAR);
+
+	netif_dbg(priv, intr, priv->dev,
+		"IRQ=0x%x\n", priv->irq0_stat);
+
+	if (priv->irq0_stat & (UMAC_IRQ_RXDMA_BDONE | UMAC_IRQ_RXDMA_PDONE)) {
+		/* We use NAPI (software interrupt throttling) if
+		 * RX descriptor throttling is not used.
+		 * Disable the interrupt; it will be re-enabled in the
+		 * poll method.
+		 */
+		if (likely(napi_schedule_prep(&priv->napi))) {
+			bcmgenet_intrl2_0_writel(priv,
+				UMAC_IRQ_RXDMA_BDONE, INTRL2_CPU_MASK_SET);
+			__napi_schedule(&priv->napi);
+		}
+	}
+	if (priv->irq0_stat &
+			(UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE)) {
+		/* Tx reclaim */
+		bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]);
+	}
+	if (priv->irq0_stat & (UMAC_IRQ_PHY_DET_R |
+				UMAC_IRQ_PHY_DET_F |
+				UMAC_IRQ_LINK_UP |
+				UMAC_IRQ_LINK_DOWN |
+				UMAC_IRQ_HFB_SM |
+				UMAC_IRQ_HFB_MM |
+				UMAC_IRQ_MPD_R)) {
+		/* all other interested interrupts handled in bottom half */
+		schedule_work(&priv->bcmgenet_irq_work);
+	}
+
+	if ((priv->hw_params->flags & GENET_HAS_MDIO_INTR) &&
+		priv->irq0_stat & (UMAC_IRQ_MDIO_DONE | UMAC_IRQ_MDIO_ERROR)) {
+		priv->irq0_stat &= ~(UMAC_IRQ_MDIO_DONE | UMAC_IRQ_MDIO_ERROR);
+		wake_up(&priv->wq);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static void bcmgenet_umac_reset(struct bcmgenet_priv *priv)
+{
+	u32 reg;
+
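+	/* Assert and then de-assert bit 1 of the RBUF flush control
+	 * register to put the UniMAC through a reset cycle (see the
+	 * RBUF_FLUSH_CTRL.umac_sw_rst note in reset_umac()).
+	 */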
+	reg = bcmgenet_rbuf_ctrl_get(priv);
+	reg |= BIT(1);
+	bcmgenet_rbuf_ctrl_set(priv, reg);
+	udelay(10);
+
+	reg &= ~BIT(1);
+	bcmgenet_rbuf_ctrl_set(priv, reg);
+	udelay(10);
+}
+
+static void bcmgenet_set_hw_addr(struct bcmgenet_priv *priv,
+				  unsigned char *addr)
+{
+	bcmgenet_umac_writel(priv, (addr[0] << 24) | (addr[1] << 16) |
+			(addr[2] << 8) | addr[3], UMAC_MAC0);
+	bcmgenet_umac_writel(priv, (addr[4] << 8) | addr[5], UMAC_MAC1);
+}
+
+static int bcmgenet_wol_resume(struct bcmgenet_priv *priv)
+{
+	int ret;
+
+	/* From WOL-enabled suspend, switch to regular clock */
+	clk_disable(priv->clk_wol);
+	/* init umac registers to synchronize s/w with h/w */
+	ret = init_umac(priv);
+	if (ret)
+		return ret;
+
+	if (priv->phydev)
+		phy_init_hw(priv->phydev);
+	/* Speed settings must be restored */
+	bcmgenet_mii_config(priv->dev);
+
+	return 0;
+}
+
+/* Returns a reusable dma control register value */
+static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
+{
+	u32 reg;
+	u32 dma_ctrl;
+
+	/* disable DMA */
+	dma_ctrl = 1 << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT) | DMA_EN;
+	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	reg &= ~dma_ctrl;
+	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+
+	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+	reg &= ~dma_ctrl;
+	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+
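+	/* Pulse UMAC_TX_FLUSH to flush any stale data out of the UniMAC
+	 * TX path while the DMA engines are disabled.
+	 */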
+	bcmgenet_umac_writel(priv, 1, UMAC_TX_FLUSH);
+	udelay(10);
+	bcmgenet_umac_writel(priv, 0, UMAC_TX_FLUSH);
+
+	return dma_ctrl;
+}
+
+static void bcmgenet_enable_dma(struct bcmgenet_priv *priv, u32 dma_ctrl)
+{
+	u32 reg;
+
+	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+	reg |= dma_ctrl;
+	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+
+	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	reg |= dma_ctrl;
+	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+}
+
+static int bcmgenet_open(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	unsigned long dma_ctrl;
+	u32 reg;
+	int ret;
+
+	netif_dbg(priv, ifup, dev, "bcmgenet_open\n");
+
+	/* Turn on the clock */
+	if (!IS_ERR(priv->clk))
+		clk_prepare_enable(priv->clk);
+
+	/* take MAC out of reset */
+	bcmgenet_umac_reset(priv);
+
+	ret = init_umac(priv);
+	if (ret)
+		goto err_clk_disable;
+
+	/* disable ethernet MAC while updating its registers */
+	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+	reg &= ~(CMD_TX_EN | CMD_RX_EN);
+	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+
+	bcmgenet_set_hw_addr(priv, dev->dev_addr);
+
+	if (priv->wol_enabled) {
+		ret = bcmgenet_wol_resume(priv);
+		if (ret)
+			return ret;
+	}
+
+	if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL) {
+		reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+		reg |= EXT_ENERGY_DET_MASK;
+		bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+	}
+
+	if (test_and_clear_bit(GENET_POWER_WOL_MAGIC, &priv->wol_enabled))
+		bcmgenet_power_up(priv, GENET_POWER_WOL_MAGIC);
+	if (test_and_clear_bit(GENET_POWER_WOL_ACPI, &priv->wol_enabled))
+		bcmgenet_power_up(priv, GENET_POWER_WOL_ACPI);
+
+	/* Disable RX/TX DMA and flush TX queues */
+	dma_ctrl = bcmgenet_dma_disable(priv);
+
+	/* Reinitialize TDMA and RDMA and SW housekeeping */
+	ret = bcmgenet_init_dma(priv);
+	if (ret) {
+		netdev_err(dev, "failed to initialize DMA\n");
+		goto err_fini_dma;
+	}
+
+	/* Always enable ring 16 - descriptor ring */
+	bcmgenet_enable_dma(priv, dma_ctrl);
+
+	ret = request_irq(priv->irq0, bcmgenet_isr0, IRQF_SHARED,
+			dev->name, priv);
+	if (ret < 0) {
+		netdev_err(dev, "can't request IRQ %d\n", priv->irq0);
+		goto err_fini_dma;
+	}
+
+	ret = request_irq(priv->irq1, bcmgenet_isr1, IRQF_SHARED,
+				dev->name, priv);
+	if (ret < 0) {
+		netdev_err(dev, "can't request IRQ %d\n", priv->irq1);
+		goto err_irq0;
+	}
+
+	/* Start the network engine */
+	napi_enable(&priv->napi);
+
+	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+	reg |= (CMD_TX_EN | CMD_RX_EN);
+	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+
+	/* Make sure we reflect the value of CRC_CMD_FWD */
+	priv->crc_fwd_en = !!(reg & CMD_CRC_FWD);
+
+	device_set_wakeup_capable(&dev->dev, 1);
+
+	if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL)
+		bcmgenet_power_up(priv, GENET_POWER_PASSIVE);
+
+	netif_tx_start_all_queues(dev);
+
+	if (priv->phydev)
+		phy_start(priv->phydev);
+
+	return 0;
+
+err_irq0:
+	free_irq(priv->irq0, dev);
+err_fini_dma:
+	bcmgenet_fini_dma(priv);
+err_clk_disable:
+	if (!IS_ERR(priv->clk))
+		clk_disable_unprepare(priv->clk);
+	return ret;
+}
+
+static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv)
+{
+	int timeout = 0;
+	u32 reg;
+
+	/* Disable TDMA to stop adding more frames to the TX DMA */
+	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	reg &= ~DMA_EN;
+	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+
+	/* Check TDMA status register to confirm TDMA is disabled */
+	while (!(bcmgenet_tdma_readl(priv, DMA_STATUS) & DMA_DISABLED)) {
+		if (timeout++ == 5000) {
+			netdev_warn(priv->dev,
+				"Timed out while disabling TX DMA\n");
+			return -ETIMEDOUT;
+		}
+		udelay(1);
+	}
+
+	/* Wait 10ms for packet drain in both tx and rx dma */
+	usleep_range(10000, 20000);
+
+	/* Disable RDMA */
+	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+	reg &= ~DMA_EN;
+	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+
+	timeout = 0;
+	/* Check RDMA status register to confirm RDMA is disabled */
+	while (!(bcmgenet_rdma_readl(priv, DMA_STATUS) & DMA_DISABLED)) {
+		if (timeout++ == 5000) {
+			netdev_warn(priv->dev,
+				"Timed out while disabling RX DMA\n");
+			return -ETIMEDOUT;
+		}
+		udelay(1);
+	}
+
+	return 0;
+}
+
+static int bcmgenet_close(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int ret;
+	u32 reg;
+
+	netif_dbg(priv, ifdown, dev, "bcmgenet_close\n");
+
+	if (priv->phydev)
+		phy_stop(priv->phydev);
+
+	/* Disable MAC receive */
+	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+	reg &= ~CMD_RX_EN;
+	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+
+	netif_tx_stop_all_queues(dev);
+
+	ret = bcmgenet_dma_teardown(priv);
+	if (ret)
+		return ret;
+
+	/* Disable MAC transmit; TX DMA must be disabled before this point */
+	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+	reg &= ~CMD_TX_EN;
+	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+
+	napi_disable(&priv->napi);
+
+	/* tx reclaim */
+	bcmgenet_tx_reclaim_all(dev);
+	bcmgenet_fini_dma(priv);
+
+	free_irq(priv->irq0, priv);
+	free_irq(priv->irq1, priv);
+
+	/* Wait for pending work items to complete - we are stopping
+	 * the clock now. Since interrupts are disabled, no new work
+	 * will be scheduled.
+	 */
+	cancel_work_sync(&priv->bcmgenet_irq_work);
+
+	if (device_may_wakeup(&dev->dev)) {
+		if (priv->wolopts & (WAKE_MAGIC | WAKE_MAGICSECURE))
+			bcmgenet_power_down(priv, GENET_POWER_WOL_MAGIC);
+		if (priv->wolopts & WAKE_ARP)
+			bcmgenet_power_down(priv, GENET_POWER_WOL_ACPI);
+	} else if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL)
+		bcmgenet_power_down(priv, GENET_POWER_PASSIVE);
+
+	if (priv->wol_enabled)
+		clk_enable(priv->clk_wol);
+
+	if (!IS_ERR(priv->clk))
+		clk_disable_unprepare(priv->clk);
+
+	return 0;
+}
+
+static void bcmgenet_timeout(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	BUG_ON(dev == NULL);
+
+	netif_dbg(priv, tx_err, dev, "bcmgenet_timeout\n");
+
+	dev->trans_start = jiffies;
+
+	dev->stats.tx_errors++;
+
+	netif_tx_wake_all_queues(dev);
+}
+
+#define MAX_MC_COUNT	16
+
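+/* Each MDF filter entry occupies two consecutive UMAC_MDF_ADDR words (the
+ * first holds the top two address bytes, the second the remaining four); the
+ * matching enable bits in UMAC_MDF_CTRL are indexed from the MSB down.
+ */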
+static inline void bcmgenet_set_mdf_addr(struct bcmgenet_priv *priv,
+					 unsigned char *addr,
+					 int *i,
+					 int *mc)
+{
+	u32 reg;
+
+	bcmgenet_umac_writel(priv, addr[0] << 8 | addr[1],
+			UMAC_MDF_ADDR + (*i * 4));
+	bcmgenet_umac_writel(priv,
+			addr[2] << 24 | addr[3] << 16 |
+			addr[4] << 8 | addr[5],
+			UMAC_MDF_ADDR + ((*i + 1) * 4));
+	reg = bcmgenet_umac_readl(priv, UMAC_MDF_CTRL);
+	reg |= (1 << (MAX_MC_COUNT - *mc));
+	bcmgenet_umac_writel(priv, reg, UMAC_MDF_CTRL);
+	*i += 2;
+	(*mc)++;
+}
+
+static void bcmgenet_set_rx_mode(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct netdev_hw_addr *ha;
+	int i, mc;
+	u32 reg;
+
+	netif_dbg(priv, hw, dev, "%s: %08X\n", __func__, dev->flags);
+
+	/* Promiscuous mode */
+	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+	if (dev->flags & IFF_PROMISC) {
+		reg |= CMD_PROMISC;
+		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+		bcmgenet_umac_writel(priv, 0, UMAC_MDF_CTRL);
+		return;
+	} else {
+		reg &= ~CMD_PROMISC;
+		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+	}
+
+	/* UniMac doesn't support ALLMULTI */
+	if (dev->flags & IFF_ALLMULTI)
+		return;
+
+	/* update MDF filter */
+	i = 0;
+	mc = 0;
+	/* Broadcast */
+	bcmgenet_set_mdf_addr(priv, dev->broadcast, &i, &mc);
+	/* our own address */
+	bcmgenet_set_mdf_addr(priv, dev->dev_addr, &i, &mc);
+	/* Unicast list*/
+	if (netdev_uc_count(dev) > (MAX_MC_COUNT - mc))
+		return;
+
+	if (!netdev_uc_empty(dev))
+		netdev_for_each_uc_addr(ha, dev)
+			bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc);
+	/* Multicast */
+	if (netdev_mc_empty(dev) || netdev_mc_count(dev) >= (MAX_MC_COUNT - mc))
+		return;
+
+	netdev_for_each_mc_addr(ha, dev)
+		bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc);
+}
+
+/* Set the hardware MAC address. */
+static int bcmgenet_set_mac_addr(struct net_device *dev, void *p)
+{
+	struct sockaddr *addr = p;
+
+	if (netif_running(dev))
+		return -EBUSY;
+
+	ether_addr_copy(dev->dev_addr, addr->sa_data);
+
+	return 0;
+}
+
+static u16 bcmgenet_select_queue(struct net_device *dev,
+		struct sk_buff *skb, void *accel_priv)
+{
+	return netif_is_multiqueue(dev) ? skb->queue_mapping : 0;
+}
+
+static const struct net_device_ops bcmgenet_netdev_ops = {
+	.ndo_open = bcmgenet_open,
+	.ndo_stop = bcmgenet_close,
+	.ndo_start_xmit = bcmgenet_xmit,
+	.ndo_select_queue = bcmgenet_select_queue,
+	.ndo_tx_timeout = bcmgenet_timeout,
+	.ndo_set_rx_mode = bcmgenet_set_rx_mode,
+	.ndo_set_mac_address = bcmgenet_set_mac_addr,
+	.ndo_do_ioctl = bcmgenet_ioctl,
+	.ndo_set_features = bcmgenet_set_features,
+};
+
+/* Array of GENET hardware parameters/characteristics */
+static struct bcmgenet_hw_params bcmgenet_hw_params[] = {
+	[GENET_V1] = {
+		.tx_queues = 0,
+		.rx_queues = 0,
+		.bds_cnt = 0,
+		.bp_in_en_shift = 16,
+		.bp_in_mask = 0xffff,
+		.hfb_filter_cnt = 16,
+		.qtag_mask = 0x1F,
+		.hfb_offset = 0x1000,
+		.rdma_offset = 0x2000,
+		.tdma_offset = 0x3000,
+		.words_per_bd = 2,
+	},
+	[GENET_V2] = {
+		.tx_queues = 4,
+		.rx_queues = 4,
+		.bds_cnt = 32,
+		.bp_in_en_shift = 16,
+		.bp_in_mask = 0xffff,
+		.hfb_filter_cnt = 16,
+		.qtag_mask = 0x1F,
+		.tbuf_offset = 0x0600,
+		.hfb_offset = 0x1000,
+		.hfb_reg_offset = 0x2000,
+		.rdma_offset = 0x3000,
+		.tdma_offset = 0x4000,
+		.words_per_bd = 2,
+		.flags = GENET_HAS_EXT,
+	},
+	[GENET_V3] = {
+		.tx_queues = 4,
+		.rx_queues = 4,
+		.bds_cnt = 32,
+		.bp_in_en_shift = 17,
+		.bp_in_mask = 0x1ffff,
+		.hfb_filter_cnt = 48,
+		.qtag_mask = 0x3F,
+		.tbuf_offset = 0x0600,
+		.hfb_offset = 0x8000,
+		.hfb_reg_offset = 0xfc00,
+		.rdma_offset = 0x10000,
+		.tdma_offset = 0x11000,
+		.words_per_bd = 2,
+		.flags = GENET_HAS_EXT | GENET_HAS_MDIO_INTR,
+	},
+	[GENET_V4] = {
+		.tx_queues = 4,
+		.rx_queues = 4,
+		.bds_cnt = 32,
+		.bp_in_en_shift = 17,
+		.bp_in_mask = 0x1ffff,
+		.hfb_filter_cnt = 48,
+		.qtag_mask = 0x3F,
+		.tbuf_offset = 0x0600,
+		.hfb_offset = 0x8000,
+		.hfb_reg_offset = 0xfc00,
+		.rdma_offset = 0x2000,
+		.tdma_offset = 0x4000,
+		.words_per_bd = 3,
+		.flags = GENET_HAS_40BITS | GENET_HAS_EXT | GENET_HAS_MDIO_INTR,
+	},
+};
+
+/* Infer hardware parameters from the detected GENET version */
+static void bcmgenet_set_hw_params(struct bcmgenet_priv *priv)
+{
+	struct bcmgenet_hw_params *params;
+	u32 reg;
+	u8 major;
+
+	if (GENET_IS_V4(priv)) {
+		bcmgenet_dma_regs = bcmgenet_dma_regs_v3plus;
+		genet_dma_ring_regs = genet_dma_ring_regs_v4;
+		priv->dma_rx_chk_bit = DMA_RX_CHK_V3PLUS;
+		priv->version = GENET_V4;
+	} else if (GENET_IS_V3(priv)) {
+		bcmgenet_dma_regs = bcmgenet_dma_regs_v3plus;
+		genet_dma_ring_regs = genet_dma_ring_regs_v123;
+		priv->dma_rx_chk_bit = DMA_RX_CHK_V3PLUS;
+		priv->version = GENET_V3;
+	} else if (GENET_IS_V2(priv)) {
+		bcmgenet_dma_regs = bcmgenet_dma_regs_v2;
+		genet_dma_ring_regs = genet_dma_ring_regs_v123;
+		priv->dma_rx_chk_bit = DMA_RX_CHK_V12;
+		priv->version = GENET_V2;
+	} else if (GENET_IS_V1(priv)) {
+		bcmgenet_dma_regs = bcmgenet_dma_regs_v1;
+		genet_dma_ring_regs = genet_dma_ring_regs_v123;
+		priv->dma_rx_chk_bit = DMA_RX_CHK_V12;
+		priv->version = GENET_V1;
+	}
+
+	/* enum genet_version starts at 1 */
+	priv->hw_params = &bcmgenet_hw_params[priv->version];
+	params = priv->hw_params;
+
+	/* Read GENET HW version */
+	reg = bcmgenet_sys_readl(priv, SYS_REV_CTRL);
+	major = (reg >> 24 & 0x0f);
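+	/* Normalize the raw major revision so that it can be compared
+	 * against the GENET_Vx enum values configured above.
+	 */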
+	if (major == 5)
+		major = 4;
+	else if (major == 0)
+		major = 1;
+	if (major != priv->version) {
+		dev_err(&priv->pdev->dev,
+			"GENET version mismatch, got: %d, configured for: %d\n",
+			major, priv->version);
+	}
+
+	/* Print the GENET core version */
+	dev_info(&priv->pdev->dev, "GENET " GENET_VER_FMT,
+		major, (reg >> 16) & 0x0f, reg & 0xffff);
+
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+	if (!(params->flags & GENET_HAS_40BITS))
+		pr_warn("GENET does not support 40-bits PA\n");
+#endif
+
+	pr_debug("Configuration for version: %d\n"
+		"TXq: %1d, RXq: %1d, BDs: %1d\n"
+		"BP << en: %2d, BP msk: 0x%05x\n"
+		"HFB count: %2d, QTAQ msk: 0x%05x\n"
+		"TBUF: 0x%04x, HFB: 0x%04x, HFBreg: 0x%04x\n"
+		"RDMA: 0x%05x, TDMA: 0x%05x\n"
+		"Words/BD: %d\n",
+		priv->version,
+		params->tx_queues, params->rx_queues, params->bds_cnt,
+		params->bp_in_en_shift, params->bp_in_mask,
+		params->hfb_filter_cnt, params->qtag_mask,
+		params->tbuf_offset, params->hfb_offset,
+		params->hfb_reg_offset,
+		params->rdma_offset, params->tdma_offset,
+		params->words_per_bd);
+}
+
+static int bcmgenet_probe(struct platform_device *pdev)
+{
+	struct device_node *dn = pdev->dev.of_node;
+	struct bcmgenet_priv *priv;
+	struct net_device *dev;
+	const void *macaddr;
+	struct resource *r;
+	int err = -EIO;
+
+	/* Up to GENET_MAX_MQ_CNT + 1 TX queues and a single RX queue */
+	dev = alloc_etherdev_mqs(sizeof(*priv), GENET_MAX_MQ_CNT + 1, 1);
+	if (!dev) {
+		dev_err(&pdev->dev, "can't allocate net device\n");
+		return -ENOMEM;
+	}
+
+	priv = netdev_priv(dev);
+	priv->irq0 = platform_get_irq(pdev, 0);
+	priv->irq1 = platform_get_irq(pdev, 1);
+	if (!priv->irq0 || !priv->irq1) {
+		dev_err(&pdev->dev, "can't find IRQs\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	macaddr = of_get_mac_address(dn);
+	if (!macaddr) {
+		dev_err(&pdev->dev, "can't find MAC address\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	priv->base = devm_request_and_ioremap(&pdev->dev, r);
+	if (!priv->base) {
+		dev_err(&pdev->dev, "can't ioremap\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	dev->base_addr = (unsigned long)priv->base;
+	SET_NETDEV_DEV(dev, &pdev->dev);
+	dev_set_drvdata(&pdev->dev, dev);
+	ether_addr_copy(dev->dev_addr, macaddr);
+	dev->irq = priv->irq0;
+	dev->watchdog_timeo = 2 * HZ;
+	SET_ETHTOOL_OPS(dev, &bcmgenet_ethtool_ops);
+	dev->netdev_ops = &bcmgenet_netdev_ops;
+	netif_napi_add(dev, &priv->napi, bcmgenet_poll, 64);
+
+	priv->msg_enable = netif_msg_init(-1, GENET_MSG_DEFAULT);
+
+	/* Set hardware features */
+	dev->hw_features |= NETIF_F_SG | NETIF_F_IP_CSUM |
+		NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM;
+
+	/* Set the needed headroom to account for features that may be
+	 * enabled or disabled at runtime
+	 */
+	dev->needed_headroom += 64;
+
+	netdev_boot_setup_check(dev);
+
+	priv->dev = dev;
+	priv->pdev = pdev;
+
+	if (of_device_is_compatible(dn, "brcm,genet-v4"))
+		priv->version = GENET_V4;
+	else if (of_device_is_compatible(dn, "brcm,genet-v3"))
+		priv->version = GENET_V3;
+	else if (of_device_is_compatible(dn, "brcm,genet-v2"))
+		priv->version = GENET_V2;
+	else if (of_device_is_compatible(dn, "brcm,genet-v1"))
+		priv->version = GENET_V1;
+	else {
+		dev_err(&pdev->dev, "unknown GENET version\n");
+		err = -EINVAL;
+		goto err;
+	}
+
+	bcmgenet_set_hw_params(priv);
+
+	spin_lock_init(&priv->lock);
+	spin_lock_init(&priv->bh_lock);
+	mutex_init(&priv->mib_mutex);
+	/* Mii wait queue */
+	init_waitqueue_head(&priv->wq);
+	/* Always use RX_BUF_LENGTH (2KB) buffer for all chips */
+	priv->rx_buf_len = RX_BUF_LENGTH;
+	INIT_WORK(&priv->bcmgenet_irq_work, bcmgenet_irq_task);
+
+	priv->clk = devm_clk_get(&priv->pdev->dev, "enet");
+	if (IS_ERR(priv->clk))
+		dev_warn(&priv->pdev->dev, "failed to get enet clock\n");
+
+	priv->clk_wol = devm_clk_get(&priv->pdev->dev, "enet-wol");
+	if (IS_ERR(priv->clk_wol))
+		dev_warn(&priv->pdev->dev, "failed to get enet-wol clock\n");
+
+	if (!IS_ERR(priv->clk))
+		clk_prepare_enable(priv->clk);
+
+	err = reset_umac(priv);
+	if (err)
+		goto err_clk_disable;
+
+	err = bcmgenet_mii_init(dev);
+	if (err)
+		goto err_clk_disable;
+
+	/* Set up the number of real queues + 1 (GENET_V1 has 0 hardware
+	 * queues, just the ring 16 descriptor-based TX queue)
+	 */
+	netif_set_real_num_tx_queues(priv->dev, priv->hw_params->tx_queues + 1);
+	netif_set_real_num_rx_queues(priv->dev, priv->hw_params->rx_queues + 1);
+
+	err = register_netdev(dev);
+	if (err)
+		goto err_clk_disable;
+
+	/* Turn off the clocks */
+	if (!IS_ERR(priv->clk))
+		clk_disable_unprepare(priv->clk);
+
+	return err;
+
+err_clk_disable:
+	if (!IS_ERR(priv->clk))
+		clk_disable_unprepare(priv->clk);
+err:
+	free_netdev(dev);
+	return err;
+}
+
+static int bcmgenet_remove(struct platform_device *pdev)
+{
+	struct bcmgenet_priv *priv = dev_to_priv(&pdev->dev);
+
+	dev_set_drvdata(&pdev->dev, NULL);
+	unregister_netdev(priv->dev);
+	bcmgenet_mii_exit(priv->dev);
+	free_netdev(priv->dev);
+
+	return 0;
+}
+
+static const struct of_device_id bcmgenet_match[] = {
+	{ .compatible = "brcm,genet-v1", },
+	{ .compatible = "brcm,genet-v2", },
+	{ .compatible = "brcm,genet-v3", },
+	{ .compatible = "brcm,genet-v4", },
+	{ },
+};
+
+static struct platform_driver bcmgenet_driver = {
+	.probe	= bcmgenet_probe,
+	.remove	= bcmgenet_remove,
+	.driver	= {
+		.name	= "bcmgenet",
+		.owner	= THIS_MODULE,
+		.of_match_table = bcmgenet_match,
+	},
+};
+module_platform_driver(bcmgenet_driver);
+
+MODULE_AUTHOR("Broadcom Corporation");
+MODULE_DESCRIPTION("Broadcom GENET Ethernet controller driver");
+MODULE_ALIAS("platform:bcmgenet");
+MODULE_LICENSE("GPL");
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH net-next v2 07/10] net: bcmgenet: add MDIO routines
  2014-02-13  5:29 ` Florian Fainelli
@ 2014-02-13  5:29   ` Florian Fainelli
  -1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

This patch adds support for configuring the port multiplexer hardware
which resides in front of the GENET Ethernet MAC controller. This allows
us to support:

- internal PHYs (using drivers/net/phy/bcm7xxx.c)
- MoCA PHYs which are an entirely separate hardware block not covered
  here
- external PHYs and switches

Note that MoCA and switches are currently supported using the emulated
"fixed PHY" driver.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- fixed MDIO crash/warning when Device Tree probing fails
- removed the use of priv->phy_type and use priv->phy_interface
  directly

 drivers/net/ethernet/broadcom/genet/bcmmii.c | 481 +++++++++++++++++++++++++++
 1 file changed, 481 insertions(+)
 create mode 100644 drivers/net/ethernet/broadcom/genet/bcmmii.c

diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
new file mode 100644
index 0000000..bf8e3e0
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
@@ -0,0 +1,481 @@
+/*
+ * Broadcom GENET MDIO routines
+ *
+ * Copyright (c) 2014 Broadcom Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+
+
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/wait.h>
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+#include <linux/bitops.h>
+#include <linux/netdevice.h>
+#include <linux/platform_device.h>
+#include <linux/phy.h>
+#include <linux/phy_fixed.h>
+#include <linux/brcmphy.h>
+#include <linux/of.h>
+#include <linux/of_net.h>
+#include <linux/of_mdio.h>
+
+#include "bcmgenet.h"
+
+/* read a value from the MII */
+static int bcmgenet_mii_read(struct mii_bus *bus, int phy_id, int location)
+{
+	int ret;
+	struct net_device *dev = bus->priv;
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	u32 reg;
+
+	bcmgenet_umac_writel(priv, (MDIO_RD | (phy_id << MDIO_PMD_SHIFT) |
+			(location << MDIO_REG_SHIFT)), UMAC_MDIO_CMD);
+	/* Start MDIO transaction*/
+	reg = bcmgenet_umac_readl(priv, UMAC_MDIO_CMD);
+	reg |= MDIO_START_BUSY;
+	bcmgenet_umac_writel(priv, reg, UMAC_MDIO_CMD);
+	wait_event_timeout(priv->wq,
+			!(bcmgenet_umac_readl(priv, UMAC_MDIO_CMD)
+				& MDIO_START_BUSY),
+			HZ / 100);
+	ret = bcmgenet_umac_readl(priv, UMAC_MDIO_CMD);
+
+	if (ret & MDIO_READ_FAIL)
+		return -EIO;
+
+	return ret & 0xffff;
+}
+
+/* write a value to the MII */
+static int bcmgenet_mii_write(struct mii_bus *bus, int phy_id,
+			int location, u16 val)
+{
+	struct net_device *dev = bus->priv;
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	u32 reg;
+
+	bcmgenet_umac_writel(priv, (MDIO_WR | (phy_id << MDIO_PMD_SHIFT) |
+			(location << MDIO_REG_SHIFT) | (0xffff & val)),
+			UMAC_MDIO_CMD);
+	reg = bcmgenet_umac_readl(priv, UMAC_MDIO_CMD);
+	reg |= MDIO_START_BUSY;
+	bcmgenet_umac_writel(priv, reg, UMAC_MDIO_CMD);
+	wait_event_timeout(priv->wq,
+			!(bcmgenet_umac_readl(priv, UMAC_MDIO_CMD) &
+				MDIO_START_BUSY),
+			HZ / 100);
+
+	return 0;
+}
+
+/* Set up the netdev link state when the PHY link status changes and
+ * update the UMAC and RGMII blocks when the link is up
+ */
+static void bcmgenet_mii_setup(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct phy_device *phydev = priv->phydev;
+	u32 reg, cmd_bits = 0;
+	unsigned int status_changed = 0;
+
+	if (priv->old_link != phydev->link) {
+		status_changed = 1;
+		priv->old_link = phydev->link;
+	}
+
+	if (phydev->link) {
+		/* Program the UMAC and RGMII blocks based on the established
+		 * link speed, pause, and duplex.
+		 * The speed set in umac->cmd tells the RGMII block which
+		 * clock to use for transmit: 25MHz (100Mbps) or 125MHz
+		 * (1Gbps). The receive clock is provided by the PHY.
+		 */
+		reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
+		reg &= ~OOB_DISABLE;
+		reg |= RGMII_LINK;
+		bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
+
+		/* speed */
+		if (phydev->speed == SPEED_1000)
+			cmd_bits = UMAC_SPEED_1000;
+		else if (phydev->speed == SPEED_100)
+			cmd_bits = UMAC_SPEED_100;
+		else
+			cmd_bits = UMAC_SPEED_10;
+		cmd_bits <<= CMD_SPEED_SHIFT;
+
+		if (priv->old_duplex != phydev->duplex) {
+			status_changed = 1;
+			priv->old_duplex = phydev->duplex;
+		}
+
+		/* duplex */
+		if (phydev->duplex != DUPLEX_FULL)
+			cmd_bits |= CMD_HD_EN;
+
+		if (priv->old_pause != phydev->pause) {
+			status_changed = 1;
+			priv->old_pause = phydev->pause;
+		}
+
+		/* pause capability */
+		if (!phydev->pause)
+			cmd_bits |= CMD_RX_PAUSE_IGNORE | CMD_TX_PAUSE_IGNORE;
+
+		reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+		reg &= ~((CMD_SPEED_MASK << CMD_SPEED_SHIFT) |
+			       CMD_HD_EN |
+			       CMD_RX_PAUSE_IGNORE | CMD_TX_PAUSE_IGNORE);
+		reg |= cmd_bits;
+		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+	}
+
+	if (status_changed)
+		phy_print_status(phydev);
+}
+
+void bcmgenet_mii_reset(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	if (priv->phydev) {
+		phy_init_hw(priv->phydev);
+		phy_start_aneg(priv->phydev);
+	}
+}
+
+static void bcmgenet_ephy_power_up(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	u32 reg = 0;
+
+	/* EXT_GPHY_CTRL is only valid for GENETv4 and onward */
+	if (!GENET_IS_V4(priv))
+		return;
+
+	reg = bcmgenet_ext_readl(priv, EXT_GPHY_CTRL);
+	reg &= ~(EXT_CFG_IDDQ_BIAS | EXT_CFG_PWR_DOWN);
+	reg |= EXT_GPHY_RESET;
+	bcmgenet_ext_writel(priv, reg, EXT_GPHY_CTRL);
+	mdelay(2);
+
+	reg &= ~EXT_GPHY_RESET;
+	bcmgenet_ext_writel(priv, reg, EXT_GPHY_CTRL);
+	udelay(20);
+}
+
+static int bcmgenet_mii_probe(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct phy_device *phydev;
+	unsigned int phy_flags;
+
+	if (priv->phydev) {
+		pr_info("PHY already attached\n");
+		return 0;
+	}
+
+	phy_flags = PHY_BRCM_100MBPS_WAR;
+
+	/* workarounds are only needed for 100Mbps PHYs */
+	if (priv->phy_speed == SPEED_1000)
+		phy_flags = 0;
+
+	/* workarounds are only needed for some 40nm chips, exclude
+	 * GENET v1
+	 */
+	if (GENET_IS_V1(priv))
+		phy_flags = 0;
+
+	if (priv->phy_dn)
+		phydev = of_phy_connect(dev, priv->phy_dn,
+					bcmgenet_mii_setup, phy_flags,
+					priv->phy_interface);
+	else
+		phydev = of_phy_connect_fixed_link(dev,
+					bcmgenet_mii_setup,
+					priv->phy_interface);
+
+	if (!phydev) {
+		pr_err("could not attach to PHY\n");
+		return -ENODEV;
+	}
+
+	phydev->supported &= priv->phy_supported;
+	/* Adjust advertised speeds based on configured speed */
+	if (priv->phy_speed == SPEED_1000)
+		phydev->advertising = PHY_GBIT_FEATURES;
+	else
+		phydev->advertising = PHY_BASIC_FEATURES;
+
+	pr_info("attached PHY at address %d [%s]\n",
+			phydev->addr, phydev->drv->name);
+
+	priv->old_link = -1;
+	priv->old_duplex = -1;
+	priv->old_pause = -1;
+	priv->phydev = phydev;
+
+	return 0;
+}
+
+static int bcmgenet_mii_alloc(struct bcmgenet_priv *priv)
+{
+	struct mii_bus *bus;
+	int ret = 0;
+
+	if (priv->mii_bus)
+		return 0;
+
+	priv->mii_bus = mdiobus_alloc();
+	if (!priv->mii_bus) {
+		pr_err("failed to allocate\n");
+		return -ENOMEM;
+	}
+
+	bus = priv->mii_bus;
+	bus->priv = priv->dev;
+	bus->name = "bcmgenet MII bus";
+	bus->parent = &priv->pdev->dev;
+	bus->read = bcmgenet_mii_read;
+	bus->write = bcmgenet_mii_write;
+	snprintf(bus->id, MII_BUS_ID_SIZE, "%s-%d",
+			priv->pdev->name, priv->pdev->id);
+
+	bus->irq = kzalloc(sizeof(int) * PHY_MAX_ADDR, GFP_KERNEL);
+	if (!bus->irq) {
+		ret = -ENOMEM;
+		goto out_mdio_free;
+	}
+
+	/* The internal PHY has its link interrupts routed to the
+	 * Ethernet MAC ISRs
+	 */
+	if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL)
+		bus->irq[priv->phy_addr] = PHY_IGNORE_INTERRUPT;
+	else
+		bus->irq[priv->phy_addr] = PHY_POLL;
+
+	return 0;
+
+out_mdio_free:
+	mdiobus_free(priv->mii_bus);
+	return ret;
+}
+
+static void bcmgenet_internal_phy_setup(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	u32 reg;
+
+	/* Power up EPHY */
+	bcmgenet_ephy_power_up(dev);
+	/* enable APD */
+	reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+	reg |= EXT_PWR_DN_EN_LD;
+	bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+	bcmgenet_mii_reset(dev);
+}
+
+static void bcmgenet_moca_phy_setup(struct bcmgenet_priv *priv)
+{
+	u32 reg;
+
+	/* Speed settings are set in bcmgenet_mii_setup() */
+	reg = bcmgenet_sys_readl(priv, SYS_PORT_CTRL);
+	reg |= LED_ACT_SOURCE_MAC;
+	bcmgenet_sys_writel(priv, reg, SYS_PORT_CTRL);
+}
+
+int bcmgenet_mii_config(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct device *kdev = &priv->pdev->dev;
+	const char *phy_name = NULL;
+	u32 id_mode_dis = 0;
+	u32 port_ctrl;
+	u32 reg;
+
+	priv->ext_phy = (priv->phy_interface != PHY_INTERFACE_MODE_INTERNAL) &&
+			(priv->phy_interface != PHY_INTERFACE_MODE_MOCA);
+
+	switch (priv->phy_interface) {
+	case PHY_INTERFACE_MODE_INTERNAL:
+	case PHY_INTERFACE_MODE_MOCA:
+		/* Irrespective of the actually configured PHY speed (100 or
+		 * 1000) GENETv4 only has an internal GPHY so we will just end
+		 * up masking the Gigabit features from what we support, not
+		 * switching to the EPHY
+		 */
+		if (GENET_IS_V4(priv)) {
+			priv->phy_supported = PHY_GBIT_FEATURES;
+			port_ctrl = PORT_MODE_INT_GPHY;
+		} else {
+			priv->phy_supported = PHY_BASIC_FEATURES;
+			port_ctrl = PORT_MODE_INT_EPHY;
+		}
+
+		bcmgenet_sys_writel(priv, port_ctrl, SYS_PORT_CTRL);
+
+		if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL) {
+			phy_name = "internal PHY";
+			bcmgenet_internal_phy_setup(dev);
+		} else if (priv->phy_interface == PHY_INTERFACE_MODE_MOCA) {
+			phy_name = "MoCA";
+			bcmgenet_moca_phy_setup(priv);
+		}
+		break;
+
+	case PHY_INTERFACE_MODE_MII:
+		phy_name = "external MII";
+		priv->phy_supported = PHY_BASIC_FEATURES;
+		bcmgenet_sys_writel(priv,
+				PORT_MODE_EXT_EPHY, SYS_PORT_CTRL);
+		break;
+
+	case PHY_INTERFACE_MODE_REVMII:
+		phy_name = "external RvMII";
+		if (priv->phy_speed == SPEED_100) {
+			priv->phy_supported = PHY_BASIC_FEATURES;
+			port_ctrl = PORT_MODE_EXT_RVMII_25;
+		} else {
+			priv->phy_supported = PHY_GBIT_FEATURES;
+			port_ctrl = PORT_MODE_EXT_RVMII_50;
+		}
+		bcmgenet_sys_writel(priv, port_ctrl, SYS_PORT_CTRL);
+		break;
+
+	case PHY_INTERFACE_MODE_RGMII:
+		/* RGMII_NO_ID: TXC transitions at the same time as TXD
+		 *		(requires PCB or receiver-side delay)
+		 * RGMII:	Add 2ns delay on TXC (90 degree shift)
+		 *
+		 * ID is implicitly disabled for 100Mbps (RG)MII operation.
+		 */
+		id_mode_dis = BIT(16);
+		/* fall through */
+	case PHY_INTERFACE_MODE_RGMII_TXID:
+		if (id_mode_dis)
+			phy_name = "external RGMII (no delay)";
+		else
+			phy_name = "external RGMII (TX delay)";
+		reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
+		reg |= RGMII_MODE_EN | id_mode_dis;
+		bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
+		bcmgenet_sys_writel(priv,
+				PORT_MODE_EXT_GPHY, SYS_PORT_CTRL);
+		priv->phy_supported = PHY_GBIT_FEATURES;
+		/* The MII is set up based on the configured speed; the RGMII
+		 * TX clock in umac->cmd is set by bcmgenet_mii_setup() once
+		 * the link is established.
+		 */
+		break;
+	default:
+		dev_err(kdev, "unknown phy mode: %d\n", priv->phy_interface);
+		return -EINVAL;
+	}
+
+	dev_info(kdev, "configuring instance for %s\n", phy_name);
+
+	return 0;
+}
+
+static int bcmgenet_mii_of_init(struct bcmgenet_priv *priv)
+{
+	struct device_node *dn = priv->pdev->dev.of_node;
+	struct device *kdev = &priv->pdev->dev;
+	struct device_node *mdio_dn;
+	const __be32 *fixed_link;
+	u32 propval;
+	int ret, sz;
+
+	mdio_dn = of_get_next_child(dn, NULL);
+	if (!mdio_dn) {
+		dev_err(kdev, "unable to find MDIO bus node\n");
+		return -ENODEV;
+	}
+
+	ret = of_mdiobus_register(priv->mii_bus, mdio_dn);
+	if (ret) {
+		dev_err(kdev, "failed to register MDIO bus\n");
+		return ret;
+	}
+
+	/* Check if we have an internal or external PHY */
+	priv->phy_dn = of_parse_phandle(dn, "phy-handle", 0);
+	if (priv->phy_dn) {
+		if (!of_property_read_u32(priv->phy_dn, "max-speed", &propval))
+			priv->phy_speed = propval;
+	} else {
+		/* Read the link speed from the fixed-link property */
+		fixed_link = of_get_property(dn, "fixed-link", &sz);
+		if (!fixed_link || sz < sizeof(*fixed_link)) {
+			ret = -ENODEV;
+			goto out;
+		}
+
+		priv->phy_speed = be32_to_cpu(fixed_link[2]);
+	}
+
+	/* Get the link mode */
+	priv->phy_interface = of_get_phy_mode(dn);
+
+	return 0;
+out:
+	mdiobus_unregister(priv->mii_bus);
+	return ret;
+}
+
+int bcmgenet_mii_init(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	int ret;
+
+	ret = bcmgenet_mii_alloc(priv);
+	if (ret)
+		return ret;
+
+	ret = bcmgenet_mii_of_init(priv);
+	if (ret)
+		goto out_free;
+
+	ret = bcmgenet_mii_config(dev);
+	if (ret)
+		goto out;
+
+	ret = bcmgenet_mii_probe(dev);
+	if (ret)
+		goto out;
+
+	return 0;
+
+out:
+	mdiobus_unregister(priv->mii_bus);
+out_free:
+	kfree(priv->mii_bus->irq);
+	mdiobus_free(priv->mii_bus);
+	return ret;
+}
+
+void bcmgenet_mii_exit(struct net_device *dev)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+
+	mdiobus_unregister(priv->mii_bus);
+	kfree(priv->mii_bus->irq);
+	mdiobus_free(priv->mii_bus);
+}
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH net-next v2 08/10] net: bcmgenet: hook into the build system
  2014-02-13  5:29 ` Florian Fainelli
@ 2014-02-13  5:29   ` Florian Fainelli
  -1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

This patch adds a new configuration symbol: CONFIG_BCMGENET which allows
us to build the Broadcom GENET driver and hook the driver files into the
build system.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- rebased

 drivers/net/ethernet/broadcom/Kconfig        | 10 ++++++++++
 drivers/net/ethernet/broadcom/Makefile       |  1 +
 drivers/net/ethernet/broadcom/genet/Makefile |  2 ++
 3 files changed, 13 insertions(+)
 create mode 100644 drivers/net/ethernet/broadcom/genet/Makefile

diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
index 3f97d9f..a489712 100644
--- a/drivers/net/ethernet/broadcom/Kconfig
+++ b/drivers/net/ethernet/broadcom/Kconfig
@@ -60,6 +60,16 @@ config BCM63XX_ENET
 	  This driver supports the ethernet MACs in the Broadcom 63xx
 	  MIPS chipset family (BCM63XX).
 
+config BCMGENET
+	tristate "Broadcom GENET internal MAC support"
+	select MII
+	select PHYLIB
+	select FIXED_PHY
+	select BCM7XXX_PHY
+	help
+	  This driver supports the built-in Ethernet MACs found in the
+	  Broadcom BCM7xxx Set Top Box family chipset.
+
 config BNX2
 	tristate "Broadcom NetXtremeII support"
 	depends on PCI
diff --git a/drivers/net/ethernet/broadcom/Makefile b/drivers/net/ethernet/broadcom/Makefile
index 68efa1a..fd639a0 100644
--- a/drivers/net/ethernet/broadcom/Makefile
+++ b/drivers/net/ethernet/broadcom/Makefile
@@ -4,6 +4,7 @@
 
 obj-$(CONFIG_B44) += b44.o
 obj-$(CONFIG_BCM63XX_ENET) += bcm63xx_enet.o
+obj-$(CONFIG_BCMGENET) += genet/
 obj-$(CONFIG_BNX2) += bnx2.o
 obj-$(CONFIG_CNIC) += cnic.o
 obj-$(CONFIG_BNX2X) += bnx2x/
diff --git a/drivers/net/ethernet/broadcom/genet/Makefile b/drivers/net/ethernet/broadcom/genet/Makefile
new file mode 100644
index 0000000..31f55a9
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/genet/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_BCMGENET) += genet.o
+genet-objs := bcmgenet.o bcmmii.o
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH net-next v2 09/10] Documentation: add Device tree bindings for Broadcom GENET
  2014-02-13  5:29 ` Florian Fainelli
@ 2014-02-13  5:29   ` Florian Fainelli
  -1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

This patch adds the Device Tree bindings for the Broadcom GENET Gigabit
Ethernet controller. Several examples are provided to illustrate the
versatile aspects of the hardware.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- rebased

 .../devicetree/bindings/net/broadcom-bcmgenet.txt  | 111 +++++++++++++++++++++
 1 file changed, 111 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt

diff --git a/Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt b/Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt
new file mode 100644
index 0000000..93c58e9
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt
@@ -0,0 +1,111 @@
+* Broadcom BCM7xxx Ethernet Controller (GENET)
+
+Required properties:
+- compatible: should be "brcm,genet-v1", "brcm,genet-v2", "brcm,genet-v3",
+  "brcm,genet-v4".
+- reg: address and length of the register set for the device.
+- interrupts: interrupt for the device
+- mdio bus node: this node should always be present regardless of the PHY
+  configuration of the GENET instance
+- phy-mode: The interface between the SoC and the PHY (a string that
+  of_get_phy_mode() can understand).
+
+MDIO bus node required properties:
+
+- compatible: should be "brcm,genet-v<N>-mdio"
+- reg: address and length relative to the parent node base register address
+- address-cells: address cell for MDIO bus addressing, should be 1
+- size-cells: size of the cells for MDIO bus addressing, should be 0
+
+Optional properties:
+- phy-handle: A phandle to a phy node defining the PHY address (as the reg
+  property, a single integer), used to describe configurations where a PHY
+  (internal or external) is used.
+
+- fixed-link: When the GENET interface is connected to a MoCA hardware block,
+  when operating in an RGMII to RGMII type of connection, or when the
+  MDIO bus is voluntarily disabled, this property should be used to describe
+  the "fixed link". The property is described as follows:
+
+  fixed-link: <a b c d e> where a is the emulated PHY id - choose any,
+  but keep it unique among all specified fixed-links, b is duplex - 0 half,
+  1 full, c is link speed - d#10/d#100/d#1000, d is pause - 0 no
+  pause, 1 pause, e is asym_pause - 0 no asym_pause, 1 asym_pause.
+
+Internal Gigabit PHY example:
+
+ethernet@f0b60000 {
+	phy-mode = "internal";
+	phy-handle = <&phy1>;
+	mac-address = [ 00 10 18 36 23 1a ];
+	compatible = "brcm,genet-v4";
+	#address-cells = <0x1>;
+	#size-cells = <0x1>;
+	device_type = "ethernet";
+	reg = <0xf0b60000 0xfc4c>;
+	interrupts = <0x0 0x14 0x0 0x0 0x15 0x0>;
+
+	mdio@b60e14 {
+		compatible = "brcm,genet-mdio-v4";
+		#address-cells = <0x1>;
+		#size-cells = <0x0>;
+		reg = <0xb60e14 0x8>;
+
+		phy1: ethernet-phy@1 {
+			device_type = "ethernet-phy";
+			max-speed = <1000>;
+			reg = <0x1>;
+			compatible = "brcm,28nm-gphy", "ethernet-phy-ieee802.3-c22";
+		};
+	};
+};
+
+MoCA interface / MAC to MAC example:
+
+ethernet@f0b80000 {
+	phy-mode = "moca";
+	fixed-link = <1 0 1000 0 0>;
+	mac-address = [ 00 10 18 36 24 1a ];
+	compatible = "brcm,genet-v4";
+	#address-cells = <0x1>;
+	#size-cells = <0x1>;
+	device_type = "ethernet";
+	reg = <0xf0b80000 0xfc4c>;
+	interrupts = <0x0 0x16 0x0 0x0 0x17 0x0>;
+
+	mdio@b80e14 {
+		compatible = "brcm,genet-mdio-v4";
+		#address-cells = <0x1>;
+		#size-cells = <0x0>;
+		reg = <0xb80e14 0x8>;
+	};
+};
+
+
+External MDIO-connected Gigabit PHY/switch:
+
+ethernet@f0ba0000 {
+	phy-mode = "rgmii";
+	phy-handle = <&phy0>;
+	mac-address = [ 00 10 18 36 26 1a ];
+	compatible = "brcm,genet-v4";
+	#address-cells = <0x1>;
+	#size-cells = <0x1>;
+	device_type = "ethernet";
+	reg = <0xf0ba0000 0xfc4c>;
+	interrupts = <0x0 0x18 0x0 0x0 0x19 0x0>;
+
+	mdio@ba0e14 {
+		compatible = "brcm,genet-mdio-v4";
+		#address-cells = <0x1>;
+		#size-cells = <0x0>;
+		reg = <0xba0e14 0x8>;
+
+		phy0: ethernet-phy@0 {
+			device_type = "ethernet-phy";
+			max-speed = <1000>;
+			reg = <0x0>;
+			compatible = "brcm,bcm53125", "ethernet-phy-ieee802.3-c22";
+		};
+	};
+};
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH net-next v2 10/10] MAINTAINERS: add entry for the Broadcom GENET driver
  2014-02-13  5:29 ` Florian Fainelli
@ 2014-02-13  5:29   ` Florian Fainelli
  -1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13  5:29 UTC (permalink / raw)
  To: netdev; +Cc: davem, cernekee, devicetree, Florian Fainelli

Add myself as a maintainer of the Broadcom GENET driver.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Changes since v1:
- rebased

 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 091b50e..5a7b3ec 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1845,6 +1845,12 @@ L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/broadcom/b44.*
 
+BROADCOM GENET ETHERNET DRIVER
+M:	Florian Fainelli <f.fainelli@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	drivers/net/ethernet/broadcom/genet/
+
 BROADCOM BNX2 GIGABIT ETHERNET DRIVER
 M:	Michael Chan <mchan@broadcom.com>
 L:	netdev@vger.kernel.org
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 04/10] net: phy: add Broadcom BCM7xxx internal PHY driver
  2014-02-13  5:29     ` Florian Fainelli
  (?)
@ 2014-02-13 10:34     ` Francois Romieu
  2014-02-13 18:41       ` Florian Fainelli
  -1 siblings, 1 reply; 33+ messages in thread
From: Francois Romieu @ 2014-02-13 10:34 UTC (permalink / raw)
  To: Florian Fainelli; +Cc: netdev, davem, cernekee, devicetree

Florian Fainelli <f.fainelli@gmail.com> :
[...]
> diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c
> new file mode 100644
> index 0000000..6aea6e2
> --- /dev/null
> +++ b/drivers/net/phy/bcm7xxx.c
[...]
> +static int bcm7445_config_init(struct phy_device *phydev)
> +{
> +	int ret;

It could be declared after 'i' below.

> +	const struct bcm7445_regs {

static const

> +		int reg;
> +		u16 value;
> +	} bcm7445_regs_cfg[] = {
> +		/* increases ADC latency by 24ns */
> +		{ 0x17, 0x0038 },
> +		{ 0x15, 0xAB95 },
> +		/* increases internal 1V LDO voltage by 5% */
> +		{ 0x17, 0x2038 },
> +		{ 0x15, 0xBB22 },
> +		/* reduce RX low pass filter corner frequency */
> +		{ 0x17, 0x6038 },
> +		{ 0x15, 0xFFC5 },
> +		/* reduce RX high pass filter corner frequency */
> +		{ 0x17, 0x003a },
> +		{ 0x15, 0x2002 },
> +	};
> +	unsigned int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(bcm7445_regs_cfg); i++) {
> +		ret = phy_write(phydev,
> +				bcm7445_regs_cfg[i].reg,
> +				bcm7445_regs_cfg[i].value);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static void phy_write_exp(struct phy_device *phydev,
> +					u16 reg, u16 value)

static void phy_write_exp(struct phy_device *phydev, u16 reg, u16 value)

> +{
> +	phy_write(phydev, 0x17, 0xf00 | reg);
> +	phy_write(phydev, 0x15, value);
> +}
> +
> +static void phy_write_misc(struct phy_device *phydev,
> +					u16 reg, u16 chl, u16 value)
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ all tabs that don't line up

static void phy_write_misc(struct phy_device *phydev,
			   u16 reg, u16 chl, u16 value)

static void phy_write_misc(struct phy_device *phydev, u16 reg, u16 chl,
			   u16 value)

static void phy_write_misc(struct phy_device *phydev, u16 reg, u16 chl, u16 val)


> +{
> +	int tmp;
> +
> +	phy_write(phydev, 0x18, 0x7);
> +
> +	tmp = phy_read(phydev, 0x18);
> +	tmp |= 0x800;
> +	phy_write(phydev, 0x18, tmp);
> +
> +	tmp = (chl * 0x2000) | reg;
> +	phy_write(phydev, 0x17, tmp);
> +
> +	phy_write(phydev, 0x15, value);

You may use some #define for the 0x15, 0x17 and 0x18 values.
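For instance (just a sketch, the names below are invented; if I remember
correctly brcmphy.h already carries similar MII_BCM54XX_EXP_* definitions
that could be reused instead):

	/* assumed names for the aux control / expansion registers */
	#define MII_BCM7XXX_EXP_DATA	0x15
	#define MII_BCM7XXX_EXP_SEL	0x17
	#define MII_BCM7XXX_AUX_CTL	0x18

	static void phy_write_misc(struct phy_device *phydev,
				   u16 reg, u16 chl, u16 value)
	{
		int tmp;

		phy_write(phydev, MII_BCM7XXX_AUX_CTL, 0x7);

		tmp = phy_read(phydev, MII_BCM7XXX_AUX_CTL);
		tmp |= 0x800;
		phy_write(phydev, MII_BCM7XXX_AUX_CTL, tmp);

		phy_write(phydev, MII_BCM7XXX_EXP_SEL, (chl * 0x2000) | reg);
		phy_write(phydev, MII_BCM7XXX_EXP_DATA, value);
	}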

> +}
> +
> +static int bcm7xxx_28nm_afe_config_init(struct phy_device *phydev)
> +{
> +	/* write AFE_RXCONFIG_0 */
> +	phy_write_misc(phydev, 0x38, 0x0000, 0xeb19);
> +
> +	/* write AFE_RXCONFIG_1 */
> +	phy_write_misc(phydev, 0x38, 0x0001, 0x9a3f);
> +
> +	/* write AFE_RX_LP_COUNTER */
> +	phy_write_misc(phydev, 0x38, 0x0003, 0x7fc7);
> +
> +	/* write AFE_HPF_TRIM_OTHERS */
> +	phy_write_misc(phydev, 0x3A, 0x0000, 0x000b);
> +
> +	/* write AFTE_TX_CONFIG */
> +	phy_write_misc(phydev, 0x39, 0x0000, 0x0800);

Some #define may be welcome to replace the comments.
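Something along these lines would do (a sketch; the register names are
lifted from the comments and the MISC_ADDR() helper is invented here to
carry the (register, channel) pair):

	#define MISC_ADDR(base, channel)	(base), (channel)

	#define AFE_RXCONFIG_0		MISC_ADDR(0x38, 0x0000)
	#define AFE_RXCONFIG_1		MISC_ADDR(0x38, 0x0001)
	#define AFE_RX_LP_COUNTER	MISC_ADDR(0x38, 0x0003)
	#define AFE_HPF_TRIM_OTHERS	MISC_ADDR(0x3a, 0x0000)
	#define AFE_TX_CONFIG		MISC_ADDR(0x39, 0x0000)

	phy_write_misc(phydev, AFE_RXCONFIG_0, 0xeb19);
	phy_write_misc(phydev, AFE_RXCONFIG_1, 0x9a3f);
	phy_write_misc(phydev, AFE_RX_LP_COUNTER, 0x7fc7);
	phy_write_misc(phydev, AFE_HPF_TRIM_OTHERS, 0x000b);
	phy_write_misc(phydev, AFE_TX_CONFIG, 0x0800);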

[...]
> +static int bcm7xxx_28nm_config_init(struct phy_device *phydev)
> +{
> +	int ret;
> +
> +	ret = bcm7445_config_init(phydev);
> +	if (ret)
> +		return ret;
> +
> +	return bcm7xxx_28nm_afe_config_init(phydev);
> +}
> +
> +static int phy_set_clr_bits(struct phy_device *dev, int location,
> +					int set_mask, int clr_mask)
> +{
> +	int v, ret;
> +
> +	v = phy_read(dev, location);
> +	if (v < 0)
> +		return v;
> +
> +	v &= ~clr_mask;
> +	v |= set_mask;
> +
> +	ret = phy_write(dev, location, v);
> +	if (ret < 0)
> +		return ret;
> +
> +	return v;
> +}
> +
> +static int bcm7xxx_config_init(struct phy_device *phydev)
> +{
> +	/* Enable 64 clock MDIO */
> +	phy_write(phydev, 0x1d, 0x1000);
> +	phy_read(phydev, 0x1d);
> +
> +	/* Workaround only required for 100Mbits/sec */
> +	if (!(phydev->dev_flags & PHY_BRCM_100MBPS_WAR))
> +		return 0;
> +
> +	/* set shadow mode 2 */
> +	phy_set_clr_bits(phydev, 0x1f, 0x0004, 0x0004);

phy_set_clr_bits returned status code is not checked.
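i.e. something like (a sketch; 'ret' would need to be declared in
bcm7xxx_config_init(), and the same applies to the second call below):

	ret = phy_set_clr_bits(phydev, 0x1f, 0x0004, 0x0004);
	if (ret < 0)
		return ret;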

> +
> +	/* set iddq_clkbias */
> +	phy_write(phydev, 0x14, 0x0F00);
> +	udelay(10);
> +
> +	/* reset iddq_clkbias */
> +	phy_write(phydev, 0x14, 0x0C00);
> +
> +	phy_write(phydev, 0x13, 0x7555);
> +
> +	/* reset shadow mode 2 */
> +	phy_set_clr_bits(phydev, 0x1f, 0x0004, 0);

phy_set_clr_bits returned status code is not checked.

> +
> +	return 0;
> +}
> +
> +/* Workaround for putting the PHY in IDDQ mode, required
> + * for all BCM7XXX PHYs
> + */
> +static int bcm7xxx_suspend(struct phy_device *phydev)

Factor out with bcm7445_config_init and some helper ?
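A small shared table writer could cover both (a sketch, names invented
here):

	struct bcm7xxx_reg_val {
		int reg;
		u16 value;
	};

	static int bcm7xxx_write_table(struct phy_device *phydev,
				       const struct bcm7xxx_reg_val *tbl,
				       unsigned int count)
	{
		unsigned int i;
		int ret;

		for (i = 0; i < count; i++) {
			ret = phy_write(phydev, tbl[i].reg, tbl[i].value);
			if (ret)
				return ret;
		}

		return 0;
	}

bcm7445_config_init() and bcm7xxx_suspend() would then both reduce to a
single bcm7xxx_write_table(phydev, table, ARRAY_SIZE(table)) call over
their respective tables.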

> +{
> +	int ret;
> +	const struct bcm7xxx_regs {
> +		int reg;
> +		u16 value;
> +	} bcm7xxx_suspend_cfg[] = {
> +		{ 0x1f, 0x008b },
> +		{ 0x10, 0x01c0 },
> +		{ 0x14, 0x7000 },
> +		{ 0x1f, 0x000f },
> +		{ 0x10, 0x20d0 },
> +		{ 0x1f, 0x000b },
> +	};
> +	unsigned int i;

-- 
Ueimor

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 06/10] net: bcmgenet: add main driver file
  2014-02-13  5:29   ` Florian Fainelli
  (?)
@ 2014-02-13 10:35   ` Francois Romieu
  2014-02-13 10:58     ` Joe Perches
  -1 siblings, 1 reply; 33+ messages in thread
From: Francois Romieu @ 2014-02-13 10:35 UTC (permalink / raw)
  To: Florian Fainelli; +Cc: netdev, davem, cernekee, devicetree

Florian Fainelli <f.fainelli@gmail.com> :
[...]
> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> new file mode 100644
> index 0000000..ab71e81
> --- /dev/null
> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
[...]
> +static int bcmgenet_set_rx_csum(struct net_device *dev,
> +				netdev_features_t wanted)
> +{
> +	struct bcmgenet_priv *priv = netdev_priv(dev);
> +	u32 rbuf_chk_ctrl;
> +	int rx_csum_en;
> +
> +	rx_csum_en = !!(wanted & NETIF_F_RXCSUM);

It's a bool.

> +
> +	spin_lock_bh(&priv->bh_lock);
> +	rbuf_chk_ctrl = bcmgenet_rbuf_readl(priv, RBUF_CHK_CTRL);
> +
> +	/* enable rx checksumming */
> +	if (!rx_csum_en)
> +		rbuf_chk_ctrl &= ~RBUF_RXCHK_EN;
> +	else
> +		rbuf_chk_ctrl |= RBUF_RXCHK_EN;
> +	priv->desc_rxchk_en = rx_csum_en;
> +	bcmgenet_rbuf_writel(priv, rbuf_chk_ctrl, RBUF_CHK_CTRL);
> +
> +	spin_unlock_bh(&priv->bh_lock);
> +
> +	return 0;
> +}
> +static int bcmgenet_set_tx_csum(struct net_device *dev,

Missing empty line.

[...]
> +static void bcmgenet_update_mib_counters(struct bcmgenet_priv *priv)
> +{
> +	int i, j = 0;
> +
> +	for (i = 0; i < BCMGENET_STATS_LEN; i++) {
> +		const struct bcmgenet_stats *s;
> +		u32 val = 0;
> +		char *p;
> +		u8 offset = 0;

Xmas tree please.
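i.e. longest line first:

	const struct bcmgenet_stats *s;
	u8 offset = 0;
	u32 val = 0;
	char *p;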

[...]
> +static void bcmgenet_get_ethtool_stats(struct net_device *dev,
> +					struct ethtool_stats *stats,
> +					u64 *data)
> +{
> +	struct bcmgenet_priv *priv = netdev_priv(dev);
> +	int i;
> +
> +	mutex_lock(&priv->mib_mutex);
> +	if (netif_running(dev))
> +		bcmgenet_update_mib_counters(priv);
> +
> +	for (i = 0; i < BCMGENET_STATS_LEN; i++) {
> +		const struct bcmgenet_stats *s;
> +		char *p;
> +
> +		s = &bcmgenet_gstrings_stats[i];
> +		if (s->type == BCMGENET_STAT_NETDEV)
> +			p = (char *)&dev->stats;
> +		else
> +			p = (char *)priv;
> +		p += s->stat_offset;
> +		data[i] = *(u32 *)p;
> +	}
> +	mutex_unlock(&priv->mib_mutex);

The mutex is not used anywhere else and dev_ethtool runs under RTNL.
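So the mutex could simply go away; if the locking assumption is worth
documenting, an ASSERT_RTNL() would do (a sketch):

	ASSERT_RTNL();

	if (netif_running(dev))
		bcmgenet_update_mib_counters(priv);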

[...]
> +/* Assign skb to RX DMA descriptor. */
> +static int bcmgenet_alloc_rx_buffers(struct bcmgenet_priv *priv)
> +{
> +	struct enet_cb *cb;

Wrong scope.

> +	int ret = 0;
> +	int i;
> +	u32 reg;
> +
> +	netif_dbg(priv, hw, priv->dev, "%s:\n", __func__);
> +
> +	/* This function may be called from irq bottom-half. */
> +	spin_lock_bh(&priv->bh_lock);

The Rx part of the NAPI handler directly calls bcmgenet_rx_refill through
bcmgenet_desc_rx. bcmgenet_poll does not sync with bh_lock.

Either some factoring was forgotten or some legacy locking / comment was
left in place (there should not be any ->open vs ->poll race)

> +
> +	/* loop here for each buffer needing assign */
> +	for (i = 0; i < priv->num_rx_bds; i++) {
> +		cb = &priv->rx_cbs[priv->rx_bd_assign_index];
> +		if (cb->skb)
> +			continue;
> +
> +		/* set the DMA descriptor length once and for all
> +		 * it will only change if we support dynamically sizing
> +		 * priv->rx_buf_len, but we do not
> +		 */
> +		dmadesc_set_length_status(priv, priv->rx_bd_assign_ptr,
> +				priv->rx_buf_len << DMA_BUFLENGTH_SHIFT);
> +
> +		ret = bcmgenet_rx_refill(priv, cb);
> +		if (ret)
> +			break;
> +
> +	}
> +
> +	/* Enable rx DMA in case it was disabled due to running out of rx BDs */

Nit: nothing proves that even a single descriptor allocation succeeded.

[...]
> +static int reset_umac(struct bcmgenet_priv *priv)
> +{
> +	struct device *kdev = &priv->pdev->dev;
> +	unsigned int timeout = 0;
> +	u32 reg;
> +
> +	/* 7358a0/7552a0: bad default in RBUF_FLUSH_CTRL.umac_sw_rst */
> +	bcmgenet_rbuf_ctrl_set(priv, 0);
> +	udelay(10);
> +
> +	/* disable MAC while updating its registers */
> +	bcmgenet_umac_writel(priv, 0, UMAC_CMD);
> +
> +	/* issue soft reset, wait for it to complete */
> +	bcmgenet_umac_writel(priv, CMD_SW_RESET, UMAC_CMD);
> +	while (timeout++ < 1000) {
> +		reg = bcmgenet_umac_readl(priv, UMAC_CMD);
> +		if (!(reg & CMD_SW_RESET))
> +			break;

			return 0;

> +		udelay(1);
> +	}
> +
> +	if (timeout == 1000) {
> +		dev_err(kdev,
> +			"timeout waiting for MAC to come out of reset\n");
> +		return -ETIMEDOUT;
> +	}
> +
> +	return 0;
> +}
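
With the early return suggested above, the trailing timeout test can go
away as well, e.g. (a sketch of the same tail of reset_umac()):

	while (timeout++ < 1000) {
		reg = bcmgenet_umac_readl(priv, UMAC_CMD);
		if (!(reg & CMD_SW_RESET))
			return 0;
		udelay(1);
	}

	dev_err(kdev, "timeout waiting for MAC to come out of reset\n");
	return -ETIMEDOUT;
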
> +
> +/* init_umac: Initializes the uniMac controller */

Useless.

> +static int init_umac(struct bcmgenet_priv *priv)
> +{
[...]
> +static void bcmgenet_init_multiq(struct net_device *dev)
> +{
> +	struct bcmgenet_priv *priv = netdev_priv(dev);
> +	unsigned int i, dma_enable;
> +	u32 reg, dma_ctrl, ring_cfg = 0, dma_priority = 0;
> +
> +	if (!netif_is_multiqueue(dev)) {
> +		netdev_warn(dev, "called with non multi queue aware HW\n");
> +		return;
> +	}
> +
> +	dma_ctrl = bcmgenet_tdma_readl(priv, DMA_CTRL);
> +	dma_enable = dma_ctrl & DMA_EN;
> +	dma_ctrl &= ~DMA_EN;
> +	bcmgenet_tdma_writel(priv, dma_ctrl, DMA_CTRL);
> +
> +	/* Enable strict priority arbiter mode */
> +	bcmgenet_tdma_writel(priv, DMA_ARBITER_SP, DMA_ARB_CTRL);
> +
> +	for (i = 0; i < priv->hw_params->tx_queues; i++) {
> +		/* first 64 tx_cbs are reserved for default tx queue
> +		 * (ring 16)
> +		 */
> +		bcmgenet_init_tx_ring(priv, i, priv->hw_params->bds_cnt,
> +					i * priv->hw_params->bds_cnt,
> +					(i + 1) * priv->hw_params->bds_cnt);
> +
> +		/* Configure ring as descriptor ring and set up priority */
> +		ring_cfg |= (1 << i);
> +		dma_priority |= ((GENET_Q0_PRIORITY + i) <<
> +				(GENET_MAX_MQ_CNT + 1) * i);
> +		dma_ctrl |= (1 << (i + DMA_RING_BUF_EN_SHIFT));

Excess parenthesis.

[...]
> +/* NAPI polling method*/
> +static int bcmgenet_poll(struct napi_struct *napi, int budget)
> +{
> +	struct bcmgenet_priv *priv = container_of(napi,
> +			struct bcmgenet_priv, napi);
> +	unsigned int work_done;
> +
> +	work_done = bcmgenet_desc_rx(priv, budget);
> +
> +	/* tx reclaim */
> +	bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]);

You may move the quick Tx reclaim before the slower Rx protocol processing.
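
I.e. a sketch of the reordered poll body:

	/* Tx reclaim is cheap, do it before the Rx protocol processing */
	bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]);

	work_done = bcmgenet_desc_rx(priv, budget);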

> +	/* Advancing our consumer index*/
> +	priv->rx_c_index += work_done;
> +	priv->rx_c_index &= DMA_C_INDEX_MASK;
> +	bcmgenet_rdma_ring_writel(priv, DESC_INDEX,
> +				priv->rx_c_index, RDMA_CONS_INDEX);
> +	if (work_done < budget) {
> +		napi_complete(napi);
> +		bcmgenet_intrl2_0_writel(priv,
> +			UMAC_IRQ_RXDMA_BDONE, INTRL2_CPU_MASK_CLEAR);
> +	}
> +
> +	return work_done;
> +}
> +
> +/* Interrupt bottom half */
> +static void bcmgenet_irq_task(struct work_struct *work)
> +{
> +	struct bcmgenet_priv *priv = container_of(
> +			work, struct bcmgenet_priv, bcmgenet_irq_work);
> +	struct net_device *dev;

	struct net_device *dev = priv->dev;

> +	u32 reg;
> +
> +	dev = priv->dev;
> +
> +	netif_dbg(priv, intr, dev, "%s\n", __func__);
> +	/* Cable plugged/unplugged event */
> +	if (priv->phy_interface == PHY_INTERFACE_MODE_INTERNAL) {
> +		if (priv->irq0_stat & UMAC_IRQ_PHY_DET_R) {
> +			priv->irq0_stat &= ~UMAC_IRQ_PHY_DET_R;
> +			netif_crit(priv, link, dev,
> +				"cable plugged in, powering up\n");
> +			bcmgenet_power_up(priv, GENET_POWER_CABLE_SENSE);
> +		} else if (priv->irq0_stat & UMAC_IRQ_PHY_DET_F) {
> +			priv->irq0_stat &= ~UMAC_IRQ_PHY_DET_F;
> +			netif_crit(priv, link, dev,
> +				"cable unplugged, powering down\n");
> +			bcmgenet_power_down(priv, GENET_POWER_CABLE_SENSE);
> +		}
> +	}
> +	if (priv->irq0_stat & UMAC_IRQ_MPD_R) {
> +		priv->irq0_stat &= ~UMAC_IRQ_MPD_R;
> +		netif_crit(priv, wol, dev,
> +			"magic packet detected, waking up\n");
> +		/* disable mpd interrupt */
> +		bcmgenet_intrl2_0_writel(priv,
> +			UMAC_IRQ_MPD_R, INTRL2_CPU_MASK_SET);
> +		/* disable CRC forward.*/
> +		reg = bcmgenet_umac_readl(priv, UMAC_CMD);
> +		reg &= ~CMD_CRC_FWD;
> +		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
> +		priv->crc_fwd_en = 0;
> +		bcmgenet_power_up(priv, GENET_POWER_WOL_MAGIC);
> +
> +	} else if (priv->irq0_stat & (UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM)) {
> +		priv->irq0_stat &= ~(UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM);
> +		netif_crit(priv, wol, dev,
> +			"ACPI pattern matched, waking up\n");
> +		/* disable HFB match interrupts */
> +		bcmgenet_intrl2_0_writel(priv,
> +			UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM, INTRL2_CPU_MASK_SET);
> +		bcmgenet_power_up(priv, GENET_POWER_WOL_ACPI);
> +	}

It smells of half-baked WoL / runtime power support. IMVHO it deserves
some comment in the changelog message to hint at its maturity.

> +
> +	/* Link UP/DOWN event */
> +	if ((priv->hw_params->flags & GENET_HAS_MDIO_INTR) &&
> +		(priv->irq0_stat & (UMAC_IRQ_LINK_UP|UMAC_IRQ_LINK_DOWN))) {
> +		if (priv->phydev)
> +			phy_mac_interrupt(priv->phydev,
> +				(priv->irq0_stat & UMAC_IRQ_LINK_UP));
> +		priv->irq0_stat &= ~(UMAC_IRQ_LINK_UP|UMAC_IRQ_LINK_DOWN);
> +	}
> +}
> +
> +/* bcmgenet_isr1: interrupt handler for ring buffer. */
> +static irqreturn_t bcmgenet_isr1(int irq, void *dev_id)
> +{
> +	struct bcmgenet_priv *priv = dev_id;
> +	unsigned int index;

Wrong scope.

> +
> +	/* Save irq status for bottom-half processing. */
> +	priv->irq1_stat =
> +		bcmgenet_intrl2_1_readl(priv, INTRL2_CPU_STAT) &
> +		~priv->int1_mask;
> +	/* clear interrupts */
> +	bcmgenet_intrl2_1_writel(priv, priv->irq1_stat, INTRL2_CPU_CLEAR);
> +
> +	netif_dbg(priv, intr, priv->dev,
> +		"%s: IRQ=0x%x\n", __func__, priv->irq1_stat);
> +	/* Check the MBDONE interrupts.
> +	 * packet is done, reclaim descriptors
> +	 */
> +	if (priv->irq1_stat & 0x0000ffff) {
> +		index = 0;
> +		for (index = 0; index < 16; index++) {

Proofread patrol alert :o)

[...]
> +static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv)
> +{
> +	int timeout = 0;
> +	u32 reg;
> +
> +	/* Disable TDMA to stop adding more frames to TX DMA */
> +	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
> +	reg &= ~DMA_EN;
> +	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
> +
> +	/* Check TDMA status register to confirm TDMA is disabled */
> +	while (!(bcmgenet_tdma_readl(priv, DMA_STATUS) & DMA_DISABLED)) {
> +		if (timeout++ == 5000) {
> +			netdev_warn(priv->dev,
> +				"Timed out while disabling TX DMA\n");
> +			return -ETIMEDOUT;
> +		}
> +		udelay(1);
> +	}

On the stylistic side, the driver hesitates between "while" and "for" styles
for its timeout loops. I'd go for the boring "int i; for (i = 0; i < max; i++)"
style, but it's your call.

On the worrying side, even if Tx DMA does not stop, the driver should
try to disable Rx DMA.
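
A sketch combining both points, i.e. a plain bounded loop plus carrying on
with the RDMA side even when TDMA refuses to stop (the ret handling is an
assumption):

	int i, ret = 0;

	/* Check TDMA status register to confirm TDMA is disabled */
	for (i = 0; i < 5000; i++) {
		if (bcmgenet_tdma_readl(priv, DMA_STATUS) & DMA_DISABLED)
			break;
		udelay(1);
	}
	if (i == 5000) {
		netdev_warn(priv->dev, "Timed out while disabling TX DMA\n");
		ret = -ETIMEDOUT;
	}

	/* Still quiesce RDMA even if TDMA did not stop */
	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
	reg &= ~DMA_EN;
	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);

and return ret at the end instead of bailing out early.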

> +
> +	/* Wait 10ms for packet drain in both tx and rx dma */
> +	usleep_range(10000, 20000);
> +
> +	/* Disable RDMA */
> +	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
> +	reg &= ~DMA_EN;
> +	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
> +
> +	timeout = 0;
> +	/* Check RDMA status register to confirm RDMA is disabled */
> +	while (!(bcmgenet_rdma_readl(priv, DMA_STATUS) & DMA_DISABLED)) {
> +		if (timeout++ == 5000) {
> +			netdev_warn(priv->dev,
> +				"Timed out while disabling RX DMA\n");
> +			return -ETIMEDOUT;
> +		}
> +		udelay(1);
> +	}
> +
> +	return 0;
> +}
[...]
> +static void bcmgenet_timeout(struct net_device *dev)
> +{
> +	struct bcmgenet_priv *priv = netdev_priv(dev);
> +
> +	BUG_ON(dev == NULL);

A NULL dev would crash and be noticed quickly anyway; the BUG_ON is unnecessary.

> +
> +	netif_dbg(priv, tx_err, dev, "bcmgenet_timeout\n");
> +
> +	dev->trans_start = jiffies;
> +
> +	dev->stats.tx_errors++;
> +
> +	netif_tx_wake_all_queues(dev);

dev_watchdog already complains (loudly).

Is it really supposed to recover?

> +}
> +
> +#define MAX_MC_COUNT	16
> +
> +static inline void bcmgenet_set_mdf_addr(struct bcmgenet_priv *priv,
> +					 unsigned char *addr,
> +					 int *i,
> +					 int *mc)
> +{
> +	u32 reg;
> +
> +	bcmgenet_umac_writel(priv, addr[0] << 8 | addr[1],
> +			UMAC_MDF_ADDR + (*i * 4));
> +	bcmgenet_umac_writel(priv,
> +			addr[2] << 24 | addr[3] << 16 |
> +			addr[4] << 8 | addr[5],
> +			UMAC_MDF_ADDR + ((*i + 1) * 4));

I would not expect such an indent to pass beyond davem.

> +	reg = bcmgenet_umac_readl(priv, UMAC_MDF_CTRL);
> +	reg |= (1 << (MAX_MC_COUNT - *mc));
> +	bcmgenet_umac_writel(priv, reg, UMAC_MDF_CTRL);
> +	*i += 2;
> +	(*mc)++;
> +}
> +
> +static void bcmgenet_set_rx_mode(struct net_device *dev)
> +{
> +	struct bcmgenet_priv *priv = netdev_priv(dev);
> +	struct netdev_hw_addr *ha;
> +	int i, mc;
> +	u32 reg;
> +
> +	netif_dbg(priv, hw, dev, "%s: %08X\n", __func__, dev->flags);
> +
> +	/* Promiscuous mode */
> +	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
> +	if (dev->flags & IFF_PROMISC) {
> +		reg |= CMD_PROMISC;
> +		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
> +		bcmgenet_umac_writel(priv, 0, UMAC_MDF_CTRL);
> +		return;
> +	} else {
> +		reg &= ~CMD_PROMISC;
> +		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
> +	}
> +
> +	/* UniMac doesn't support ALLMULTI */
> +	if (dev->flags & IFF_ALLMULTI)
> +		return;

The driver did not fulfill the request. It could complain to help the user.
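
Something as simple as:

	/* UniMac doesn't support ALLMULTI */
	if (dev->flags & IFF_ALLMULTI) {
		netdev_warn(dev, "ALLMULTI is not supported\n");
		return;
	}

would already help.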

[...]
> +static int bcmgenet_set_mac_addr(struct net_device *dev, void *p)
> +{
> +	struct sockaddr *addr = p;
> +
> +	if (netif_running(dev))
> +		return -EBUSY;

Add a comment to specify whether it is a hardware shortcoming or something else?

> +
> +	ether_addr_copy(dev->dev_addr, addr->sa_data);
> +
> +	return 0;
> +}

[...]
> +static const struct net_device_ops bcmgenet_netdev_ops = {
> +	.ndo_open = bcmgenet_open,
> +	.ndo_stop = bcmgenet_close,
> +	.ndo_start_xmit = bcmgenet_xmit,
> +	.ndo_select_queue = bcmgenet_select_queue,
> +	.ndo_tx_timeout = bcmgenet_timeout,
> +	.ndo_set_rx_mode = bcmgenet_set_rx_mode,
> +	.ndo_set_mac_address = bcmgenet_set_mac_addr,
> +	.ndo_do_ioctl = bcmgenet_ioctl,
> +	.ndo_set_features = bcmgenet_set_features,
> +};

Please use tabs before '=' to line things up.

[...]
> +static int bcmgenet_probe(struct platform_device *pdev)
> +{
> +	struct device_node *dn = pdev->dev.of_node;
> +	struct bcmgenet_priv *priv;
> +	struct net_device *dev;
> +	const void *macaddr;
> +	struct resource *r;
> +	int err = -EIO;
> +
> +	/* Up to GENET_MAX_MQ_CNT + 1 TX queues and a single RX queue */
> +	dev = alloc_etherdev_mqs(sizeof(*priv), GENET_MAX_MQ_CNT + 1, 1);
> +	if (!dev) {
> +		dev_err(&pdev->dev, "can't allocate net device\n");
> +		return -ENOMEM;
> +	}
> +
> +	priv = netdev_priv(dev);
> +	priv->irq0 = platform_get_irq(pdev, 0);
> +	priv->irq1 = platform_get_irq(pdev, 1);
> +	if (!priv->irq0 || !priv->irq1) {
> +		dev_err(&pdev->dev, "can't find IRQs\n");
> +		err = -EINVAL;
> +		goto err;
> +	}
> +
> +	macaddr = of_get_mac_address(dn);
> +	if (!macaddr) {
> +		dev_err(&pdev->dev, "can't find MAC address\n");
> +		err = -EINVAL;
> +		goto err;
> +	}
> +
> +	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +	priv->base = devm_request_and_ioremap(&pdev->dev, r);
> +	if (!priv->base) {
> +		dev_err(&pdev->dev, "can't ioremap\n");
> +		err = -EINVAL;
> +		goto err;
> +	}
> +
> +	dev->base_addr = (unsigned long)priv->base;

base_addr in net_device is a legacy hack.

> +	SET_NETDEV_DEV(dev, &pdev->dev);
> +	dev_set_drvdata(&pdev->dev, dev);
> +	ether_addr_copy(dev->dev_addr, macaddr);
> +	dev->irq = priv->irq0;

And so is irq.

-- 
Ueimor

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 06/10] net: bcmgenet: add main driver file
  2014-02-13 10:35   ` Francois Romieu
@ 2014-02-13 10:58     ` Joe Perches
  0 siblings, 0 replies; 33+ messages in thread
From: Joe Perches @ 2014-02-13 10:58 UTC (permalink / raw)
  To: Francois Romieu; +Cc: Florian Fainelli, netdev, davem, cernekee, devicetree

On Thu, 2014-02-13 at 11:35 +0100, Francois Romieu wrote:
> Florian Fainelli <f.fainelli@gmail.com> :
> [...]
> > diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
[]
> > +static int bcmgenet_set_rx_csum(struct net_device *dev,
> > +				netdev_features_t wanted)
> > +{
> > +	struct bcmgenet_priv *priv = netdev_priv(dev);
> > +	u32 rbuf_chk_ctrl;
> > +	int rx_csum_en;
> > +
> > +	rx_csum_en = !!(wanted & NETIF_F_RXCSUM);
> 
> It's a bool.

It could be a bool.

The struct definition has:

+	unsigned int desc_rxchk_en;

but perhaps a lot of these members could be bool.

It'd be nicer if the variable types were the same.

> > +	spin_lock_bh(&priv->bh_lock);
> > +	rbuf_chk_ctrl = bcmgenet_rbuf_readl(priv, RBUF_CHK_CTRL);
> > +
> > +	/* enable rx checksumming */
> > +	if (!rx_csum_en)
> > +		rbuf_chk_ctrl &= ~RBUF_RXCHK_EN;
> > +	else
> > +		rbuf_chk_ctrl |= RBUF_RXCHK_EN;

This is more normally written with a positive test like:

	if (rx_csum_en)
		rbuf_chk_ctrl |= RBUF_RXCHK_EN;
	else
		rbuf_chk_ctrl &= ~RBUF_RXCHK_EN;

> > +	priv->desc_rxchk_en = rx_csum_en;

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 09/10] Documentation: add Device tree bindings for Broadcom GENET
       [not found]   ` <1392269395-23513-10-git-send-email-f.fainelli-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2014-02-13 11:13     ` Mark Rutland
       [not found]       ` <20140213111328.GB30705-NuALmloUBlrZROr8t4l/smS4ubULX0JqMm0uRHvK7Nw@public.gmane.org>
  0 siblings, 1 reply; 33+ messages in thread
From: Mark Rutland @ 2014-02-13 11:13 UTC (permalink / raw)
  To: Florian Fainelli
  Cc: netdev-u79uwXL29TY76Z2rM5mHXA, davem-fT/PcQaiUtIeIZ0/mPfg9Q,
	cernekee-Re5JQEeQqe8AvxtiuMwx3w,
	devicetree-u79uwXL29TY76Z2rM5mHXA

On Thu, Feb 13, 2014 at 05:29:54AM +0000, Florian Fainelli wrote:
> This patch adds the Device Tree bindings for the Broadcom GENET Gigabit
> Ethernet controller. A bunch of examples are provided to illustrate the
> versatile aspect of the hardware.
> 
> Signed-off-by: Florian Fainelli <f.fainelli-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> ---
> Changes since v1:
> - rebased
> 
>  .../devicetree/bindings/net/broadcom-bcmgenet.txt  | 111 +++++++++++++++++++++
>  1 file changed, 111 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt
> 
> diff --git a/Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt b/Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt
> new file mode 100644
> index 0000000..93c58e9
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt
> @@ -0,0 +1,111 @@
> +* Broadcom BCM7xxx Ethernet Controller (GENET)
> +
> +Required properties:
> +- compatible: should be "brcm,genet-v1", "brcm,genet-v2", "brcm,genet-v3",
> +  "brcm,genet-v4".

Presumably "should contain one of" is a better description than "should
be"?

Are the newer revisions compatible with older revisions?

> +- reg: address and length of the register set for the device.
> +- interrupts: interrupt for the device
> +- mdio bus node: this node should always be present regardless of the PHY
> +  configuration of the GENET instance

Nit: a node is not a property, list it after properties.

> +- phy-mode: The interface between the SoC and the PHY (a string that
> +  of_get_phy_mode() can understand).

Do we not have a document under bindings listing these? I really don't
like referring to code in bindings docs.

> +
> +MDIO bus node required properties:
> +
> +- compatible: should be "brcm,genet-v<N>-mdio"

Where N is? Could this not be an explicit list as above? It helps when
searching for bindings.

> +- reg: address and length relative to the parent node base register address

The parent node will require #address-cells and #size-cells too then.

> +- address-cells: address cell for MDIO bus addressing, should be 1
> +- size-cells: size of the cells for MDIO bus addressing, should be 0
> +
> +Optional properties:
> +- phy-handle: A phandle to a phy node defining the PHY address (as the reg
> +  property, a single integer), used to describe configurations where a PHY
> +  (internal or external) is used.

Is there not a phy binding you could refer to instead?

> +
> +- fixed-link: When the GENET interface is connected to a MoCA hardware block
> +  or when operating in a RGMII to RGMII type of connection, or when the
> +  MDIO bus is voluntarily disabled, this property should be used to describe
> +  the "fixed link", the property is described as follows:
> +
> +  fixed-link: <a b c d e> where a is emulated phy id - choose any,
> +  but unique to the all specified fixed-links, b is duplex - 0 half,
> +  1 full, c is link speed - d#10/d#100/d#1000, d is pause - 0 no
> +  pause, 1 pause, e is asym_pause - 0 no asym_pause, 1 asym_pause.

Is this not documented elsewhere such that it can be referred to?

> +
> +Internal Gigabit PHY example:
> +
> +ethernet@f0b60000 {
> +	phy-mode = "internal";
> +	phy-handle = <&phy1>;
> +	mac-address = [ 00 10 18 36 23 1a ];
> +	compatible = "brcm,genet-v4";
> +	#address-cells = <0x1>;
> +	#size-cells = <0x1>;
> +	device_type = "ethernet";

What's this needed by? I can't see any other devices with this
device_type, and I was under the impression that we didn't want new
device_type properties cropping up.

> +	reg = <0xf0b60000 0xfc4c>;
> +	interrupts = <0x0 0x14 0x0 0x0 0x15 0x0>;

How many? The binding implied only one, and I'm not aware of any
interrupt controller bindings with #interrupt-cells = <6>.

Cheers,
Mark.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 06/10] net: bcmgenet: add main driver file
  2014-02-13  5:29   ` Florian Fainelli
  (?)
  (?)
@ 2014-02-13 11:38   ` Mark Rutland
  -1 siblings, 0 replies; 33+ messages in thread
From: Mark Rutland @ 2014-02-13 11:38 UTC (permalink / raw)
  To: Florian Fainelli; +Cc: netdev, davem, cernekee, devicetree

On Thu, Feb 13, 2014 at 05:29:51AM +0000, Florian Fainelli wrote:
> This patch adds the BCMGENET main driver file which supports the
> following:
>
> - GENET hardware from V1 to V4
> - support for reading the UniMAC MIB counters statistics
> - support for the 5 transmit queues
> - support for RX/TX checksum offload and SG
>
> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
> ---
> Changes since v1:
> - use module_platform_driver boilerplate macro
> - renamed bcmgenet_plat_drv to bcmgenet_driver
> - renamed bcmgenet_drv_{probe,remove} to bcmgenet_{probe,remove}
> - removed priv->phy_type usage and use priv->phy_interface which
>   contains the exact same value
> - removed debug module parameters, unused
> - added MODULE_{AUTHOR,ALIAS,DESCRIPTION} macros
> - remove hardcoded queue index check in bcmgenet_xmit

[...]

> +static int bcmgenet_probe(struct platform_device *pdev)
> +{
> +       struct device_node *dn = pdev->dev.of_node;
> +       struct bcmgenet_priv *priv;
> +       struct net_device *dev;
> +       const void *macaddr;
> +       struct resource *r;
> +       int err = -EIO;
> +
> +       /* Up to GENET_MAX_MQ_CNT + 1 TX queues and a single RX queue */
> +       dev = alloc_etherdev_mqs(sizeof(*priv), GENET_MAX_MQ_CNT + 1, 1);
> +       if (!dev) {
> +               dev_err(&pdev->dev, "can't allocate net device\n");
> +               return -ENOMEM;
> +       }
> +
> +       priv = netdev_priv(dev);
> +       priv->irq0 = platform_get_irq(pdev, 0);
> +       priv->irq1 = platform_get_irq(pdev, 1);
> +       if (!priv->irq0 || !priv->irq1) {
> +               dev_err(&pdev->dev, "can't find IRQs\n");
> +               err = -EINVAL;
> +               goto err;
> +       }

The binding did not describe that there were two interrupts. What are
each of them for? Are they named in the documentation?

> +
> +       macaddr = of_get_mac_address(dn);
> +       if (!macaddr) {
> +               dev_err(&pdev->dev, "can't find MAC address\n");
> +               err = -EINVAL;
> +               goto err;
> +       }
> +
> +       r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +       priv->base = devm_request_and_ioremap(&pdev->dev, r);
> +       if (!priv->base) {
> +               dev_err(&pdev->dev, "can't ioremap\n");
> +               err = -EINVAL;
> +               goto err;
> +       }
> +
> +       dev->base_addr = (unsigned long)priv->base;

Does the net core actually need this?

I can't see anywhere else it's used in this file.

[...]

> +       if (of_device_is_compatible(dn, "brcm,genet-v4"))
> +               priv->version = GENET_V4;
> +       else if (of_device_is_compatible(dn, "brcm,genet-v3"))
> +               priv->version = GENET_V3;
> +       else if (of_device_is_compatible(dn, "brcm,genet-v2"))
> +               priv->version = GENET_V2;
> +       else if (of_device_is_compatible(dn, "brcm,genet-v1"))
> +               priv->version = GENET_V1;
> +       else {
> +               dev_err(&pdev->dev, "unknown GENET version\n");
> +               err = -EINVAL;
> +               goto err;

Surely you can't have probed if none of these are in the compatible
list?

Might it make more sense to place this value in of_device_id::data? You
can get it with of_match_node, and you only have to place the strings
in one place, so no possible typo issues.
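
Untested sketch of what that could look like (the match table name and the
enum cast are assumptions):

static const struct of_device_id bcmgenet_match[] = {
	{ .compatible = "brcm,genet-v1", .data = (void *)GENET_V1 },
	{ .compatible = "brcm,genet-v2", .data = (void *)GENET_V2 },
	{ .compatible = "brcm,genet-v3", .data = (void *)GENET_V3 },
	{ .compatible = "brcm,genet-v4", .data = (void *)GENET_V4 },
	{ },
};

	/* then, in bcmgenet_probe() */
	const struct of_device_id *of_id = of_match_node(bcmgenet_match, dn);

	if (!of_id) {
		err = -EINVAL;
		goto err;
	}
	priv->version = (enum bcmgenet_version)of_id->data;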

> +       }
> +
> +       bcmgenet_set_hw_params(priv);
> +
> +       spin_lock_init(&priv->lock);
> +       spin_lock_init(&priv->bh_lock);
> +       mutex_init(&priv->mib_mutex);
> +       /* Mii wait queue */
> +       init_waitqueue_head(&priv->wq);
> +       /* Always use RX_BUF_LENGTH (2KB) buffer for all chips */
> +       priv->rx_buf_len = RX_BUF_LENGTH;
> +       INIT_WORK(&priv->bcmgenet_irq_work, bcmgenet_irq_task);
> +
> +       priv->clk = devm_clk_get(&priv->pdev->dev, "enet");
> +       if (IS_ERR(priv->clk))
> +               dev_warn(&priv->pdev->dev, "failed to get enet clock\n");

This wasn't in the binding.

> +
> +       priv->clk_wol = devm_clk_get(&priv->pdev->dev, "enet-wol");
> +       if (IS_ERR(priv->clk_wol))
> +               dev_warn(&priv->pdev->dev, "failed to get enet-wol clock\n");

This is also missing from the binding.

> +       /* Turn off the clocks */
> +       if (!IS_ERR(priv->clk))
> +               clk_disable_unprepare(priv->clk);

Either the comment is misleading (s/clocks/clock/), or you're forgetting
about the enet-wol clock here.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 07/10] net: bcmgenet: add MDIO routines
  2014-02-13  5:29   ` Florian Fainelli
  (?)
@ 2014-02-13 11:50   ` Mark Rutland
  2014-02-13 17:00     ` Florian Fainelli
  -1 siblings, 1 reply; 33+ messages in thread
From: Mark Rutland @ 2014-02-13 11:50 UTC (permalink / raw)
  To: Florian Fainelli; +Cc: netdev, davem, cernekee, devicetree

On Thu, Feb 13, 2014 at 05:29:52AM +0000, Florian Fainelli wrote:
> This patch adds support for configuring the port multiplexer hardware
> which resides in front of the GENET Ethernet MAC controller. This allows
> us to support:
> 
> - internal PHYs (using drivers/net/phy/bcm7xxx.c)
> - MoCA PHYs which are an entirely separate hardware block not covered
>   here
> - external PHYs and switches
> 
> Note that MoCA and switches are currently supported using the emulated
> "fixed PHY" driver.
> 
> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
> ---
> Changes since v1:
> - fixed MDIO crash/warning when Device Tree probing fails
> - removed the use of priv->phy_type and use priv->phy_interface
>   directly
> 
>  drivers/net/ethernet/broadcom/genet/bcmmii.c | 481 +++++++++++++++++++++++++++
>  1 file changed, 481 insertions(+)
>  create mode 100644 drivers/net/ethernet/broadcom/genet/bcmmii.c

[...]

> +static int bcmgenet_mii_of_init(struct bcmgenet_priv *priv)
> +{
> +       struct device_node *dn = priv->pdev->dev.of_node;
> +       struct device *kdev = &priv->pdev->dev;
> +       struct device_node *mdio_dn;
> +       const __be32 *fixed_link;

This looks a bit odd. Could we not have a common parser for fixed-link
properties?

> +       u32 propval;
> +       int ret, sz;
> +
> +       mdio_dn = of_get_next_child(dn, NULL);
> +       if (!mdio_dn) {
> +               dev_err(kdev, "unable to find MDIO bus node\n");
> +               return -ENODEV;
> +       }

Could you please check that this is the node you expect (by looking at
the compatible string list).
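
Something like (sketch only; it would have to cover the four documented
"brcm,genet-v<N>-mdio" strings):

	if (!of_device_is_compatible(mdio_dn, "brcm,genet-v1-mdio") &&
	    !of_device_is_compatible(mdio_dn, "brcm,genet-v2-mdio") &&
	    !of_device_is_compatible(mdio_dn, "brcm,genet-v3-mdio") &&
	    !of_device_is_compatible(mdio_dn, "brcm,genet-v4-mdio")) {
		dev_err(kdev, "unexpected MDIO bus node\n");
		return -ENODEV;
	}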

> +
> +       ret = of_mdiobus_register(priv->mii_bus, mdio_dn);
> +       if (ret) {
> +               dev_err(kdev, "failed to register MDIO bus\n");
> +               return ret;
> +       }
> +
> +       /* Check if we have an internal or external PHY */
> +       priv->phy_dn = of_parse_phandle(dn, "phy-handle", 0);
> +       if (priv->phy_dn) {
> +               if (!of_property_read_u32(priv->phy_dn, "max-speed", &propval))
> +                       priv->phy_speed = propval;

Is there no way to find this out without reading values directly off of
the PHY? It seems like something we should have an abstraction for.

> +       } else {
> +               /* Read the link speed from the fixed-link property */
> +               fixed_link = of_get_property(dn, "fixed-link", &sz);
> +               if (!fixed_link || sz < sizeof(*fixed_link)) {
> +                       ret = -ENODEV;
> +                       goto out;
> +               }
> +
> +               priv->phy_speed = be32_to_cpu(fixed_link[2]);

Similarly, can we not have a common fixed-link parser? Or an abstraction
such that you query the PHY regardless of what it is and how its binding
represents it?

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 09/10] Documentation: add Device tree bindings for Broadcom GENET
       [not found]       ` <20140213111328.GB30705-NuALmloUBlrZROr8t4l/smS4ubULX0JqMm0uRHvK7Nw@public.gmane.org>
@ 2014-02-13 16:57         ` Florian Fainelli
  0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13 16:57 UTC (permalink / raw)
  To: Mark Rutland
  Cc: netdev-u79uwXL29TY76Z2rM5mHXA, davem-fT/PcQaiUtIeIZ0/mPfg9Q,
	cernekee-Re5JQEeQqe8AvxtiuMwx3w,
	devicetree-u79uwXL29TY76Z2rM5mHXA

Hi Mark,

2014-02-13 3:13 GMT-08:00 Mark Rutland <mark.rutland-5wv7dgnIgG8@public.gmane.org>:
> On Thu, Feb 13, 2014 at 05:29:54AM +0000, Florian Fainelli wrote:
>> This patch adds the Device Tree bindings for the Broadcom GENET Gigabit
>> Ethernet controller. A bunch of examples are provided to illustrate the
>> versatile aspect of the hardware.
>>
>> Signed-off-by: Florian Fainelli <f.fainelli-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>> ---
>> Changes since v1:
>> - rebased
>>
>>  .../devicetree/bindings/net/broadcom-bcmgenet.txt  | 111 +++++++++++++++++++++
>>  1 file changed, 111 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt
>>
>> diff --git a/Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt b/Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt
>> new file mode 100644
>> index 0000000..93c58e9
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt
>> @@ -0,0 +1,111 @@
>> +* Broadcom BCM7xxx Ethernet Controller (GENET)
>> +
>> +Required properties:
>> +- compatible: should be "brcm,genet-v1", "brcm,genet-v2", "brcm,genet-v3",
>> +  "brcm,genet-v4".
>
> Presumably "should contain one of" is a better description than "should
> be"?
>
> Are the newer revisions compatible with older revisions?

Not entirely; the driver has internal macros, GENET_IS_V<N>(), to help
figure out which parts are different.
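
I.e. something along these lines in bcmgenet.h:

#define GENET_IS_V1(p)	((p)->version == GENET_V1)
#define GENET_IS_V2(p)	((p)->version == GENET_V2)
#define GENET_IS_V3(p)	((p)->version == GENET_V3)
#define GENET_IS_V4(p)	((p)->version == GENET_V4)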

>
>> +- reg: address and length of the register set for the device.
>> +- interrupts: interrupt for the device
>> +- mdio bus node: this node should always be present regardless of the PHY
>> +  configuration of the GENET instance
>
> Nit: a node is not a property, list it after properties.
>
>> +- phy-mode: The interface between the SoC and the PHY (a string that
>> +  of_get_phy_mode() can understand).
>
> Do we not have a document under bindings listing these? I really don't
> like referring to code in bindings docs.

Sergei has been working on a patch that centralizes the Ethernet DT
binding, but I am targeting the net-next/master tree in which this is
not yet present. I could probably proactively mention it though.

>
>> +
>> +MDIO bus node required properties:
>> +
>> +- compatible: should be "brcm,genet-v<N>-mdio"
>
> Where N is? Could this not be an explicit list as above? It helps when
> searching for bindings.

Yes, this should match the genet-v<N> compatible string.

>
>> +- reg: address and length relative to the parent node base register address
>
> The parent node will require #address-cells and #size-cells too then.

Correct.

>
>> +- address-cells: address cell for MDIO bus addressing, should be 1
>> +- size-cells: size of the cells for MDIO bus addressing, should be 0
>> +
>> +Optional properties:
>> +- phy-handle: A phandle to a phy node defining the PHY address (as the reg
>> +  property, a single integer), used to describe configurations where a PHY
>> +  (internal or external) is used.
>
> Is there not a phy binding you could refer to instead?

This is ePAPR, but once again, Sergei's document centralizes that nicely.

>
>> +
>> +- fixed-link: When the GENET interface is connected to a MoCA hardware block
>> +  or when operating in a RGMII to RGMII type of connection, or when the
>> +  MDIO bus is voluntarily disabled, this property should be used to describe
>> +  the "fixed link", the property is described as follows:
>> +
>> +  fixed-link: <a b c d e> where a is emulated phy id - choose any,
>> +  but unique to the all specified fixed-links, b is duplex - 0 half,
>> +  1 full, c is link speed - d#10/d#100/d#1000, d is pause - 0 no
>> +  pause, 1 pause, e is asym_pause - 0 no asym_pause, 1 asym_pause.
>
> Is this not documented elsewhere such that it can be referred to?

The Freescale FEC driver is the one holding most of the documentation
for fixed-link; I can refer to it until Sergei's patch centralizing the
Ethernet DT bindings gets merged.

>
>> +
>> +Internal Gigabit PHY example:
>> +
>> +ethernet@f0b60000 {
>> +     phy-mode = "internal";
>> +     phy-handle = <&phy1>;
>> +     mac-address = [ 00 10 18 36 23 1a ];
>> +     compatible = "brcm,genet-v4";
>> +     #address-cells = <0x1>;
>> +     #size-cells = <0x1>;
>> +     device_type = "ethernet";
>
> What's this needed by? I can't see any other devices with this
> device_type, and I was under the impression that we didn't want new
> device_type properties cropping up.

This is just an oversight and is not required per se; I will get it removed.

>
>> +     reg = <0xf0b60000 0xfc4c>;
>> +     interrupts = <0x0 0x14 0x0 0x0 0x15 0x0>;
>
> How many? The binding implied only one, and I'm not away of any
> interrupt controller bindings with #interrupt-cells = <6>.

In fact, there are only two interrupts (three cells each); this is a bad
copy-paste from the bootloader providing the DTB.

Thanks for the review!
-- 
Florian

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 07/10] net: bcmgenet: add MDIO routines
  2014-02-13 11:50   ` Mark Rutland
@ 2014-02-13 17:00     ` Florian Fainelli
  0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13 17:00 UTC (permalink / raw)
  To: Mark Rutland; +Cc: netdev, davem, cernekee, devicetree

Hi Mark,

2014-02-13 3:50 GMT-08:00 Mark Rutland <mark.rutland@arm.com>:
> On Thu, Feb 13, 2014 at 05:29:52AM +0000, Florian Fainelli wrote:
>> This patch adds support for configuring the port multiplexer hardware
>> which resides in front of the GENET Ethernet MAC controller. This allows
>> us to support:
>>
>> - internal PHYs (using drivers/net/phy/bcm7xxx.c)
>> - MoCA PHYs which are an entirely separate hardware block not covered
>>   here
>> - external PHYs and switches
>>
>> Note that MoCA and switches are currently supported using the emulated
>> "fixed PHY" driver.
>>
>> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
>> ---
>> Changes since v1:
>> - fixed MDIO crash/warning when Device Tree probing fails
>> - removed the use of priv->phy_type and use priv->phy_interface
>>   directly
>>
>>  drivers/net/ethernet/broadcom/genet/bcmmii.c | 481 +++++++++++++++++++++++++++
>>  1 file changed, 481 insertions(+)
>>  create mode 100644 drivers/net/ethernet/broadcom/genet/bcmmii.c
>
> [...]
>
>> +static int bcmgenet_mii_of_init(struct bcmgenet_priv *priv)
>> +{
>> +       struct device_node *dn = priv->pdev->dev.of_node;
>> +       struct device *kdev = &priv->pdev->dev;
>> +       struct device_node *mdio_dn;
>> +       const __be32 *fixed_link;
>
> This looks a bit odd. Could we not have a common parser for fixed-link
> properties?

Do you remember the fixed-link revamp that Thomas Petazzoni submitted
a while ago? I was planning on using it, but there were still some
disagreements on how to do it, so I ended up using the good old
"fixed-link" property as-is.

>
>> +       u32 propval;
>> +       int ret, sz;
>> +
>> +       mdio_dn = of_get_next_child(dn, NULL);
>> +       if (!mdio_dn) {
>> +               dev_err(kdev, "unable to find MDIO bus node\n");
>> +               return -ENODEV;
>> +       }
>
> Could you please check that this is the node you expect (by looking at
> the compatible string list).

Makes sense.

>
>> +
>> +       ret = of_mdiobus_register(priv->mii_bus, mdio_dn);
>> +       if (ret) {
>> +               dev_err(kdev, "failed to register MDIO bus\n");
>> +               return ret;
>> +       }
>> +
>> +       /* Check if we have an internal or external PHY */
>> +       priv->phy_dn = of_parse_phandle(dn, "phy-handle", 0);
>> +       if (priv->phy_dn) {
>> +               if (!of_property_read_u32(priv->phy_dn, "max-speed", &propval))
>> +                       priv->phy_speed = propval;
>
> Is there no way to find this out without reading values directly off of
> the PHY? It seems like something we should have an abstraction for.

In fact, this is a remnant from before I submitted the patch doing
this in of_mdiobus_register(), so it is now useless.

>
>> +       } else {
>> +               /* Read the link speed from the fixed-link property */
>> +               fixed_link = of_get_property(dn, "fixed-link", &sz);
>> +               if (!fixed_link || sz < sizeof(*fixed_link)) {
>> +                       ret = -ENODEV;
>> +                       goto out;
>> +               }
>> +
>> +               priv->phy_speed = be32_to_cpu(fixed_link[2]);
>
> Similarly can we not have a common fixed-link parser? Or abstraction
> such that you query the phy regardless of what it is and how its binding
> represents this?

I do not think I need this line anymore. I will do some more testing
and will remove this parsing. I do agree that we need a common
fixed-link parser, but that is part of the discussion initiated by
Thomas Petazzoni.

Thanks for the review!
-- 
Florian

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 04/10] net: phy: add Broadcom BCM7xxx internal PHY driver
  2014-02-13 10:34     ` Francois Romieu
@ 2014-02-13 18:41       ` Florian Fainelli
  0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13 18:41 UTC (permalink / raw)
  To: Francois Romieu; +Cc: netdev, David Miller, Kevin Cernekee, devicetree

Hi Francois,

2014-02-13 2:34 GMT-08:00 Francois Romieu <romieu@fr.zoreil.com>:
> Florian Fainelli <f.fainelli@gmail.com> :
> [...]
>> diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c
>> new file mode 100644
>> index 0000000..6aea6e2
>> --- /dev/null
>> +++ b/drivers/net/phy/bcm7xxx.c
> [...]
>> +static int bcm7445_config_init(struct phy_device *phydev)
>> +{
>> +     int ret;
>
> It could be declared after 'i' below.
>
>> +     const struct bcm7445_regs {
>
> static const
>
>> +             int reg;
>> +             u16 value;
>> +     } bcm7445_regs_cfg[] = {
>> +             /* increases ADC latency by 24ns */
>> +             { 0x17, 0x0038 },
>> +             { 0x15, 0xAB95 },
>> +             /* increases internal 1V LDO voltage by 5% */
>> +             { 0x17, 0x2038 },
>> +             { 0x15, 0xBB22 },
>> +             /* reduce RX low pass filter corner frequency */
>> +             { 0x17, 0x6038 },
>> +             { 0x15, 0xFFC5 },
>> +             /* reduce RX high pass filter corner frequency */
>> +             { 0x17, 0x003a },
>> +             { 0x15, 0x2002 },
>> +     };
>> +     unsigned int i;
>> +
>> +     for (i = 0; i < ARRAY_SIZE(bcm7445_regs_cfg); i++) {
>> +             ret = phy_write(phydev,
>> +                             bcm7445_regs_cfg[i].reg,
>> +                             bcm7445_regs_cfg[i].value);
>> +             if (ret)
>> +                     return ret;
>> +     }
>> +
>> +     return 0;
>> +}
>> +
>> +static void phy_write_exp(struct phy_device *phydev,
>> +                                     u16 reg, u16 value)
>
> static void phy_write_exp(struct phy_device *phydev, u16 reg, u16 value)
>
>> +{
>> +     phy_write(phydev, 0x17, 0xf00 | reg);
>> +     phy_write(phydev, 0x15, value);
>> +}
>> +
>> +static void phy_write_misc(struct phy_device *phydev,
>> +                                     u16 reg, u16 chl, u16 value)
>    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ all tabs that don't line up
>
> static void phy_write_misc(struct phy_device *phydev,
>                            u16 reg, u16 chl, u16 value)
>
> static void phy_write_misc(struct phy_device *phydev, u16 reg, u16 chl,
>                            u16 value)
>
> static void phy_write_misc(struct phy_device *phydev, u16 reg, u16 chl, u16 val)
>
>
>> +{
>> +     int tmp;
>> +
>> +     phy_write(phydev, 0x18, 0x7);
>> +
>> +     tmp = phy_read(phydev, 0x18);
>> +     tmp |= 0x800;
>> +     phy_write(phydev, 0x18, tmp);
>> +
>> +     tmp = (chl * 0x2000) | reg;
>> +     phy_write(phydev, 0x17, tmp);
>> +
>> +     phy_write(phydev, 0x15, value);
>
> You may use some #define for the 0x15, 0x17 and 0x18 values.

Right, those are actually inherited from the BCM54xx PHY driver; I
will move those registers to a common location, e.g. brcmphy.h.
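
For instance (placeholder names, not necessarily what will end up in
brcmphy.h):

#define MII_BCM7XXX_EXP_DATA	0x15
#define MII_BCM7XXX_EXP_SEL	0x17
#define MII_BCM7XXX_AUX_CTL	0x18

static void phy_write_exp(struct phy_device *phydev, u16 reg, u16 value)
{
	phy_write(phydev, MII_BCM7XXX_EXP_SEL, 0xf00 | reg);
	phy_write(phydev, MII_BCM7XXX_EXP_DATA, value);
}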

>
>> +}
>> +
>> +static int bcm7xxx_28nm_afe_config_init(struct phy_device *phydev)
>> +{
>> +     /* write AFE_RXCONFIG_0 */
>> +     phy_write_misc(phydev, 0x38, 0x0000, 0xeb19);
>> +
>> +     /* write AFE_RXCONFIG_1 */
>> +     phy_write_misc(phydev, 0x38, 0x0001, 0x9a3f);
>> +
>> +     /* write AFE_RX_LP_COUNTER */
>> +     phy_write_misc(phydev, 0x38, 0x0003, 0x7fc7);
>> +
>> +     /* write AFE_HPF_TRIM_OTHERS */
>> +     phy_write_misc(phydev, 0x3A, 0x0000, 0x000b);
>> +
>> +     /* write AFTE_TX_CONFIG */
>> +     phy_write_misc(phydev, 0x39, 0x0000, 0x0800);
>
> Some #define may be welcome to replace the comments.

I would rather keep those as comments as they might change over time
if I get to incorporate a new workaround sequence.

>
> [...]
>> +static int bcm7xxx_28nm_config_init(struct phy_device *phydev)
>> +{
>> +     int ret;
>> +
>> +     ret = bcm7445_config_init(phydev);
>> +     if (ret)
>> +             return ret;
>> +
>> +     return bcm7xxx_28nm_afe_config_init(phydev);
>> +}
>> +
>> +static int phy_set_clr_bits(struct phy_device *dev, int location,
>> +                                     int set_mask, int clr_mask)
>> +{
>> +     int v, ret;
>> +
>> +     v = phy_read(dev, location);
>> +     if (v < 0)
>> +             return v;
>> +
>> +     v &= ~clr_mask;
>> +     v |= set_mask;
>> +
>> +     ret = phy_write(dev, location, v);
>> +     if (ret < 0)
>> +             return ret;
>> +
>> +     return v;
>> +}
>> +
>> +static int bcm7xxx_config_init(struct phy_device *phydev)
>> +{
>> +     /* Enable 64 clock MDIO */
>> +     phy_write(phydev, 0x1d, 0x1000);
>> +     phy_read(phydev, 0x1d);
>> +
>> +     /* Workaround only required for 100Mbits/sec */
>> +     if (!(phydev->dev_flags & PHY_BRCM_100MBPS_WAR))
>> +             return 0;
>> +
>> +     /* set shadow mode 2 */
>> +     phy_set_clr_bits(phydev, 0x1f, 0x0004, 0x0004);
>
> phy_set_clr_bits returned status code is not checked.
>
>> +
>> +     /* set iddq_clkbias */
>> +     phy_write(phydev, 0x14, 0x0F00);
>> +     udelay(10);
>> +
>> +     /* reset iddq_clkbias */
>> +     phy_write(phydev, 0x14, 0x0C00);
>> +
>> +     phy_write(phydev, 0x13, 0x7555);
>> +
>> +     /* reset shadow mode 2 */
>> +     phy_set_clr_bits(phydev, 0x1f, 0x0004, 0);
>
> phy_set_clr_bits returned status code is not checked.
>
>> +
>> +     return 0;
>> +}
>> +
>> +/* Workaround for putting the PHY in IDDQ mode, required
>> + * for all BCM7XXX PHYs
>> + */
>> +static int bcm7xxx_suspend(struct phy_device *phydev)
>
> Factor out with bcm7445_config_init and some helper?

I would rather keep this function simple like it is today, since it
really is only required for entering suspend mode properly, while
bcm7445_config_init(), as the name suggests, is specific to the 7445 only.

Thanks for the review!

>
>> +{
>> +     int ret;
>> +     const struct bcm7xxx_regs {
>> +             int reg;
>> +             u16 value;
>> +     } bcm7xxx_suspend_cfg[] = {
>> +             { 0x1f, 0x008b },
>> +             { 0x10, 0x01c0 },
>> +             { 0x14, 0x7000 },
>> +             { 0x1f, 0x000f },
>> +             { 0x10, 0x20d0 },
>> +             { 0x1f, 0x000b },
>> +     };
>> +     unsigned int i;
>
> --
> Ueimor



-- 
Florian

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 01/10] net: phy: add "internal" PHY mode
  2014-02-13  5:29     ` Florian Fainelli
  (?)
@ 2014-02-13 20:34     ` David Miller
  2014-02-13 20:41       ` Florian Fainelli
  -1 siblings, 1 reply; 33+ messages in thread
From: David Miller @ 2014-02-13 20:34 UTC (permalink / raw)
  To: f.fainelli; +Cc: netdev, cernekee, devicetree

From: Florian Fainelli <f.fainelli@gmail.com>
Date: Wed, 12 Feb 2014 21:29:46 -0800

> On some systems, the PHY can be internal, in the same package as the
> Ethernet MAC, and still be responding to a specific address on the MDIO
> bus, in that case, the Ethernet MAC might need to know about it to
> properly configure a port multiplexer to switch to an internal or
> external PHY. Add a new PHY interface mode for this and update the
> Device Tree of_get_phy_mode() function to look for it.
> 
> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
> ---
> Changes since v1:
> - rebased against latest net-next master branch

This is over-engineering.

The only thing that even uses this value is phy_is_internal(), and
the only user of phy_is_internal() is the generic PHY layer ethtool
operation for get-settings.

The PHY layer already has a place to indicate whether a PHY is
internal or not; overriding that using the PHY mode is trouble
waiting to happen.

Please, just provide some way to propagate this device tree property
into phy->is_internal.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH net-next v2 01/10] net: phy: add "internal" PHY mode
  2014-02-13 20:34     ` David Miller
@ 2014-02-13 20:41       ` Florian Fainelli
  0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2014-02-13 20:41 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Kevin Cernekee, devicetree

2014-02-13 12:34 GMT-08:00 David Miller <davem@davemloft.net>:
> From: Florian Fainelli <f.fainelli@gmail.com>
> Date: Wed, 12 Feb 2014 21:29:46 -0800
>
>> On some systems, the PHY can be internal, in the same package as the
>> Ethernet MAC, and still be responding to a specific address on the MDIO
>> bus, in that case, the Ethernet MAC might need to know about it to
>> properly configure a port multiplexer to switch to an internal or
>> external PHY. Add a new PHY interface mode for this and update the
>> Device Tree of_get_phy_mode() function to look for it.
>>
>> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
>> ---
>> Changes since v1:
>> - rebased against latest net-next master branch
>
> This is over-engineering.
>
> The only thing that even uses this value is phy_is_internal(), and
> the only user of phy_is_internal() is the generic PHY layer ethtool
> operation for get-settings.
>
> The PHY layer already has a place to indicate whether a PHY is
> internal or not, overriding that using the PHY mode is trouble
> waiting to happen.
>
> Please, just provide some way to propagate this device tree property
> into phy->is_internal.

I just realized that I am able to drop this change completely, since I
am adding a PHY driver for the Ethernet MAC which already flags the
devices of interest as internal PHYs. This change originally came up
because I needed to know that before probing for the PHY, which can be
resolved by doing some re-ordering.
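
Concretely, the bcm7xxx driver entries can carry PHY_IS_INTERNAL in their
flags so that phy_is_internal() reports the right thing (sketch, IDs and
names assumed):

	.phy_id		= PHY_ID_BCM7445,
	.phy_id_mask	= 0xfffffff0,
	.name		= "Broadcom BCM7445",
	.features	= PHY_GBIT_FEATURES,
	.flags		= PHY_IS_INTERNAL,
	.config_init	= bcm7xxx_28nm_config_init,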

Thanks for the feedback.
-- 
Florian

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2014-02-13 20:41 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-02-13  5:29 [PATCH net-next v2 00/10] Support for the Broadcom GENET driver Florian Fainelli
2014-02-13  5:29 ` Florian Fainelli
     [not found] ` <1392269395-23513-1-git-send-email-f.fainelli-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2014-02-13  5:29   ` [PATCH net-next v2 01/10] net: phy: add "internal" PHY mode Florian Fainelli
2014-02-13  5:29     ` Florian Fainelli
2014-02-13 20:34     ` David Miller
2014-02-13 20:41       ` Florian Fainelli
2014-02-13  5:29   ` [PATCH net-next v2 04/10] net: phy: add Broadcom BCM7xxx internal PHY driver Florian Fainelli
2014-02-13  5:29     ` Florian Fainelli
2014-02-13 10:34     ` Francois Romieu
2014-02-13 18:41       ` Florian Fainelli
2014-02-13  5:29 ` [PATCH net-next v2 02/10] net: phy: add MoCA PHY type Florian Fainelli
2014-02-13  5:29   ` Florian Fainelli
2014-02-13  5:29 ` [PATCH net-next v2 03/10] net: phy: update port type for MoCA PHYs Florian Fainelli
2014-02-13  5:29   ` Florian Fainelli
2014-02-13  5:29 ` [PATCH net-next v2 05/10] net: bcmgenet: add driver definitions and private structure Florian Fainelli
2014-02-13  5:29   ` Florian Fainelli
2014-02-13  5:29 ` [PATCH net-next v2 06/10] net: bcmgenet: add main driver file Florian Fainelli
2014-02-13  5:29   ` Florian Fainelli
2014-02-13 10:35   ` Francois Romieu
2014-02-13 10:58     ` Joe Perches
2014-02-13 11:38   ` Mark Rutland
2014-02-13  5:29 ` [PATCH net-next v2 07/10] net: bcmgenet: add MDIO routines Florian Fainelli
2014-02-13  5:29   ` Florian Fainelli
2014-02-13 11:50   ` Mark Rutland
2014-02-13 17:00     ` Florian Fainelli
2014-02-13  5:29 ` [PATCH net-next v2 08/10] net: bcmgenet: hook into the build system Florian Fainelli
2014-02-13  5:29   ` Florian Fainelli
2014-02-13  5:29 ` [PATCH net-next v2 09/10] Documentation: add Device tree bindings for Broadcom GENET Florian Fainelli
2014-02-13  5:29   ` Florian Fainelli
     [not found]   ` <1392269395-23513-10-git-send-email-f.fainelli-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2014-02-13 11:13     ` Mark Rutland
     [not found]       ` <20140213111328.GB30705-NuALmloUBlrZROr8t4l/smS4ubULX0JqMm0uRHvK7Nw@public.gmane.org>
2014-02-13 16:57         ` Florian Fainelli
2014-02-13  5:29 ` [PATCH net-next v2 10/10] MAINTAINERS: add entry for the Broadcom GENET driver Florian Fainelli
2014-02-13  5:29   ` Florian Fainelli
