* [PATCH net-next v1 0/7] amd-xgbe: AMD XGBE driver updates 2015-05-12
@ 2015-05-12 19:22 Tom Lendacky
  2015-05-12 19:22 ` [PATCH net-next v1 1/7] amd-xgbe: Add additional stats to be reported via ethtool Tom Lendacky
                   ` (6 more replies)
  0 siblings, 7 replies; 12+ messages in thread
From: Tom Lendacky @ 2015-05-12 19:22 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

This series of patches includes the following functional updates and
changes to the driver:

- Add additional statistics to be collected and reported
- Use the netif_msg_* functions for issuing some debug and informational
  driver messages
- Rx path SKB allocation cleanup/simplification
- Remove the stand-alone phylib driver and incorporate its functionality
  into the NIC driver
- Simplify device tree support while maintaining backwards compatibility
- Fix the flow control negotiation logic to properly configure flow
  control
- Remove the checking and setting of the device dma_mask field

This patch series is based on net-next.

---

Tom Lendacky (7):
      amd-xgbe: Add additional stats to be reported via ethtool
      amd-xgbe: Add netif_msg_* support for driver messages
      amd-xgbe: Rework the Rx path SKB allocation
      amd-xgbe: Move the PHY support into amd-xgbe
      amd-xgbe: Support defining PHY resources in ETH device node
      amd-xgbe: Fix flow control setting logic
      amd-xgbe: Remove manual check and set of dma_mask pointer


 .../devicetree/bindings/net/amd-xgbe-phy.txt       |   48 -
 Documentation/devicetree/bindings/net/amd-xgbe.txt |   40 
 MAINTAINERS                                        |    1 
 drivers/net/ethernet/amd/Kconfig                   |    4 
 drivers/net/ethernet/amd/xgbe/xgbe-common.h        |  155 ++
 drivers/net/ethernet/amd/xgbe/xgbe-dcb.c           |   20 
 drivers/net/ethernet/amd/xgbe/xgbe-desc.c          |   41 
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c           |  115 +
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c           |  348 ++--
 drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c       |   79 +
 drivers/net/ethernet/amd/xgbe/xgbe-main.c          |  384 ++++
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c          | 1237 ++++++++++++-
 drivers/net/ethernet/amd/xgbe/xgbe.h               |  236 ++-
 drivers/net/phy/Kconfig                            |    6 
 drivers/net/phy/Makefile                           |    1 
 drivers/net/phy/amd-xgbe-phy.c                     | 1862 --------------------
 16 files changed, 2125 insertions(+), 2452 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/net/amd-xgbe-phy.txt
 delete mode 100644 drivers/net/phy/amd-xgbe-phy.c

-- 
Tom Lendacky


* [PATCH net-next v1 1/7] amd-xgbe: Add additional stats to be reported via ethtool
  2015-05-12 19:22 [PATCH net-next v1 0/7] amd-xgbe: AMD XGBE driver updates 2015-05-12 Tom Lendacky
@ 2015-05-12 19:22 ` Tom Lendacky
  2015-05-12 19:22 ` [PATCH net-next v1 2/7] amd-xgbe: Add netif_msg_* support for driver messages Tom Lendacky
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Tom Lendacky @ 2015-05-12 19:22 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Add additional/extended statistics, beyond what is provided by the
hardware, to be reported via ethtool. The new stats focus on the calls
into ndo_start_xmit and the napi_poll routine.
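
For reference, each XGMAC_EXT_STAT() entry records just a string, a size
and an offset into struct xgbe_prv_data, exactly like the existing
XGMAC_MMC_STAT() entries, so the driver's ethtool string/statistics
callbacks can report the new counters without further changes. Below is
a minimal sketch of how such a table is typically walked in a
.get_ethtool_stats handler; the ex_* names are illustrative, not the
driver's actual code, and the size field is omitted since every counter
here is a u64:

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include "xgbe.h"	/* struct xgbe_prv_data, struct xgbe_ext_stats */

/* Illustrative name/offset descriptor, mirroring what the
 * XGMAC_MMC_STAT()/XGMAC_EXT_STAT() macros build in xgbe-ethtool.c.
 */
struct ex_stat_desc {
        char name[ETH_GSTRING_LEN];
        int offset;		/* offsetof(struct xgbe_prv_data, ...) */
};

static const struct ex_stat_desc ex_stat_table[] = {
        { "tx_tso_packets",
          offsetof(struct xgbe_prv_data, ext_stats.tx_tso_packets) },
        { "rx_split_header_packets",
          offsetof(struct xgbe_prv_data, ext_stats.rx_split_header_packets) },
};

static void ex_get_ethtool_stats(struct net_device *netdev,
                                 struct ethtool_stats *stats, u64 *data)
{
        struct xgbe_prv_data *pdata = netdev_priv(netdev);
        u8 *stat;
        int i;

        for (i = 0; i < ARRAY_SIZE(ex_stat_table); i++) {
                /* Every counter lives inside struct xgbe_prv_data, so the
                 * recorded offset is enough to read it out directly.
                 */
                stat = (u8 *)pdata + ex_stat_table[i].offset;
                *data++ = *(u64 *)stat;
        }
}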

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c     |   12 +++++++++---
 drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c |    8 ++++++++
 drivers/net/ethernet/amd/xgbe/xgbe.h         |    6 ++++++
 3 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 21d9497..6f593a5 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -1533,6 +1533,8 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 				  packet->tcp_payload_len);
 		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, TCPHDRLEN,
 				  packet->tcp_header_len / 4);
+
+		pdata->ext_stats.tx_tso_packets++;
 	} else {
 		/* Enable CRC and Pad Insertion */
 		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, CPC, 0);
@@ -1618,11 +1620,12 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 
 static int xgbe_dev_read(struct xgbe_channel *channel)
 {
+	struct xgbe_prv_data *pdata = channel->pdata;
 	struct xgbe_ring *ring = channel->rx_ring;
 	struct xgbe_ring_data *rdata;
 	struct xgbe_ring_desc *rdesc;
 	struct xgbe_packet_data *packet = &ring->packet_data;
-	struct net_device *netdev = channel->pdata->netdev;
+	struct net_device *netdev = pdata->netdev;
 	unsigned int err, etlt, l34t;
 
 	DBGPR("-->xgbe_dev_read: cur = %d\n", ring->cur);
@@ -1661,9 +1664,12 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
 			       CONTEXT_NEXT, 1);
 
 	/* Get the header length */
-	if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, FD))
+	if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, FD)) {
 		rdata->rx.hdr_len = XGMAC_GET_BITS_LE(rdesc->desc2,
 						      RX_NORMAL_DESC2, HL);
+		if (rdata->rx.hdr_len)
+			pdata->ext_stats.rx_split_header_packets++;
+	}
 
 	/* Get the RSS hash */
 	if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, RSV)) {
@@ -1700,7 +1706,7 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
 		       INCOMPLETE, 0);
 
 	/* Set checksum done indicator as appropriate */
-	if (channel->pdata->netdev->features & NETIF_F_RXCSUM)
+	if (netdev->features & NETIF_F_RXCSUM)
 		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
 			       CSUM_DONE, 1);
 
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
index 5f149e8..95baa86 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
@@ -133,6 +133,12 @@ struct xgbe_stats {
 	  offsetof(struct xgbe_prv_data, mmc_stats._var),	\
 	}
 
+#define XGMAC_EXT_STAT(_string, _var)				\
+	{ _string,						\
+	  FIELD_SIZEOF(struct xgbe_ext_stats, _var),		\
+	  offsetof(struct xgbe_prv_data, ext_stats._var),	\
+	}
+
 static const struct xgbe_stats xgbe_gstring_stats[] = {
 	XGMAC_MMC_STAT("tx_bytes", txoctetcount_gb),
 	XGMAC_MMC_STAT("tx_packets", txframecount_gb),
@@ -140,6 +146,7 @@ static const struct xgbe_stats xgbe_gstring_stats[] = {
 	XGMAC_MMC_STAT("tx_broadcast_packets", txbroadcastframes_gb),
 	XGMAC_MMC_STAT("tx_multicast_packets", txmulticastframes_gb),
 	XGMAC_MMC_STAT("tx_vlan_packets", txvlanframes_g),
+	XGMAC_EXT_STAT("tx_tso_packets", tx_tso_packets),
 	XGMAC_MMC_STAT("tx_64_byte_packets", tx64octets_gb),
 	XGMAC_MMC_STAT("tx_65_to_127_byte_packets", tx65to127octets_gb),
 	XGMAC_MMC_STAT("tx_128_to_255_byte_packets", tx128to255octets_gb),
@@ -171,6 +178,7 @@ static const struct xgbe_stats xgbe_gstring_stats[] = {
 	XGMAC_MMC_STAT("rx_fifo_overflow_errors", rxfifooverflow),
 	XGMAC_MMC_STAT("rx_watchdog_errors", rxwatchdogerror),
 	XGMAC_MMC_STAT("rx_pause_frames", rxpauseframes),
+	XGMAC_EXT_STAT("rx_split_header_packets", rx_split_header_packets),
 };
 
 #define XGBE_STATS_COUNT	ARRAY_SIZE(xgbe_gstring_stats)
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index e62dfa2..ef3d1e5 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -492,6 +492,11 @@ struct xgbe_mmc_stats {
 	u64 rxwatchdogerror;
 };
 
+struct xgbe_ext_stats {
+	u64 tx_tso_packets;
+	u64 rx_split_header_packets;
+};
+
 struct xgbe_hw_if {
 	int (*tx_complete)(struct xgbe_ring_desc *);
 
@@ -750,6 +755,7 @@ struct xgbe_prv_data {
 	netdev_features_t netdev_features;
 	struct napi_struct napi;
 	struct xgbe_mmc_stats mmc_stats;
+	struct xgbe_ext_stats ext_stats;
 
 	/* Filtering support */
 	unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];


* [PATCH net-next v1 2/7] amd-xgbe: Add netif_msg_* support for driver messages
  2015-05-12 19:22 [PATCH net-next v1 0/7] amd-xgbe: AMD XGBE driver updates 2015-05-12 Tom Lendacky
  2015-05-12 19:22 ` [PATCH net-next v1 1/7] amd-xgbe: Add additional stats to be reported via ethtool Tom Lendacky
@ 2015-05-12 19:22 ` Tom Lendacky
  2015-05-12 19:44   ` Joe Perches
  2015-05-12 19:22 ` [PATCH net-next v1 3/7] amd-xgbe: Rework the Rx path SKB allocation Tom Lendacky
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 12+ messages in thread
From: Tom Lendacky @ 2015-05-12 19:22 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Add support for the network interface message level setting to control
whether some of the driver messages are issued.
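
Independent of the module parameter added below, drivers that keep a
msg_enable mask normally also expose it through the ethtool
.get_msglevel/.set_msglevel operations so the level can be queried and
changed at runtime (e.g. "ethtool -s <dev> msglvl ..."). A minimal
sketch of such handlers, assuming pdata->msg_enable is the backing
field (the ex_* names are illustrative; whether this series wires these
ops up is not shown in this patch):

static u32 ex_get_msglevel(struct net_device *netdev)
{
        struct xgbe_prv_data *pdata = netdev_priv(netdev);

        return pdata->msg_enable;
}

static void ex_set_msglevel(struct net_device *netdev, u32 msglevel)
{
        struct xgbe_prv_data *pdata = netdev_priv(netdev);

        pdata->msg_enable = msglevel;
}

/* Hooked into the driver's struct ethtool_ops:
 *      .get_msglevel   = ex_get_msglevel,
 *      .set_msglevel   = ex_set_msglevel,
 */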

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-dcb.c  |   20 ++++-
 drivers/net/ethernet/amd/xgbe/xgbe-desc.c |   39 +++++++---
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c  |   86 +++++++++++++++-------
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c  |  111 +++++++++++++++++------------
 drivers/net/ethernet/amd/xgbe/xgbe-main.c |   15 +++-
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c |   83 +++++++++++-----------
 drivers/net/ethernet/amd/xgbe/xgbe.h      |   23 ++----
 7 files changed, 225 insertions(+), 152 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c b/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c
index 8a50b01..754b21d 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c
@@ -150,9 +150,14 @@ static int xgbe_dcb_ieee_setets(struct net_device *netdev,
 	tc_ets = 0;
 	tc_ets_weight = 0;
 	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
-		DBGPR("  TC%u: tx_bw=%hhu, rx_bw=%hhu, tsa=%hhu\n", i,
-		      ets->tc_tx_bw[i], ets->tc_rx_bw[i], ets->tc_tsa[i]);
-		DBGPR("  PRIO%u: TC=%hhu\n", i, ets->prio_tc[i]);
+		if (netif_msg_drv(pdata)) {
+			netdev_dbg(netdev,
+				   "TC%u: tx_bw=%hhu, rx_bw=%hhu, tsa=%hhu\n",
+				   i, ets->tc_tx_bw[i], ets->tc_rx_bw[i],
+				   ets->tc_tsa[i]);
+			netdev_dbg(netdev, "PRIO%u: TC=%hhu\n", i,
+				   ets->prio_tc[i]);
+		}
 
 		if ((ets->tc_tx_bw[i] || ets->tc_tsa[i]) &&
 		    (i >= pdata->hw_feat.tc_cnt))
@@ -214,8 +219,9 @@ static int xgbe_dcb_ieee_setpfc(struct net_device *netdev,
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
 
-	DBGPR("  cap=%hhu, en=%hhx, mbc=%hhu, delay=%hhu\n",
-	      pfc->pfc_cap, pfc->pfc_en, pfc->mbc, pfc->delay);
+	if (netif_msg_drv(pdata))
+		netdev_dbg(netdev, "cap=%hhu, en=%#hhx, mbc=%hhu, delay=%hhu\n",
+			   pfc->pfc_cap, pfc->pfc_en, pfc->mbc, pfc->delay);
 
 	if (!pdata->pfc) {
 		pdata->pfc = devm_kzalloc(pdata->dev, sizeof(*pdata->pfc),
@@ -238,9 +244,11 @@ static u8 xgbe_dcb_getdcbx(struct net_device *netdev)
 
 static u8 xgbe_dcb_setdcbx(struct net_device *netdev, u8 dcbx)
 {
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
 	u8 support = xgbe_dcb_getdcbx(netdev);
 
-	DBGPR("  DCBX=%#hhx\n", dcbx);
+	if (netif_msg_drv(pdata))
+		netdev_dbg(netdev, "DCBX=%#hhx\n", dcbx);
 
 	if (dcbx & ~support)
 		return 1;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
index d81fc6b..6f2a914 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
@@ -208,8 +208,10 @@ static int xgbe_init_ring(struct xgbe_prv_data *pdata,
 	if (!ring->rdata)
 		return -ENOMEM;
 
-	DBGPR("    rdesc=0x%p, rdesc_dma=0x%llx, rdata=0x%p\n",
-	      ring->rdesc, ring->rdesc_dma, ring->rdata);
+	if (netif_msg_drv(pdata))
+		netdev_dbg(pdata->netdev,
+			   "rdesc=%p, rdesc_dma=%pad, rdata=%p\n",
+			   ring->rdesc, &ring->rdesc_dma, ring->rdata);
 
 	DBGPR("<--xgbe_init_ring\n");
 
@@ -226,7 +228,10 @@ static int xgbe_alloc_ring_resources(struct xgbe_prv_data *pdata)
 
 	channel = pdata->channel;
 	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		DBGPR("  %s - tx_ring:\n", channel->name);
+		if (netif_msg_drv(pdata))
+			netdev_dbg(pdata->netdev, "%s - Tx ring:\n",
+				   channel->name);
+
 		ret = xgbe_init_ring(pdata, channel->tx_ring,
 				     pdata->tx_desc_count);
 		if (ret) {
@@ -235,12 +240,15 @@ static int xgbe_alloc_ring_resources(struct xgbe_prv_data *pdata)
 			goto err_ring;
 		}
 
-		DBGPR("  %s - rx_ring:\n", channel->name);
+		if (netif_msg_drv(pdata))
+			netdev_dbg(pdata->netdev, "%s - Rx ring:\n",
+				   channel->name);
+
 		ret = xgbe_init_ring(pdata, channel->rx_ring,
 				     pdata->rx_desc_count);
 		if (ret) {
 			netdev_alert(pdata->netdev,
-				     "error initializing Tx ring\n");
+				     "error initializing Rx ring\n");
 			goto err_ring;
 		}
 	}
@@ -518,8 +526,6 @@ static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
 	rdata = XGBE_GET_DESC_DATA(ring, cur_index);
 
 	if (tso) {
-		DBGPR("  TSO packet\n");
-
 		/* Map the TSO header */
 		skb_dma = dma_map_single(pdata->dev, skb->data,
 					 packet->header_len, DMA_TO_DEVICE);
@@ -529,6 +535,10 @@ static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
 		}
 		rdata->skb_dma = skb_dma;
 		rdata->skb_dma_len = packet->header_len;
+		if (netif_msg_tx_queued(pdata))
+			netdev_dbg(pdata->netdev,
+				   "skb header: index=%u, dma=%pad, len=%u\n",
+				   cur_index, &skb_dma, packet->header_len);
 
 		offset = packet->header_len;
 
@@ -550,8 +560,10 @@ static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
 		}
 		rdata->skb_dma = skb_dma;
 		rdata->skb_dma_len = len;
-		DBGPR("  skb data: index=%u, dma=0x%llx, len=%u\n",
-		      cur_index, skb_dma, len);
+		if (netif_msg_tx_queued(pdata))
+			netdev_dbg(pdata->netdev,
+				   "skb data: index=%u, dma=%pad, len=%u\n",
+				   cur_index, &skb_dma, len);
 
 		datalen -= len;
 		offset += len;
@@ -563,7 +575,8 @@ static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
 	}
 
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
-		DBGPR("  mapping frag %u\n", i);
+		if (netif_msg_tx_queued(pdata))
+			netdev_dbg(pdata->netdev, "mapping frag %u\n", i);
 
 		frag = &skb_shinfo(skb)->frags[i];
 		offset = 0;
@@ -582,8 +595,10 @@ static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
 			rdata->skb_dma = skb_dma;
 			rdata->skb_dma_len = len;
 			rdata->mapped_as_page = 1;
-			DBGPR("  skb data: index=%u, dma=0x%llx, len=%u\n",
-			      cur_index, skb_dma, len);
+			if (netif_msg_tx_queued(pdata))
+				netdev_dbg(pdata->netdev,
+					   "skb frag: index=%u, dma=%pad, len=%u\n",
+					   cur_index, &skb_dma, len);
 
 			datalen -= len;
 			offset += len;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 6f593a5..35fad1a 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -710,7 +710,9 @@ static int xgbe_set_promiscuous_mode(struct xgbe_prv_data *pdata,
 	if (XGMAC_IOREAD_BITS(pdata, MAC_PFR, PR) == val)
 		return 0;
 
-	DBGPR("  %s promiscuous mode\n", enable ? "entering" : "leaving");
+	if (netif_msg_drv(pdata))
+		netdev_dbg(pdata->netdev, "%s promiscuous mode\n",
+			   enable ? "entering" : "leaving");
 	XGMAC_IOWRITE_BITS(pdata, MAC_PFR, PR, val);
 
 	return 0;
@@ -724,7 +726,9 @@ static int xgbe_set_all_multicast_mode(struct xgbe_prv_data *pdata,
 	if (XGMAC_IOREAD_BITS(pdata, MAC_PFR, PM) == val)
 		return 0;
 
-	DBGPR("  %s allmulti mode\n", enable ? "entering" : "leaving");
+	if (netif_msg_drv(pdata))
+		netdev_dbg(pdata->netdev, "%s allmulti mode\n",
+			   enable ? "entering" : "leaving");
 	XGMAC_IOWRITE_BITS(pdata, MAC_PFR, PM, val);
 
 	return 0;
@@ -749,8 +753,10 @@ static void xgbe_set_mac_reg(struct xgbe_prv_data *pdata,
 		mac_addr[0] = ha->addr[4];
 		mac_addr[1] = ha->addr[5];
 
-		DBGPR("  adding mac address %pM at 0x%04x\n", ha->addr,
-		      *mac_reg);
+		if (netif_msg_drv(pdata))
+			netdev_dbg(pdata->netdev,
+				   "adding mac address %pM at %#x\n",
+				   ha->addr, *mac_reg);
 
 		XGMAC_SET_BITS(mac_addr_hi, MAC_MACA1HR, AE, 1);
 	}
@@ -1322,7 +1328,8 @@ static void xgbe_config_dcb_tc(struct xgbe_prv_data *pdata)
 	for (i = 0; i < pdata->hw_feat.tc_cnt; i++) {
 		switch (ets->tc_tsa[i]) {
 		case IEEE_8021QAZ_TSA_STRICT:
-			DBGPR("  TC%u using SP\n", i);
+			if (netif_msg_drv(pdata))
+				netdev_dbg(pdata->netdev, "TC%u using SP\n", i);
 			XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_ETSCR, TSA,
 					       MTL_TSA_SP);
 			break;
@@ -1330,7 +1337,10 @@ static void xgbe_config_dcb_tc(struct xgbe_prv_data *pdata)
 			weight = total_weight * ets->tc_tx_bw[i] / 100;
 			weight = clamp(weight, min_weight, total_weight);
 
-			DBGPR("  TC%u using DWRR (weight %u)\n", i, weight);
+			if (netif_msg_drv(pdata))
+				netdev_dbg(pdata->netdev,
+					   "TC%u using DWRR (weight %u)\n", i,
+					   weight);
 			XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_ETSCR, TSA,
 					       MTL_TSA_ETS);
 			XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_QWR, QW,
@@ -1359,7 +1369,9 @@ static void xgbe_config_dcb_pfc(struct xgbe_prv_data *pdata)
 		}
 		mask &= 0xff;
 
-		DBGPR("  TC%u PFC mask=%#x\n", tc, mask);
+		if (netif_msg_drv(pdata))
+			netdev_dbg(pdata->netdev, "TC%u PFC mask=%#x\n", tc,
+				   mask);
 		reg = MTL_TCPM0R + (MTL_TCPM_INC * (tc / MTL_TCPM_TC_PER_REG));
 		reg_val = XGMAC_IOREAD(pdata, reg);
 
@@ -1457,8 +1469,10 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 	/* Create a context descriptor if this is a TSO packet */
 	if (tso_context || vlan_context) {
 		if (tso_context) {
-			DBGPR("  TSO context descriptor, mss=%u\n",
-			      packet->mss);
+			if (netif_msg_tx_queued(pdata))
+				netdev_dbg(pdata->netdev,
+					   "TSO context descriptor, mss=%u\n",
+					   packet->mss);
 
 			/* Set the MSS size */
 			XGMAC_SET_BITS_LE(rdesc->desc2, TX_CONTEXT_DESC2,
@@ -1476,8 +1490,10 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 		}
 
 		if (vlan_context) {
-			DBGPR("  VLAN context descriptor, ctag=%u\n",
-			      packet->vlan_ctag);
+			if (netif_msg_tx_queued(pdata))
+				netdev_dbg(pdata->netdev,
+					   "VLAN context descriptor, ctag=%u\n",
+					   packet->vlan_ctag);
 
 			/* Mark it as a CONTEXT descriptor */
 			XGMAC_SET_BITS_LE(rdesc->desc3, TX_CONTEXT_DESC3,
@@ -1596,9 +1612,9 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 	rdesc = rdata->rdesc;
 	XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, OWN, 1);
 
-#ifdef XGMAC_ENABLE_TX_DESC_DUMP
-	xgbe_dump_tx_desc(ring, start_index, packet->rdesc_count, 1);
-#endif
+	if (netif_msg_tx_queued(pdata))
+		xgbe_dump_tx_desc(pdata, ring, start_index,
+				  packet->rdesc_count, 1);
 
 	/* Make sure ownership is written to the descriptor */
 	dma_wmb();
@@ -1640,9 +1656,8 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
 	/* Make sure descriptor fields are read after reading the OWN bit */
 	dma_rmb();
 
-#ifdef XGMAC_ENABLE_RX_DESC_DUMP
-	xgbe_dump_rx_desc(ring, rdesc, ring->cur);
-#endif
+	if (netif_msg_rx_status(pdata))
+		xgbe_dump_rx_desc(pdata, ring, ring->cur);
 
 	if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, CTXT)) {
 		/* Timestamp Context Descriptor */
@@ -1713,7 +1728,8 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
 	/* Check for errors (only valid in last descriptor) */
 	err = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, ES);
 	etlt = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, ETLT);
-	DBGPR("  err=%u, etlt=%#x\n", err, etlt);
+	if (netif_msg_rx_status(pdata))
+		netdev_dbg(netdev, "err=%u, etlt=%#x\n", err, etlt);
 
 	if (!err || !etlt) {
 		/* No error if err is 0 or etlt is 0 */
@@ -1724,7 +1740,9 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
 			packet->vlan_ctag = XGMAC_GET_BITS_LE(rdesc->desc0,
 							      RX_NORMAL_DESC0,
 							      OVT);
-			DBGPR("  vlan-ctag=0x%04x\n", packet->vlan_ctag);
+			if (netif_msg_rx_status(pdata))
+				netdev_dbg(netdev, "vlan-ctag=%#06x\n",
+					   packet->vlan_ctag);
 		}
 	} else {
 		if ((etlt == 0x05) || (etlt == 0x06))
@@ -2032,9 +2050,10 @@ static void xgbe_config_tx_fifo_size(struct xgbe_prv_data *pdata)
 	for (i = 0; i < pdata->tx_q_count; i++)
 		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TQS, fifo_size);
 
-	netdev_notice(pdata->netdev,
-		      "%d Tx hardware queues, %d byte fifo per queue\n",
-		      pdata->tx_q_count, ((fifo_size + 1) * 256));
+	if (netif_msg_drv(pdata))
+		netdev_info(pdata->netdev,
+			    "%d Tx hardware queues, %d byte fifo per queue\n",
+			    pdata->tx_q_count, ((fifo_size + 1) * 256));
 }
 
 static void xgbe_config_rx_fifo_size(struct xgbe_prv_data *pdata)
@@ -2048,9 +2067,10 @@ static void xgbe_config_rx_fifo_size(struct xgbe_prv_data *pdata)
 	for (i = 0; i < pdata->rx_q_count; i++)
 		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, RQS, fifo_size);
 
-	netdev_notice(pdata->netdev,
-		      "%d Rx hardware queues, %d byte fifo per queue\n",
-		      pdata->rx_q_count, ((fifo_size + 1) * 256));
+	if (netif_msg_drv(pdata))
+		netdev_info(pdata->netdev,
+			    "%d Rx hardware queues, %d byte fifo per queue\n",
+			    pdata->rx_q_count, ((fifo_size + 1) * 256));
 }
 
 static void xgbe_config_queue_mapping(struct xgbe_prv_data *pdata)
@@ -2069,14 +2089,18 @@ static void xgbe_config_queue_mapping(struct xgbe_prv_data *pdata)
 
 	for (i = 0, queue = 0; i < pdata->hw_feat.tc_cnt; i++) {
 		for (j = 0; j < qptc; j++) {
-			DBGPR("  TXq%u mapped to TC%u\n", queue, i);
+			if (netif_msg_drv(pdata))
+				netdev_dbg(pdata->netdev,
+					   "TXq%u mapped to TC%u\n", queue, i);
 			XGMAC_MTL_IOWRITE_BITS(pdata, queue, MTL_Q_TQOMR,
 					       Q2TCMAP, i);
 			pdata->q2tc_map[queue++] = i;
 		}
 
 		if (i < qptc_extra) {
-			DBGPR("  TXq%u mapped to TC%u\n", queue, i);
+			if (netif_msg_drv(pdata))
+				netdev_dbg(pdata->netdev,
+					   "TXq%u mapped to TC%u\n", queue, i);
 			XGMAC_MTL_IOWRITE_BITS(pdata, queue, MTL_Q_TQOMR,
 					       Q2TCMAP, i);
 			pdata->q2tc_map[queue++] = i;
@@ -2094,13 +2118,17 @@ static void xgbe_config_queue_mapping(struct xgbe_prv_data *pdata)
 	for (i = 0, prio = 0; i < prio_queues;) {
 		mask = 0;
 		for (j = 0; j < ppq; j++) {
-			DBGPR("  PRIO%u mapped to RXq%u\n", prio, i);
+			if (netif_msg_drv(pdata))
+				netdev_dbg(pdata->netdev,
+					   "PRIO%u mapped to RXq%u\n", prio, i);
 			mask |= (1 << prio);
 			pdata->prio2q_map[prio++] = i;
 		}
 
 		if (i < ppq_extra) {
-			DBGPR("  PRIO%u mapped to RXq%u\n", prio, i);
+			if (netif_msg_drv(pdata))
+				netdev_dbg(pdata->netdev,
+					   "PRIO%u mapped to RXq%u\n", prio, i);
 			mask |= (1 << prio);
 			pdata->prio2q_map[prio++] = i;
 		}
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index db84ddc..a93bbc6 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -183,9 +183,12 @@ static int xgbe_alloc_channels(struct xgbe_prv_data *pdata)
 			channel->rx_ring = rx_ring++;
 		}
 
-		DBGPR("  %s: queue=%u, dma_regs=%p, dma_irq=%d, tx=%p, rx=%p\n",
-		      channel->name, channel->queue_index, channel->dma_regs,
-		      channel->dma_irq, channel->tx_ring, channel->rx_ring);
+		if (netif_msg_drv(pdata))
+			netdev_dbg(pdata->netdev,
+				   "%s: dma_regs=%p, dma_irq=%d, tx=%p, rx=%p\n",
+				   channel->name, channel->dma_regs,
+				   channel->dma_irq, channel->tx_ring,
+				   channel->rx_ring);
 	}
 
 	pdata->channel = channel_mem;
@@ -235,7 +238,9 @@ static int xgbe_maybe_stop_tx_queue(struct xgbe_channel *channel,
 	struct xgbe_prv_data *pdata = channel->pdata;
 
 	if (count > xgbe_tx_avail_desc(ring)) {
-		DBGPR("  Tx queue stopped, not enough descriptors available\n");
+		if (netif_msg_drv(pdata))
+			netdev_info(pdata->netdev,
+				    "Tx queue stopped, not enough descriptors available\n");
 		netif_stop_subqueue(pdata->netdev, channel->queue_index);
 		ring->tx.queue_stopped = 1;
 
@@ -330,7 +335,8 @@ static irqreturn_t xgbe_isr(int irq, void *data)
 	if (!dma_isr)
 		goto isr_done;
 
-	DBGPR("  DMA_ISR = %08x\n", dma_isr);
+	if (netif_msg_intr(pdata))
+		netdev_dbg(pdata->netdev, "DMA_ISR=%#010x\n", dma_isr);
 
 	for (i = 0; i < pdata->channel_count; i++) {
 		if (!(dma_isr & (1 << i)))
@@ -339,7 +345,9 @@ static irqreturn_t xgbe_isr(int irq, void *data)
 		channel = pdata->channel + i;
 
 		dma_ch_isr = XGMAC_DMA_IOREAD(channel, DMA_CH_SR);
-		DBGPR("  DMA_CH%u_ISR = %08x\n", i, dma_ch_isr);
+		if (netif_msg_intr(pdata))
+			netdev_dbg(pdata->netdev, "DMA_CH%u_ISR=%#010x\n", i,
+				   dma_ch_isr);
 
 		/* The TI or RI interrupt bits may still be set even if using
 		 * per channel DMA interrupts. Check to be sure those are not
@@ -386,8 +394,6 @@ static irqreturn_t xgbe_isr(int irq, void *data)
 		}
 	}
 
-	DBGPR("  DMA_ISR = %08x\n", XGMAC_IOREAD(pdata, DMA_ISR));
-
 isr_done:
 	return IRQ_HANDLED;
 }
@@ -448,7 +454,6 @@ static void xgbe_init_tx_timers(struct xgbe_prv_data *pdata)
 		if (!channel->tx_ring)
 			break;
 
-		DBGPR("  %s adding tx timer\n", channel->name);
 		setup_timer(&channel->tx_timer, xgbe_tx_timer,
 			    (unsigned long)channel);
 	}
@@ -468,7 +473,6 @@ static void xgbe_stop_tx_timers(struct xgbe_prv_data *pdata)
 		if (!channel->tx_ring)
 			break;
 
-		DBGPR("  %s deleting tx timer\n", channel->name);
 		del_timer_sync(&channel->tx_timer);
 	}
 
@@ -848,8 +852,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 		ret = -ENODEV;
 		goto err_phy_connect;
 	}
-	DBGPR("  phy_connect_direct succeeded for PHY %s, link=%d\n",
-	      dev_name(&phydev->dev), phydev->link);
+	if (netif_msg_ifup(pdata))
+		netdev_dbg(pdata->netdev,
+			   "phy_connect_direct succeeded for PHY %s\n",
+			   dev_name(&phydev->dev));
 
 	return 0;
 
@@ -1478,7 +1484,8 @@ static int xgbe_xmit(struct sk_buff *skb, struct net_device *netdev)
 	ret = NETDEV_TX_OK;
 
 	if (skb->len == 0) {
-		netdev_err(netdev, "empty skb received from stack\n");
+		if (netif_msg_tx_err(pdata))
+			netdev_err(netdev, "empty skb received from stack\n");
 		dev_kfree_skb_any(skb);
 		goto tx_netdev_return;
 	}
@@ -1494,7 +1501,8 @@ static int xgbe_xmit(struct sk_buff *skb, struct net_device *netdev)
 
 	ret = xgbe_prep_tso(skb, packet);
 	if (ret) {
-		netdev_err(netdev, "error processing TSO packet\n");
+		if (netif_msg_tx_err(pdata))
+			netdev_err(netdev, "error processing TSO packet\n");
 		dev_kfree_skb_any(skb);
 		goto tx_netdev_return;
 	}
@@ -1513,9 +1521,8 @@ static int xgbe_xmit(struct sk_buff *skb, struct net_device *netdev)
 	/* Configure required descriptor fields for transmission */
 	hw_if->dev_xmit(channel);
 
-#ifdef XGMAC_ENABLE_TX_PKT_DUMP
-	xgbe_print_pkt(netdev, skb, true);
-#endif
+	if (netif_msg_pktdata(pdata))
+		xgbe_print_pkt(netdev, skb, true);
 
 	/* Stop the queue in advance if there may not be enough descriptors */
 	xgbe_maybe_stop_tx_queue(channel, ring, XGBE_TX_MAX_DESCS);
@@ -1710,7 +1717,9 @@ static int xgbe_setup_tc(struct net_device *netdev, u8 tc)
 			       (pdata->q2tc_map[queue] == i))
 				queue++;
 
-			DBGPR("  TC%u using TXq%u-%u\n", i, offset, queue - 1);
+			if (netif_msg_drv(pdata))
+				netdev_dbg(netdev, "TC%u using TXq%u-%u\n", i,
+					   offset, queue - 1);
 			netdev_set_tc_queue(netdev, i, queue - offset, offset);
 			offset = queue;
 		}
@@ -1877,9 +1886,8 @@ static int xgbe_tx_poll(struct xgbe_channel *channel)
 		 * bit */
 		dma_rmb();
 
-#ifdef XGMAC_ENABLE_TX_DESC_DUMP
-		xgbe_dump_tx_desc(ring, ring->dirty, 1, 0);
-#endif
+		if (netif_msg_tx_done(pdata))
+			xgbe_dump_tx_desc(pdata, ring, ring->dirty, 1, 0);
 
 		if (hw_if->is_last_desc(rdesc)) {
 			tx_packets += rdata->tx.packets;
@@ -1982,8 +1990,9 @@ read_again:
 			goto read_again;
 
 		if (error || packet->errors) {
-			if (packet->errors)
-				DBGPR("Error in received packet\n");
+			if (packet->errors && netif_msg_rx_err(pdata))
+				netdev_err(netdev,
+					   "error in received packet\n");
 			dev_kfree_skb(skb);
 			goto next_packet;
 		}
@@ -2033,14 +2042,15 @@ skip_data:
 			max_len += VLAN_HLEN;
 
 		if (skb->len > max_len) {
-			DBGPR("packet length exceeds configured MTU\n");
+			if (netif_msg_rx_err(pdata))
+				netdev_err(netdev,
+					   "packet length exceeds configured MTU\n");
 			dev_kfree_skb(skb);
 			goto next_packet;
 		}
 
-#ifdef XGMAC_ENABLE_RX_PKT_DUMP
-		xgbe_print_pkt(netdev, skb, false);
-#endif
+		if (netif_msg_pktdata(pdata))
+			xgbe_print_pkt(netdev, skb, false);
 
 		skb_checksum_none_assert(skb);
 		if (XGMAC_GET_BITS(packet->attributes,
@@ -2165,8 +2175,8 @@ static int xgbe_all_poll(struct napi_struct *napi, int budget)
 	return processed;
 }
 
-void xgbe_dump_tx_desc(struct xgbe_ring *ring, unsigned int idx,
-		       unsigned int count, unsigned int flag)
+void xgbe_dump_tx_desc(struct xgbe_prv_data *pdata, struct xgbe_ring *ring,
+		       unsigned int idx, unsigned int count, unsigned int flag)
 {
 	struct xgbe_ring_data *rdata;
 	struct xgbe_ring_desc *rdesc;
@@ -2174,20 +2184,29 @@ void xgbe_dump_tx_desc(struct xgbe_ring *ring, unsigned int idx,
 	while (count--) {
 		rdata = XGBE_GET_DESC_DATA(ring, idx);
 		rdesc = rdata->rdesc;
-		pr_alert("TX_NORMAL_DESC[%d %s] = %08x:%08x:%08x:%08x\n", idx,
-			 (flag == 1) ? "QUEUED FOR TX" : "TX BY DEVICE",
-			 le32_to_cpu(rdesc->desc0), le32_to_cpu(rdesc->desc1),
-			 le32_to_cpu(rdesc->desc2), le32_to_cpu(rdesc->desc3));
+		netdev_dbg(pdata->netdev,
+			   "TX_NORMAL_DESC[%d %s] = %08x:%08x:%08x:%08x\n", idx,
+			   (flag == 1) ? "QUEUED FOR TX" : "TX BY DEVICE",
+			   le32_to_cpu(rdesc->desc0),
+			   le32_to_cpu(rdesc->desc1),
+			   le32_to_cpu(rdesc->desc2),
+			   le32_to_cpu(rdesc->desc3));
 		idx++;
 	}
 }
 
-void xgbe_dump_rx_desc(struct xgbe_ring *ring, struct xgbe_ring_desc *desc,
+void xgbe_dump_rx_desc(struct xgbe_prv_data *pdata, struct xgbe_ring *ring,
 		       unsigned int idx)
 {
-	pr_alert("RX_NORMAL_DESC[%d RX BY DEVICE] = %08x:%08x:%08x:%08x\n", idx,
-		 le32_to_cpu(desc->desc0), le32_to_cpu(desc->desc1),
-		 le32_to_cpu(desc->desc2), le32_to_cpu(desc->desc3));
+	struct xgbe_ring_data *rdata;
+	struct xgbe_ring_desc *rdesc;
+
+	rdata = XGBE_GET_DESC_DATA(ring, idx);
+	rdesc = rdata->rdesc;
+	netdev_dbg(pdata->netdev,
+		   "RX_NORMAL_DESC[%d RX BY DEVICE] = %08x:%08x:%08x:%08x\n",
+		   idx, le32_to_cpu(rdesc->desc0), le32_to_cpu(rdesc->desc1),
+		   le32_to_cpu(rdesc->desc2), le32_to_cpu(rdesc->desc3));
 }
 
 void xgbe_print_pkt(struct net_device *netdev, struct sk_buff *skb, bool tx_rx)
@@ -2197,21 +2216,21 @@ void xgbe_print_pkt(struct net_device *netdev, struct sk_buff *skb, bool tx_rx)
 	unsigned char buffer[128];
 	unsigned int i, j;
 
-	netdev_alert(netdev, "\n************** SKB dump ****************\n");
+	netdev_dbg(netdev, "\n************** SKB dump ****************\n");
 
-	netdev_alert(netdev, "%s packet of %d bytes\n",
-		     (tx_rx ? "TX" : "RX"), skb->len);
+	netdev_dbg(netdev, "%s packet of %d bytes\n",
+		   (tx_rx ? "TX" : "RX"), skb->len);
 
-	netdev_alert(netdev, "Dst MAC addr: %pM\n", eth->h_dest);
-	netdev_alert(netdev, "Src MAC addr: %pM\n", eth->h_source);
-	netdev_alert(netdev, "Protocol: 0x%04hx\n", ntohs(eth->h_proto));
+	netdev_dbg(netdev, "Dst MAC addr: %pM\n", eth->h_dest);
+	netdev_dbg(netdev, "Src MAC addr: %pM\n", eth->h_source);
+	netdev_dbg(netdev, "Protocol: %#06hx\n", ntohs(eth->h_proto));
 
 	for (i = 0, j = 0; i < skb->len;) {
 		j += snprintf(buffer + j, sizeof(buffer) - j, "%02hhx",
 			      buf[i++]);
 
 		if ((i % 32) == 0) {
-			netdev_alert(netdev, "  0x%04x: %s\n", i - 32, buffer);
+			netdev_dbg(netdev, "  %#06x: %s\n", i - 32, buffer);
 			j = 0;
 		} else if ((i % 16) == 0) {
 			buffer[j++] = ' ';
@@ -2221,7 +2240,7 @@ void xgbe_print_pkt(struct net_device *netdev, struct sk_buff *skb, bool tx_rx)
 		}
 	}
 	if (i % 32)
-		netdev_alert(netdev, "  0x%04x: %s\n", i - (i % 32), buffer);
+		netdev_dbg(netdev, "  %#06x: %s\n", i - (i % 32), buffer);
 
-	netdev_alert(netdev, "\n************** SKB dump ****************\n");
+	netdev_dbg(netdev, "\n************** SKB dump ****************\n");
 }
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index 7149053..ae869d4 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -136,6 +136,13 @@ MODULE_LICENSE("Dual BSD/GPL");
 MODULE_VERSION(XGBE_DRV_VERSION);
 MODULE_DESCRIPTION(XGBE_DRV_DESC);
 
+static int debug = -1;
+module_param(debug, int, S_IWUSR | S_IRUGO);
+MODULE_PARM_DESC(debug, " Network interface message level setting");
+
+static const u32 default_msg_level = (NETIF_MSG_LINK | NETIF_MSG_IFDOWN |
+				      NETIF_MSG_IFUP);
+
 static void xgbe_default_config(struct xgbe_prv_data *pdata)
 {
 	DBGPR("-->xgbe_default_config\n");
@@ -289,6 +296,8 @@ static int xgbe_probe(struct platform_device *pdev)
 	mutex_init(&pdata->rss_mutex);
 	spin_lock_init(&pdata->tstamp_lock);
 
+	pdata->msg_enable = netif_msg_init(debug, default_msg_level);
+
 	/* Check if we should use ACPI or DT */
 	pdata->use_acpi = (!pdata->adev || acpi_disabled) ? 0 : 1;
 
@@ -318,7 +327,8 @@ static int xgbe_probe(struct platform_device *pdev)
 		ret = PTR_ERR(pdata->xgmac_regs);
 		goto err_io;
 	}
-	DBGPR("  xgmac_regs = %p\n", pdata->xgmac_regs);
+	if (netif_msg_probe(pdata))
+		dev_dbg(dev, "xgmac_regs = %p\n", pdata->xgmac_regs);
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
 	pdata->xpcs_regs = devm_ioremap_resource(dev, res);
@@ -327,7 +337,8 @@ static int xgbe_probe(struct platform_device *pdev)
 		ret = PTR_ERR(pdata->xpcs_regs);
 		goto err_io;
 	}
-	DBGPR("  xpcs_regs  = %p\n", pdata->xpcs_regs);
+	if (netif_msg_probe(pdata))
+		dev_dbg(dev, "xpcs_regs  = %p\n", pdata->xpcs_regs);
 
 	/* Retrieve the MAC address */
 	ret = device_property_read_u8_array(dev, XGBE_MAC_ADDR_PROPERTY,
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index 59e267f..532a67f 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -159,48 +159,48 @@ static int xgbe_mdio_write(struct mii_bus *mii, int prtad, int mmd_reg,
 void xgbe_dump_phy_registers(struct xgbe_prv_data *pdata)
 {
 	struct device *dev = pdata->dev;
-	struct phy_device *phydev = pdata->mii->phy_map[XGBE_PRTAD];
+	struct phy_device *phydev = pdata->phydev;
 	int i;
 
-	dev_alert(dev, "\n************* PHY Reg dump **********************\n");
-
-	dev_alert(dev, "PCS Control Reg (%#04x) = %#04x\n", MDIO_CTRL1,
-		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL1));
-	dev_alert(dev, "PCS Status Reg (%#04x) = %#04x\n", MDIO_STAT1,
-		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1));
-	dev_alert(dev, "Phy Id (PHYS ID 1 %#04x)= %#04x\n", MDIO_DEVID1,
-		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVID1));
-	dev_alert(dev, "Phy Id (PHYS ID 2 %#04x)= %#04x\n", MDIO_DEVID2,
-		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVID2));
-	dev_alert(dev, "Devices in Package (%#04x)= %#04x\n", MDIO_DEVS1,
-		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVS1));
-	dev_alert(dev, "Devices in Package (%#04x)= %#04x\n", MDIO_DEVS2,
-		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVS2));
-
-	dev_alert(dev, "Auto-Neg Control Reg (%#04x) = %#04x\n", MDIO_CTRL1,
-		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_CTRL1));
-	dev_alert(dev, "Auto-Neg Status Reg (%#04x) = %#04x\n", MDIO_STAT1,
-		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_STAT1));
-	dev_alert(dev, "Auto-Neg Ad Reg 1 (%#04x) = %#04x\n",
-		  MDIO_AN_ADVERTISE,
-		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE));
-	dev_alert(dev, "Auto-Neg Ad Reg 2 (%#04x) = %#04x\n",
-		  MDIO_AN_ADVERTISE + 1,
-		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1));
-	dev_alert(dev, "Auto-Neg Ad Reg 3 (%#04x) = %#04x\n",
-		  MDIO_AN_ADVERTISE + 2,
-		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2));
-	dev_alert(dev, "Auto-Neg Completion Reg (%#04x) = %#04x\n",
-		  MDIO_AN_COMP_STAT,
-		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_COMP_STAT));
-
-	dev_alert(dev, "MMD Device Mask = %#x\n",
-		  phydev->c45_ids.devices_in_package);
+	dev_dbg(dev, "\n************* PHY Reg dump **********************\n");
+
+	dev_dbg(dev, "PCS Control Reg (%#04x) = %#04x\n", MDIO_CTRL1,
+		XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL1));
+	dev_dbg(dev, "PCS Status Reg (%#04x) = %#04x\n", MDIO_STAT1,
+		XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1));
+	dev_dbg(dev, "Phy Id (PHYS ID 1 %#04x)= %#04x\n", MDIO_DEVID1,
+		XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVID1));
+	dev_dbg(dev, "Phy Id (PHYS ID 2 %#04x)= %#04x\n", MDIO_DEVID2,
+		XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVID2));
+	dev_dbg(dev, "Devices in Package (%#04x)= %#04x\n", MDIO_DEVS1,
+		XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVS1));
+	dev_dbg(dev, "Devices in Package (%#04x)= %#04x\n", MDIO_DEVS2,
+		XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVS2));
+
+	dev_dbg(dev, "Auto-Neg Control Reg (%#04x) = %#04x\n", MDIO_CTRL1,
+		XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_CTRL1));
+	dev_dbg(dev, "Auto-Neg Status Reg (%#04x) = %#04x\n", MDIO_STAT1,
+		XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_STAT1));
+	dev_dbg(dev, "Auto-Neg Ad Reg 1 (%#04x) = %#04x\n",
+		MDIO_AN_ADVERTISE,
+		XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE));
+	dev_dbg(dev, "Auto-Neg Ad Reg 2 (%#04x) = %#04x\n",
+		MDIO_AN_ADVERTISE + 1,
+		XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1));
+	dev_dbg(dev, "Auto-Neg Ad Reg 3 (%#04x) = %#04x\n",
+		MDIO_AN_ADVERTISE + 2,
+		XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2));
+	dev_dbg(dev, "Auto-Neg Completion Reg (%#04x) = %#04x\n",
+		MDIO_AN_COMP_STAT,
+		XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_COMP_STAT));
+
+	dev_dbg(dev, "MMD Device Mask = %#x\n",
+		phydev->c45_ids.devices_in_package);
 	for (i = 0; i < ARRAY_SIZE(phydev->c45_ids.device_ids); i++)
-		dev_alert(dev, "  MMD %d: ID = %#08x\n", i,
-			  phydev->c45_ids.device_ids[i]);
+		dev_dbg(dev, "  MMD %d: ID = %#08x\n", i,
+			phydev->c45_ids.device_ids[i]);
 
-	dev_alert(dev, "\n*************************************************\n");
+	dev_dbg(dev, "\n*************************************************\n");
 }
 
 int xgbe_mdio_register(struct xgbe_prv_data *pdata)
@@ -230,7 +230,9 @@ int xgbe_mdio_register(struct xgbe_prv_data *pdata)
 		dev_err(pdata->dev, "mdiobus_register failed\n");
 		goto err_mdiobus_alloc;
 	}
-	DBGPR("  mdiobus_register succeeded for %s\n", pdata->mii_bus_id);
+	if (netif_msg_drv(pdata))
+		dev_dbg(pdata->dev, "mdiobus_register succeeded for %s\n",
+			pdata->mii_bus_id);
 
 	/* Probe the PCS using Clause 45 */
 	phydev = get_phy_device(mii, XGBE_PRTAD, true);
@@ -275,7 +277,8 @@ int xgbe_mdio_register(struct xgbe_prv_data *pdata)
 
 	pdata->phydev = phydev;
 
-	DBGPHY_REGS(pdata);
+	if (netif_msg_drv(pdata))
+		xgbe_dump_phy_registers(pdata);
 
 	DBGPR("<--xgbe_mdio_register\n");
 
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index ef3d1e5..8313b07 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -793,6 +793,9 @@ struct xgbe_prv_data {
 	/* Keeps track of power mode */
 	unsigned int power_down;
 
+	/* Network interface message level setting */
+	u32 msg_enable;
+
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *xgbe_debugfs;
 
@@ -818,9 +821,9 @@ void xgbe_mdio_unregister(struct xgbe_prv_data *);
 void xgbe_dump_phy_registers(struct xgbe_prv_data *);
 void xgbe_ptp_register(struct xgbe_prv_data *);
 void xgbe_ptp_unregister(struct xgbe_prv_data *);
-void xgbe_dump_tx_desc(struct xgbe_ring *, unsigned int, unsigned int,
-		       unsigned int);
-void xgbe_dump_rx_desc(struct xgbe_ring *, struct xgbe_ring_desc *,
+void xgbe_dump_tx_desc(struct xgbe_prv_data *, struct xgbe_ring *,
+		       unsigned int, unsigned int, unsigned int);
+void xgbe_dump_rx_desc(struct xgbe_prv_data *, struct xgbe_ring *,
 		       unsigned int);
 void xgbe_print_pkt(struct net_device *, struct sk_buff *, bool);
 void xgbe_get_all_hw_features(struct xgbe_prv_data *);
@@ -837,18 +840,6 @@ static inline void xgbe_debugfs_init(struct xgbe_prv_data *pdata) {}
 static inline void xgbe_debugfs_exit(struct xgbe_prv_data *pdata) {}
 #endif /* CONFIG_DEBUG_FS */
 
-/* NOTE: Uncomment for TX and RX DESCRIPTOR DUMP in KERNEL LOG */
-#if 0
-#define XGMAC_ENABLE_TX_DESC_DUMP
-#define XGMAC_ENABLE_RX_DESC_DUMP
-#endif
-
-/* NOTE: Uncomment for TX and RX PACKET DUMP in KERNEL LOG */
-#if 0
-#define XGMAC_ENABLE_TX_PKT_DUMP
-#define XGMAC_ENABLE_RX_PKT_DUMP
-#endif
-
 /* NOTE: Uncomment for function trace log messages in KERNEL LOG */
 #if 0
 #define YDEBUG
@@ -858,10 +849,8 @@ static inline void xgbe_debugfs_exit(struct xgbe_prv_data *pdata) {}
 /* For debug prints */
 #ifdef YDEBUG
 #define DBGPR(x...) pr_alert(x)
-#define DBGPHY_REGS(x...) xgbe_dump_phy_registers(x)
 #else
 #define DBGPR(x...) do { } while (0)
-#define DBGPHY_REGS(x...) do { } while (0)
 #endif
 
 #ifdef YDEBUG_MDIO


* [PATCH net-next v1 3/7] amd-xgbe: Rework the Rx path SKB allocation
  2015-05-12 19:22 [PATCH net-next v1 0/7] amd-xgbe: AMD XGBE driver updates 2015-05-12 Tom Lendacky
  2015-05-12 19:22 ` [PATCH net-next v1 1/7] amd-xgbe: Add additional stats to be reported via ethtool Tom Lendacky
  2015-05-12 19:22 ` [PATCH net-next v1 2/7] amd-xgbe: Add netif_msg_* support for driver messages Tom Lendacky
@ 2015-05-12 19:22 ` Tom Lendacky
  2015-05-12 19:22 ` [PATCH net-next v1 4/7] amd-xgbe: Move the PHY support into amd-xgbe Tom Lendacky
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Tom Lendacky @ 2015-05-12 19:22 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Rework the SKB allocation so that all of the buffers of the first
descriptor are handled in the SKB allocation routine. After copying the
data in the header buffer (which can be just the header if split header
processing succeeded, or the header plus data if split header processing
did not succeed) into the SKB, check for remaining data in the receive
buffer. If there is data remaining in the receive buffer, add that as a
frag to the SKB. Once an SKB has been allocated, all other descriptors
are added as frags to the SKB.
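
Condensed from the hunks below for readability, the resulting
per-descriptor flow in the Rx poll loop looks roughly like this
(variable names abbreviated; xgbe_create_skb() is the reworked
allocation routine):

        /* rdesc_len is the amount of data this descriptor contributed
         * (rdata->rx.len is cumulative across the packet's descriptors).
         */
        if (rdesc_len && !skb) {
                /* First data buffer: xgbe_create_skb() copies the header
                 * buffer into the new SKB and attaches any remaining
                 * receive-buffer data as the first page fragment.
                 */
                skb = xgbe_create_skb(pdata, napi, rdata, rdesc_len);
                if (!skb)
                        error = 1;
        } else if (rdesc_len) {
                /* Later buffers: no copying, just add the page as a frag. */
                dma_sync_single_for_cpu(pdata->dev, rdata->rx.buf.dma,
                                        rdata->rx.buf.dma_len,
                                        DMA_FROM_DEVICE);
                skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
                                rdata->rx.buf.pa.pages,
                                rdata->rx.buf.pa.pages_offset,
                                rdesc_len, rdata->rx.buf.dma_len);
                rdata->rx.buf.pa.pages = NULL;
        }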

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-desc.c |    2 -
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c  |   66 ++++++++++++++++-------------
 drivers/net/ethernet/amd/xgbe/xgbe.h      |    2 -
 3 files changed, 37 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
index 6f2a914..0ad54c1 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
@@ -484,8 +484,6 @@ static void xgbe_unmap_rdata(struct xgbe_prv_data *pdata,
 
 	if (rdata->state_saved) {
 		rdata->state_saved = 0;
-		rdata->state.incomplete = 0;
-		rdata->state.context_next = 0;
 		rdata->state.skb = NULL;
 		rdata->state.len = 0;
 		rdata->state.error = 0;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index a93bbc6..102e354 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -1829,9 +1829,10 @@ static void xgbe_rx_refresh(struct xgbe_channel *channel)
 			  lower_32_bits(rdata->rdesc_dma));
 }
 
-static struct sk_buff *xgbe_create_skb(struct napi_struct *napi,
+static struct sk_buff *xgbe_create_skb(struct xgbe_prv_data *pdata,
+				       struct napi_struct *napi,
 				       struct xgbe_ring_data *rdata,
-				       unsigned int *len)
+				       unsigned int len)
 {
 	struct sk_buff *skb;
 	u8 *packet;
@@ -1841,14 +1842,31 @@ static struct sk_buff *xgbe_create_skb(struct napi_struct *napi,
 	if (!skb)
 		return NULL;
 
+	/* Start with the header buffer which may contain just the header
+	 * or the header plus data
+	 */
+	dma_sync_single_for_cpu(pdata->dev, rdata->rx.hdr.dma,
+				rdata->rx.hdr.dma_len, DMA_FROM_DEVICE);
+
 	packet = page_address(rdata->rx.hdr.pa.pages) +
 		 rdata->rx.hdr.pa.pages_offset;
-	copy_len = (rdata->rx.hdr_len) ? rdata->rx.hdr_len : *len;
+	copy_len = (rdata->rx.hdr_len) ? rdata->rx.hdr_len : len;
 	copy_len = min(rdata->rx.hdr.dma_len, copy_len);
 	skb_copy_to_linear_data(skb, packet, copy_len);
 	skb_put(skb, copy_len);
 
-	*len -= copy_len;
+	len -= copy_len;
+	if (len) {
+		/* Add the remaining data as a frag */
+		dma_sync_single_for_cpu(pdata->dev, rdata->rx.buf.dma,
+					rdata->rx.buf.dma_len, DMA_FROM_DEVICE);
+
+		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+				rdata->rx.buf.pa.pages,
+				rdata->rx.buf.pa.pages_offset,
+				len, rdata->rx.buf.dma_len);
+		rdata->rx.buf.pa.pages = NULL;
+	}
 
 	return skb;
 }
@@ -1930,7 +1948,7 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
 	struct sk_buff *skb;
 	struct skb_shared_hwtstamps *hwtstamps;
 	unsigned int incomplete, error, context_next, context;
-	unsigned int len, put_len, max_len;
+	unsigned int len, rdesc_len, max_len;
 	unsigned int received = 0;
 	int packet_count = 0;
 
@@ -1940,6 +1958,9 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
 	if (!ring)
 		return 0;
 
+	incomplete = 0;
+	context_next = 0;
+
 	napi = (pdata->per_channel_irq) ? &channel->napi : &pdata->napi;
 
 	rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
@@ -1949,15 +1970,11 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
 
 		/* First time in loop see if we need to restore state */
 		if (!received && rdata->state_saved) {
-			incomplete = rdata->state.incomplete;
-			context_next = rdata->state.context_next;
 			skb = rdata->state.skb;
 			error = rdata->state.error;
 			len = rdata->state.len;
 		} else {
 			memset(packet, 0, sizeof(*packet));
-			incomplete = 0;
-			context_next = 0;
 			skb = NULL;
 			error = 0;
 			len = 0;
@@ -1998,23 +2015,16 @@ read_again:
 		}
 
 		if (!context) {
-			put_len = rdata->rx.len - len;
-			len += put_len;
-
-			if (!skb) {
-				dma_sync_single_for_cpu(pdata->dev,
-							rdata->rx.hdr.dma,
-							rdata->rx.hdr.dma_len,
-							DMA_FROM_DEVICE);
-
-				skb = xgbe_create_skb(napi, rdata, &put_len);
-				if (!skb) {
+			/* Length is cumulative, get this descriptor's length */
+			rdesc_len = rdata->rx.len - len;
+			len += rdesc_len;
+
+			if (rdesc_len && !skb) {
+				skb = xgbe_create_skb(pdata, napi, rdata,
+						      rdesc_len);
+				if (!skb)
 					error = 1;
-					goto skip_data;
-				}
-			}
-
-			if (put_len) {
+			} else if (rdesc_len) {
 				dma_sync_single_for_cpu(pdata->dev,
 							rdata->rx.buf.dma,
 							rdata->rx.buf.dma_len,
@@ -2023,12 +2033,12 @@ read_again:
 				skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 						rdata->rx.buf.pa.pages,
 						rdata->rx.buf.pa.pages_offset,
-						put_len, rdata->rx.buf.dma_len);
+						rdesc_len,
+						rdata->rx.buf.dma_len);
 				rdata->rx.buf.pa.pages = NULL;
 			}
 		}
 
-skip_data:
 		if (incomplete || context_next)
 			goto read_again;
 
@@ -2093,8 +2103,6 @@ next_packet:
 	if (received && (incomplete || context_next)) {
 		rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
 		rdata->state_saved = 1;
-		rdata->state.incomplete = incomplete;
-		rdata->state.context_next = context_next;
 		rdata->state.skb = skb;
 		rdata->state.len = len;
 		rdata->state.error = error;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 8313b07..e182b25 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -334,8 +334,6 @@ struct xgbe_ring_data {
 	 */
 	unsigned int state_saved;
 	struct {
-		unsigned int incomplete;
-		unsigned int context_next;
 		struct sk_buff *skb;
 		unsigned int len;
 		unsigned int error;

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH net-next v1 4/7] amd-xgbe: Move the PHY support into amd-xgbe
  2015-05-12 19:22 [PATCH net-next v1 0/7] amd-xgbe: AMD XGBE driver updates 2015-05-12 Tom Lendacky
                   ` (2 preceding siblings ...)
  2015-05-12 19:22 ` [PATCH net-next v1 3/7] amd-xgbe: Rework the Rx path SKB allocation Tom Lendacky
@ 2015-05-12 19:22 ` Tom Lendacky
  2015-05-12 22:19   ` Florian Fainelli
  2015-05-12 19:23 ` [PATCH net-next v1 5/7] amd-xgbe: Support defining PHY resources in ETH device node Tom Lendacky
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 12+ messages in thread
From: Tom Lendacky @ 2015-05-12 19:22 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

The AMD XGBE device is intended to work with a specific integrated PHY
and that PHY is not meant to be a standalone PHY for use by other
devices. As such, this patch removes the phylib driver and implements
the PHY support in the amd-xgbe driver (the majority of the logic from
the phylib driver is moved into the amd-xgbe driver).

Update the driver version to 1.0.1.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 .../devicetree/bindings/net/amd-xgbe-phy.txt       |   48 -
 Documentation/devicetree/bindings/net/amd-xgbe.txt |   51 +
 MAINTAINERS                                        |    1 
 drivers/net/ethernet/amd/Kconfig                   |    4 
 drivers/net/ethernet/amd/xgbe/xgbe-common.h        |  155 ++
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c           |   17 
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c           |  177 +-
 drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c       |   50 -
 drivers/net/ethernet/amd/xgbe/xgbe-main.c          |  360 +++-
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c          | 1165 +++++++++++--
 drivers/net/ethernet/amd/xgbe/xgbe.h               |  205 ++
 drivers/net/phy/Kconfig                            |    6 
 drivers/net/phy/Makefile                           |    1 
 drivers/net/phy/amd-xgbe-phy.c                     | 1862 --------------------
 14 files changed, 1828 insertions(+), 2274 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/net/amd-xgbe-phy.txt
 delete mode 100644 drivers/net/phy/amd-xgbe-phy.c

diff --git a/Documentation/devicetree/bindings/net/amd-xgbe-phy.txt b/Documentation/devicetree/bindings/net/amd-xgbe-phy.txt
deleted file mode 100644
index 8db3238..0000000
--- a/Documentation/devicetree/bindings/net/amd-xgbe-phy.txt
+++ /dev/null
@@ -1,48 +0,0 @@
-* AMD 10GbE PHY driver (amd-xgbe-phy)
-
-Required properties:
-- compatible: Should be "amd,xgbe-phy-seattle-v1a" and
-  "ethernet-phy-ieee802.3-c45"
-- reg: Address and length of the register sets for the device
-   - SerDes Rx/Tx registers
-   - SerDes integration registers (1/2)
-   - SerDes integration registers (2/2)
-- interrupt-parent: Should be the phandle for the interrupt controller
-  that services interrupts for this device
-- interrupts: Should contain the amd-xgbe-phy interrupt.
-
-Optional properties:
-- amd,speed-set: Speed capabilities of the device
-    0 - 1GbE and 10GbE (default)
-    1 - 2.5GbE and 10GbE
-
-The following optional properties are represented by an array with each
-value corresponding to a particular speed. The first array value represents
-the setting for the 1GbE speed, the second value for the 2.5GbE speed and
-the third value for the 10GbE speed.  All three values are required if the
-property is used.
-- amd,serdes-blwc: Baseline wandering correction enablement
-    0 - Off
-    1 - On
-- amd,serdes-cdr-rate: CDR rate speed selection
-- amd,serdes-pq-skew: PQ (data sampling) skew
-- amd,serdes-tx-amp: TX amplitude boost
-- amd,serdes-dfe-tap-config: DFE taps available to run
-- amd,serdes-dfe-tap-enable: DFE taps to enable
-
-Example:
-	xgbe_phy@e1240800 {
-		compatible = "amd,xgbe-phy-seattle-v1a", "ethernet-phy-ieee802.3-c45";
-		reg = <0 0xe1240800 0 0x00400>,
-		      <0 0xe1250000 0 0x00060>,
-		      <0 0xe1250080 0 0x00004>;
-		interrupt-parent = <&gic>;
-		interrupts = <0 323 4>;
-		amd,speed-set = <0>;
-		amd,serdes-blwc = <1>, <1>, <0>;
-		amd,serdes-cdr-rate = <2>, <2>, <7>;
-		amd,serdes-pq-skew = <10>, <10>, <30>;
-		amd,serdes-tx-amp = <15>, <15>, <10>;
-		amd,serdes-dfe-tap-config = <3>, <3>, <1>;
-		amd,serdes-dfe-tap-enable = <0>, <0>, <127>;
-	};
diff --git a/Documentation/devicetree/bindings/net/amd-xgbe.txt b/Documentation/devicetree/bindings/net/amd-xgbe.txt
index 26efd52..5dbc55a 100644
--- a/Documentation/devicetree/bindings/net/amd-xgbe.txt
+++ b/Documentation/devicetree/bindings/net/amd-xgbe.txt
@@ -1,6 +1,6 @@
 * AMD 10GbE driver (amd-xgbe)
 
-Required properties:
+Required properties (ethernet device):
 - compatible: Should be "amd,xgbe-seattle-v1a"
 - reg: Address and length of the register sets for the device
    - MAC registers
@@ -22,7 +22,7 @@ Required properties:
 - phy-handle: See ethernet.txt file in the same directory
 - phy-mode: See ethernet.txt file in the same directory
 
-Optional properties:
+Optional properties (ethernet device):
 - mac-address: mac address to be assigned to the device. Can be overridden
   by UEFI.
 - dma-coherent: Present if dma operations are coherent
@@ -30,6 +30,35 @@ Optional properties:
   a unique interrupt for each DMA channel - this requires an additional
   interrupt be configured for each DMA channel
 
+Required properties (phy device):
+- compatible: Should be "amd,xgbe-phy-seattle-v1a"
+- reg: Address and length of the register sets for the device
+   - SerDes Rx/Tx registers
+   - SerDes integration registers (1/2)
+   - SerDes integration registers (2/2)
+- interrupt-parent: Should be the phandle for the interrupt controller
+  that services interrupts for this device
+- interrupts: Should contain the amd-xgbe-phy interrupt.
+
+Optional properties (phy device):
+- amd,speed-set: Speed capabilities of the device
+    0 - 1GbE and 10GbE (default)
+    1 - 2.5GbE and 10GbE
+
+The following optional properties are represented by an array with each
+value corresponding to a particular speed. The first array value represents
+the setting for the 1GbE speed, the second value for the 2.5GbE speed and
+the third value for the 10GbE speed.  All three values are required if the
+property is used.
+- amd,serdes-blwc: Baseline wandering correction enablement
+    0 - Off
+    1 - On
+- amd,serdes-cdr-rate: CDR rate speed selection
+- amd,serdes-pq-skew: PQ (data sampling) skew
+- amd,serdes-tx-amp: TX amplitude boost
+- amd,serdes-dfe-tap-config: DFE taps available to run
+- amd,serdes-dfe-tap-enable: DFE taps to enable
+
 Example:
 	xgbe@e0700000 {
 		compatible = "amd,xgbe-seattle-v1a";
@@ -41,7 +70,23 @@ Example:
 		amd,per-channel-interrupt;
 		clocks = <&xgbe_dma_clk>, <&xgbe_ptp_clk>;
 		clock-names = "dma_clk", "ptp_clk";
-		phy-handle = <&phy>;
+		phy-handle = <&xgbe_phy>;
 		phy-mode = "xgmii";
 		mac-address = [ 02 a1 a2 a3 a4 a5 ];
 	};
+
+	xgbe_phy@e1240800 {
+		compatible = "amd,xgbe-phy-seattle-v1a";
+		reg = <0 0xe1240800 0 0x00400>,
+		      <0 0xe1250000 0 0x00060>,
+		      <0 0xe1250080 0 0x00004>;
+		interrupt-parent = <&gic>;
+		interrupts = <0 323 4>;
+		amd,speed-set = <0>;
+		amd,serdes-blwc = <1>, <1>, <0>;
+		amd,serdes-cdr-rate = <2>, <2>, <7>;
+		amd,serdes-pq-skew = <10>, <10>, <30>;
+		amd,serdes-tx-amp = <15>, <15>, <10>;
+		amd,serdes-dfe-tap-config = <3>, <3>, <1>;
+		amd,serdes-dfe-tap-enable = <0>, <0>, <127>;
+	};
diff --git a/MAINTAINERS b/MAINTAINERS
index 1622775..bbe0ad4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -652,7 +652,6 @@ M:	Tom Lendacky <thomas.lendacky@amd.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/amd/xgbe/
-F:	drivers/net/phy/amd-xgbe-phy.c
 
 AMS (Apple Motion Sensor) DRIVER
 M:	Michael Hanselmann <linux-kernel@hansmi.ch>
diff --git a/drivers/net/ethernet/amd/Kconfig b/drivers/net/ethernet/amd/Kconfig
index 089c269..5c88201 100644
--- a/drivers/net/ethernet/amd/Kconfig
+++ b/drivers/net/ethernet/amd/Kconfig
@@ -179,9 +179,7 @@ config SUNLANCE
 
 config AMD_XGBE
 	tristate "AMD 10GbE Ethernet driver"
-	depends on (OF_NET || ACPI) && HAS_IOMEM && HAS_DMA
-	select PHYLIB
-	select AMD_XGBE_PHY
+	depends on ((OF_NET && OF_ADDRESS) || ACPI) && HAS_IOMEM
 	select BITREVERSE
 	select CRC32
 	select PTP_1588_CLOCK
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
index 34c28aa..b6fa891 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
@@ -857,6 +857,48 @@
  */
 #define PCS_MMD_SELECT			0xff
 
+/* SerDes integration register offsets */
+#define SIR0_KR_RT_1			0x002c
+#define SIR0_STATUS			0x0040
+#define SIR1_SPEED			0x0000
+
+/* SerDes integration register entry bit positions and sizes */
+#define SIR0_KR_RT_1_RESET_INDEX	11
+#define SIR0_KR_RT_1_RESET_WIDTH	1
+#define SIR0_STATUS_RX_READY_INDEX	0
+#define SIR0_STATUS_RX_READY_WIDTH	1
+#define SIR0_STATUS_TX_READY_INDEX	8
+#define SIR0_STATUS_TX_READY_WIDTH	1
+#define SIR1_SPEED_CDR_RATE_INDEX	12
+#define SIR1_SPEED_CDR_RATE_WIDTH	4
+#define SIR1_SPEED_DATARATE_INDEX	4
+#define SIR1_SPEED_DATARATE_WIDTH	2
+#define SIR1_SPEED_PLLSEL_INDEX		3
+#define SIR1_SPEED_PLLSEL_WIDTH		1
+#define SIR1_SPEED_RATECHANGE_INDEX	6
+#define SIR1_SPEED_RATECHANGE_WIDTH	1
+#define SIR1_SPEED_TXAMP_INDEX		8
+#define SIR1_SPEED_TXAMP_WIDTH		4
+#define SIR1_SPEED_WORDMODE_INDEX	0
+#define SIR1_SPEED_WORDMODE_WIDTH	3
+
+/* SerDes RxTx register offsets */
+#define RXTX_REG6			0x0018
+#define RXTX_REG20			0x0050
+#define RXTX_REG22			0x0058
+#define RXTX_REG114			0x01c8
+#define RXTX_REG129			0x0204
+
+/* SerDes RxTx register entry bit positions and sizes */
+#define RXTX_REG6_RESETB_RXD_INDEX	8
+#define RXTX_REG6_RESETB_RXD_WIDTH	1
+#define RXTX_REG20_BLWC_ENA_INDEX	2
+#define RXTX_REG20_BLWC_ENA_WIDTH	1
+#define RXTX_REG114_PQ_REG_INDEX	9
+#define RXTX_REG114_PQ_REG_WIDTH	7
+#define RXTX_REG129_RXDFE_CONFIG_INDEX	14
+#define RXTX_REG129_RXDFE_CONFIG_WIDTH	2
+
 /* Descriptor/Packet entry bit positions and sizes */
 #define RX_PACKET_ERRORS_CRC_INDEX		2
 #define RX_PACKET_ERRORS_CRC_WIDTH		1
@@ -973,10 +1015,47 @@
 #define TX_NORMAL_DESC2_VLAN_INSERT		0x2
 
 /* MDIO undefined or vendor specific registers */
+#ifndef MDIO_PMA_10GBR_PMD_CTRL
+#define MDIO_PMA_10GBR_PMD_CTRL		0x0096
+#endif
+
+#ifndef MDIO_PMA_10GBR_FECCTRL
+#define MDIO_PMA_10GBR_FECCTRL		0x00ab
+#endif
+
+#ifndef MDIO_AN_XNP
+#define MDIO_AN_XNP			0x0016
+#endif
+
+#ifndef MDIO_AN_LPX
+#define MDIO_AN_LPX			0x0019
+#endif
+
 #ifndef MDIO_AN_COMP_STAT
 #define MDIO_AN_COMP_STAT		0x0030
 #endif
 
+#ifndef MDIO_AN_INTMASK
+#define MDIO_AN_INTMASK			0x8001
+#endif
+
+#ifndef MDIO_AN_INT
+#define MDIO_AN_INT			0x8002
+#endif
+
+#ifndef MDIO_CTRL1_SPEED1G
+#define MDIO_CTRL1_SPEED1G		(MDIO_CTRL1_SPEED10G & ~BMCR_SPEED100)
+#endif
+
+/* MDIO mask values */
+#define XGBE_XNP_MCF_NULL_MESSAGE	0x001
+#define XGBE_XNP_ACK_PROCESSED		BIT(12)
+#define XGBE_XNP_MP_FORMATTED		BIT(13)
+#define XGBE_XNP_NP_EXCHANGE		BIT(15)
+
+#define XGBE_KR_TRAINING_START		BIT(0)
+#define XGBE_KR_TRAINING_ENABLE		BIT(1)
+
 /* Bit setting and getting macros
  *  The get macro will extract the current bit field value from within
  *  the variable
@@ -1119,6 +1198,82 @@ do {									\
 	ioread32((_pdata)->xpcs_regs + (_off))
 
 /* Macros for building, reading or writing register values or bits
+ * within the register values of SerDes integration registers.
+ */
+#define XSIR_GET_BITS(_var, _prefix, _field)                            \
+	GET_BITS((_var),                                                \
+		 _prefix##_##_field##_INDEX,                            \
+		 _prefix##_##_field##_WIDTH)
+
+#define XSIR_SET_BITS(_var, _prefix, _field, _val)                      \
+	SET_BITS((_var),                                                \
+		 _prefix##_##_field##_INDEX,                            \
+		 _prefix##_##_field##_WIDTH, (_val))
+
+#define XSIR0_IOREAD(_pdata, _reg)					\
+	ioread16((_pdata)->sir0_regs + _reg)
+
+#define XSIR0_IOREAD_BITS(_pdata, _reg, _field)				\
+	GET_BITS(XSIR0_IOREAD((_pdata), _reg),				\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH)
+
+#define XSIR0_IOWRITE(_pdata, _reg, _val)				\
+	iowrite16((_val), (_pdata)->sir0_regs + _reg)
+
+#define XSIR0_IOWRITE_BITS(_pdata, _reg, _field, _val)			\
+do {									\
+	u16 reg_val = XSIR0_IOREAD((_pdata), _reg);			\
+	SET_BITS(reg_val,						\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH, (_val));			\
+	XSIR0_IOWRITE((_pdata), _reg, reg_val);				\
+} while (0)
+
+#define XSIR1_IOREAD(_pdata, _reg)					\
+	ioread16((_pdata)->sir1_regs + _reg)
+
+#define XSIR1_IOREAD_BITS(_pdata, _reg, _field)				\
+	GET_BITS(XSIR1_IOREAD((_pdata), _reg),				\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH)
+
+#define XSIR1_IOWRITE(_pdata, _reg, _val)				\
+	iowrite16((_val), (_pdata)->sir1_regs + _reg)
+
+#define XSIR1_IOWRITE_BITS(_pdata, _reg, _field, _val)			\
+do {									\
+	u16 reg_val = XSIR1_IOREAD((_pdata), _reg);			\
+	SET_BITS(reg_val,						\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH, (_val));			\
+	XSIR1_IOWRITE((_pdata), _reg, reg_val);				\
+} while (0)
+
+/* Macros for building, reading or writing register values or bits
+ * within the register values of SerDes RxTx registers.
+ */
+#define XRXTX_IOREAD(_pdata, _reg)					\
+	ioread16((_pdata)->rxtx_regs + _reg)
+
+#define XRXTX_IOREAD_BITS(_pdata, _reg, _field)				\
+	GET_BITS(XRXTX_IOREAD((_pdata), _reg),				\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH)
+
+#define XRXTX_IOWRITE(_pdata, _reg, _val)				\
+	iowrite16((_val), (_pdata)->rxtx_regs + _reg)
+
+#define XRXTX_IOWRITE_BITS(_pdata, _reg, _field, _val)			\
+do {									\
+	u16 reg_val = XRXTX_IOREAD((_pdata), _reg);			\
+	SET_BITS(reg_val,						\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH, (_val));			\
+	XRXTX_IOWRITE((_pdata), _reg, reg_val);				\
+} while (0)
+
+/* Macros for building, reading or writing register values or bits
  * using MDIO.  Different from above because of the use of standardized
  * Linux include values.  No shifting is performed with the bit
  * operations, everything works on mask values.
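
As a rough illustration of the SerDes accessor macros added above (not part
of the patch), a call such as

	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, CDR_RATE, 0x7);

resolves the field through SIR1_SPEED_CDR_RATE_INDEX (12) and
SIR1_SPEED_CDR_RATE_WIDTH (4) by token pasting and, assuming SET_BITS
performs the usual clear-then-insert on the field, behaves roughly like:

	u16 reg_val = ioread16(pdata->sir1_regs + SIR1_SPEED);

	reg_val &= ~(((1 << 4) - 1) << 12);		/* clear bits 15:12 */
	reg_val |= (0x7 & ((1 << 4) - 1)) << 12;	/* insert new CDR rate */
	iowrite16(reg_val, pdata->sir1_regs + SIR1_SPEED);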
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 35fad1a..fa26412 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -913,23 +913,6 @@ static void xgbe_write_mmd_regs(struct xgbe_prv_data *pdata, int prtad,
 	else
 		mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff);
 
-	/* If the PCS is changing modes, match the MAC speed to it */
-	if (((mmd_address >> 16) == MDIO_MMD_PCS) &&
-	    ((mmd_address & 0xffff) == MDIO_CTRL2)) {
-		struct phy_device *phydev = pdata->phydev;
-
-		if (mmd_data & MDIO_PCS_CTRL2_TYPE) {
-			/* KX mode */
-			if (phydev->supported & SUPPORTED_1000baseKX_Full)
-				xgbe_set_gmii_speed(pdata);
-			else
-				xgbe_set_gmii_2500_speed(pdata);
-		} else {
-			/* KR mode */
-			xgbe_set_xgmii_speed(pdata);
-		}
-	}
-
 	/* The PCS registers are accessed using mmio. The underlying APB3
 	 * management interface uses indirect addressing to access the MMD
 	 * register sets. This requires accessing of the PCS register in two
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index 102e354..637b6c9 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -442,12 +442,31 @@ static void xgbe_tx_timer(unsigned long data)
 	DBGPR("<--xgbe_tx_timer\n");
 }
 
-static void xgbe_init_tx_timers(struct xgbe_prv_data *pdata)
+static void xgbe_service(struct work_struct *work)
+{
+	struct xgbe_prv_data *pdata = container_of(work,
+						   struct xgbe_prv_data,
+						   service_work);
+
+	pdata->phy_if.phy_status(pdata);
+}
+
+static void xgbe_service_timer(unsigned long data)
+{
+	struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data;
+
+	schedule_work(&pdata->service_work);
+
+	mod_timer(&pdata->service_timer, jiffies + HZ);
+}
+
+static void xgbe_init_timers(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_channel *channel;
 	unsigned int i;
 
-	DBGPR("-->xgbe_init_tx_timers\n");
+	setup_timer(&pdata->service_timer, xgbe_service_timer,
+		    (unsigned long)pdata);
 
 	channel = pdata->channel;
 	for (i = 0; i < pdata->channel_count; i++, channel++) {
@@ -457,16 +476,19 @@ static void xgbe_init_tx_timers(struct xgbe_prv_data *pdata)
 		setup_timer(&channel->tx_timer, xgbe_tx_timer,
 			    (unsigned long)channel);
 	}
+}
 
-	DBGPR("<--xgbe_init_tx_timers\n");
+static void xgbe_start_timers(struct xgbe_prv_data *pdata)
+{
+	mod_timer(&pdata->service_timer, jiffies + HZ);
 }
 
-static void xgbe_stop_tx_timers(struct xgbe_prv_data *pdata)
+static void xgbe_stop_timers(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_channel *channel;
 	unsigned int i;
 
-	DBGPR("-->xgbe_stop_tx_timers\n");
+	del_timer_sync(&pdata->service_timer);
 
 	channel = pdata->channel;
 	for (i = 0; i < pdata->channel_count; i++, channel++) {
@@ -475,8 +497,6 @@ static void xgbe_stop_tx_timers(struct xgbe_prv_data *pdata)
 
 		del_timer_sync(&channel->tx_timer);
 	}
-
-	DBGPR("<--xgbe_stop_tx_timers\n");
 }
 
 void xgbe_get_all_hw_features(struct xgbe_prv_data *pdata)
@@ -763,114 +783,14 @@ static void xgbe_free_rx_data(struct xgbe_prv_data *pdata)
 	DBGPR("<--xgbe_free_rx_data\n");
 }
 
-static void xgbe_adjust_link(struct net_device *netdev)
-{
-	struct xgbe_prv_data *pdata = netdev_priv(netdev);
-	struct xgbe_hw_if *hw_if = &pdata->hw_if;
-	struct phy_device *phydev = pdata->phydev;
-	int new_state = 0;
-
-	if (!phydev)
-		return;
-
-	if (phydev->link) {
-		/* Flow control support */
-		if (pdata->pause_autoneg) {
-			if (phydev->pause || phydev->asym_pause) {
-				pdata->tx_pause = 1;
-				pdata->rx_pause = 1;
-			} else {
-				pdata->tx_pause = 0;
-				pdata->rx_pause = 0;
-			}
-		}
-
-		if (pdata->tx_pause != pdata->phy_tx_pause) {
-			hw_if->config_tx_flow_control(pdata);
-			pdata->phy_tx_pause = pdata->tx_pause;
-		}
-
-		if (pdata->rx_pause != pdata->phy_rx_pause) {
-			hw_if->config_rx_flow_control(pdata);
-			pdata->phy_rx_pause = pdata->rx_pause;
-		}
-
-		/* Speed support */
-		if (phydev->speed != pdata->phy_speed) {
-			new_state = 1;
-
-			switch (phydev->speed) {
-			case SPEED_10000:
-				hw_if->set_xgmii_speed(pdata);
-				break;
-
-			case SPEED_2500:
-				hw_if->set_gmii_2500_speed(pdata);
-				break;
-
-			case SPEED_1000:
-				hw_if->set_gmii_speed(pdata);
-				break;
-			}
-			pdata->phy_speed = phydev->speed;
-		}
-
-		if (phydev->link != pdata->phy_link) {
-			new_state = 1;
-			pdata->phy_link = 1;
-		}
-	} else if (pdata->phy_link) {
-		new_state = 1;
-		pdata->phy_link = 0;
-		pdata->phy_speed = SPEED_UNKNOWN;
-	}
-
-	if (new_state)
-		phy_print_status(phydev);
-}
-
 static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 {
-	struct net_device *netdev = pdata->netdev;
-	struct phy_device *phydev = pdata->phydev;
-	int ret;
-
 	pdata->phy_link = -1;
 	pdata->phy_speed = SPEED_UNKNOWN;
 	pdata->phy_tx_pause = pdata->tx_pause;
 	pdata->phy_rx_pause = pdata->rx_pause;
 
-	ret = phy_connect_direct(netdev, phydev, &xgbe_adjust_link,
-				 pdata->phy_mode);
-	if (ret) {
-		netdev_err(netdev, "phy_connect_direct failed\n");
-		return ret;
-	}
-
-	if (!phydev->drv || (phydev->drv->phy_id == 0)) {
-		netdev_err(netdev, "phy_id not valid\n");
-		ret = -ENODEV;
-		goto err_phy_connect;
-	}
-	if (netif_msg_ifup(pdata))
-		netdev_dbg(pdata->netdev,
-			   "phy_connect_direct succeeded for PHY %s\n",
-			   dev_name(&phydev->dev));
-
-	return 0;
-
-err_phy_connect:
-	phy_disconnect(phydev);
-
-	return ret;
-}
-
-static void xgbe_phy_exit(struct xgbe_prv_data *pdata)
-{
-	if (!pdata->phydev)
-		return;
-
-	phy_disconnect(pdata->phydev);
+	return pdata->phy_if.phy_reset(pdata);
 }
 
 int xgbe_powerdown(struct net_device *netdev, unsigned int caller)
@@ -895,13 +815,14 @@ int xgbe_powerdown(struct net_device *netdev, unsigned int caller)
 
 	netif_tx_stop_all_queues(netdev);
 
+	xgbe_stop_timers(pdata);
+	flush_workqueue(pdata->dev_workqueue);
+
 	hw_if->powerdown_tx(pdata);
 	hw_if->powerdown_rx(pdata);
 
 	xgbe_napi_disable(pdata, 0);
 
-	phy_stop(pdata->phydev);
-
 	pdata->power_down = 1;
 
 	spin_unlock_irqrestore(&pdata->lock, flags);
@@ -930,8 +851,6 @@ int xgbe_powerup(struct net_device *netdev, unsigned int caller)
 
 	pdata->power_down = 0;
 
-	phy_start(pdata->phydev);
-
 	xgbe_napi_enable(pdata, 0);
 
 	hw_if->powerup_tx(pdata);
@@ -942,6 +861,8 @@ int xgbe_powerup(struct net_device *netdev, unsigned int caller)
 
 	netif_tx_start_all_queues(netdev);
 
+	xgbe_start_timers(pdata);
+
 	spin_unlock_irqrestore(&pdata->lock, flags);
 
 	DBGPR("<--xgbe_powerup\n");
@@ -952,6 +873,7 @@ int xgbe_powerup(struct net_device *netdev, unsigned int caller)
 static int xgbe_start(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_phy_if *phy_if = &pdata->phy_if;
 	struct net_device *netdev = pdata->netdev;
 	int ret;
 
@@ -959,7 +881,9 @@ static int xgbe_start(struct xgbe_prv_data *pdata)
 
 	hw_if->init(pdata);
 
-	phy_start(pdata->phydev);
+	ret = phy_if->phy_start(pdata);
+	if (ret)
+		goto err_phy;
 
 	xgbe_napi_enable(pdata, 1);
 
@@ -970,10 +894,11 @@ static int xgbe_start(struct xgbe_prv_data *pdata)
 	hw_if->enable_tx(pdata);
 	hw_if->enable_rx(pdata);
 
-	xgbe_init_tx_timers(pdata);
-
 	netif_tx_start_all_queues(netdev);
 
+	xgbe_start_timers(pdata);
+	schedule_work(&pdata->service_work);
+
 	DBGPR("<--xgbe_start\n");
 
 	return 0;
@@ -981,8 +906,9 @@ static int xgbe_start(struct xgbe_prv_data *pdata)
 err_napi:
 	xgbe_napi_disable(pdata, 1);
 
-	phy_stop(pdata->phydev);
+	phy_if->phy_stop(pdata);
 
+err_phy:
 	hw_if->exit(pdata);
 
 	return ret;
@@ -991,6 +917,7 @@ err_napi:
 static void xgbe_stop(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_phy_if *phy_if = &pdata->phy_if;
 	struct xgbe_channel *channel;
 	struct net_device *netdev = pdata->netdev;
 	struct netdev_queue *txq;
@@ -1000,7 +927,8 @@ static void xgbe_stop(struct xgbe_prv_data *pdata)
 
 	netif_tx_stop_all_queues(netdev);
 
-	xgbe_stop_tx_timers(pdata);
+	xgbe_stop_timers(pdata);
+	flush_workqueue(pdata->dev_workqueue);
 
 	hw_if->disable_tx(pdata);
 	hw_if->disable_rx(pdata);
@@ -1009,7 +937,7 @@ static void xgbe_stop(struct xgbe_prv_data *pdata)
 
 	xgbe_napi_disable(pdata, 1);
 
-	phy_stop(pdata->phydev);
+	phy_if->phy_stop(pdata);
 
 	hw_if->exit(pdata);
 
@@ -1380,7 +1308,7 @@ static int xgbe_open(struct net_device *netdev)
 	ret = clk_prepare_enable(pdata->sysclk);
 	if (ret) {
 		netdev_alert(netdev, "dma clk_prepare_enable failed\n");
-		goto err_phy_init;
+		return ret;
 	}
 
 	ret = clk_prepare_enable(pdata->ptpclk);
@@ -1405,14 +1333,17 @@ static int xgbe_open(struct net_device *netdev)
 	if (ret)
 		goto err_channels;
 
-	/* Initialize the device restart and Tx timestamp work struct */
+	INIT_WORK(&pdata->service_work, xgbe_service);
 	INIT_WORK(&pdata->restart_work, xgbe_restart);
 	INIT_WORK(&pdata->tx_tstamp_work, xgbe_tx_tstamp);
+	xgbe_init_timers(pdata);
 
 	ret = xgbe_start(pdata);
 	if (ret)
 		goto err_rings;
 
+	clear_bit(XGBE_DOWN, &pdata->dev_state);
+
 	DBGPR("<--xgbe_open\n");
 
 	return 0;
@@ -1429,9 +1360,6 @@ err_ptpclk:
 err_sysclk:
 	clk_disable_unprepare(pdata->sysclk);
 
-err_phy_init:
-	xgbe_phy_exit(pdata);
-
 	return ret;
 }
 
@@ -1455,8 +1383,7 @@ static int xgbe_close(struct net_device *netdev)
 	clk_disable_unprepare(pdata->ptpclk);
 	clk_disable_unprepare(pdata->sysclk);
 
-	/* Release the phy */
-	xgbe_phy_exit(pdata);
+	set_bit(XGBE_DOWN, &pdata->dev_state);
 
 	DBGPR("<--xgbe_close\n");
 
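The xgbe-drv.c changes above replace phylib's periodic state machine with a
one-second service timer that queues work to call phy_if.phy_status().  A
condensed sketch of that pattern in isolation (illustrative only; the names
are taken from the diff, error handling omitted), including the teardown
ordering the stop path relies on:

	/* arm: the timer re-arms itself and schedules the service work */
	setup_timer(&pdata->service_timer, xgbe_service_timer,
		    (unsigned long)pdata);
	mod_timer(&pdata->service_timer, jiffies + HZ);

	/* tear down: kill the timer first so no new work can be queued,
	 * then drain any service work already in flight
	 */
	del_timer_sync(&pdata->service_timer);
	flush_workqueue(pdata->dev_workqueue);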
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
index 95baa86..b24a78c3 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
@@ -258,7 +258,6 @@ static int xgbe_set_pauseparam(struct net_device *netdev,
 			       struct ethtool_pauseparam *pause)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
-	struct phy_device *phydev = pdata->phydev;
 	int ret = 0;
 
 	DBGPR("-->xgbe_set_pauseparam\n");
@@ -268,19 +267,19 @@ static int xgbe_set_pauseparam(struct net_device *netdev,
 
 	pdata->pause_autoneg = pause->autoneg;
 	if (pause->autoneg) {
-		phydev->advertising |= ADVERTISED_Pause;
-		phydev->advertising |= ADVERTISED_Asym_Pause;
+		pdata->phy.advertising |= ADVERTISED_Pause;
+		pdata->phy.advertising |= ADVERTISED_Asym_Pause;
 
 	} else {
-		phydev->advertising &= ~ADVERTISED_Pause;
-		phydev->advertising &= ~ADVERTISED_Asym_Pause;
+		pdata->phy.advertising &= ~ADVERTISED_Pause;
+		pdata->phy.advertising &= ~ADVERTISED_Asym_Pause;
 
 		pdata->tx_pause = pause->tx_pause;
 		pdata->rx_pause = pause->rx_pause;
 	}
 
 	if (netif_running(netdev))
-		ret = phy_start_aneg(phydev);
+		ret = pdata->phy_if.phy_config_aneg(pdata);
 
 	DBGPR("<--xgbe_set_pauseparam\n");
 
@@ -291,36 +290,39 @@ static int xgbe_get_settings(struct net_device *netdev,
 			     struct ethtool_cmd *cmd)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
-	int ret;
 
 	DBGPR("-->xgbe_get_settings\n");
 
-	if (!pdata->phydev)
-		return -ENODEV;
+	cmd->phy_address = pdata->phy.address;
+
+	cmd->supported = pdata->phy.supported;
+	cmd->advertising = pdata->phy.advertising;
+	cmd->lp_advertising = pdata->phy.lp_advertising;
+
+	cmd->autoneg = pdata->phy.autoneg;
+	ethtool_cmd_speed_set(cmd, pdata->phy.speed);
+	cmd->duplex = pdata->phy.duplex;
 
-	ret = phy_ethtool_gset(pdata->phydev, cmd);
+	cmd->port = PORT_NONE;
+	cmd->transceiver = XCVR_INTERNAL;
 
 	DBGPR("<--xgbe_get_settings\n");
 
-	return ret;
+	return 0;
 }
 
 static int xgbe_set_settings(struct net_device *netdev,
 			     struct ethtool_cmd *cmd)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
-	struct phy_device *phydev = pdata->phydev;
 	u32 speed;
 	int ret;
 
 	DBGPR("-->xgbe_set_settings\n");
 
-	if (!pdata->phydev)
-		return -ENODEV;
-
 	speed = ethtool_cmd_speed(cmd);
 
-	if (cmd->phy_address != phydev->addr)
+	if (cmd->phy_address != pdata->phy.address)
 		return -EINVAL;
 
 	if ((cmd->autoneg != AUTONEG_ENABLE) &&
@@ -341,23 +343,23 @@ static int xgbe_set_settings(struct net_device *netdev,
 			return -EINVAL;
 	}
 
-	cmd->advertising &= phydev->supported;
+	cmd->advertising &= pdata->phy.supported;
 	if ((cmd->autoneg == AUTONEG_ENABLE) && !cmd->advertising)
 		return -EINVAL;
 
 	ret = 0;
-	phydev->autoneg = cmd->autoneg;
-	phydev->speed = speed;
-	phydev->duplex = cmd->duplex;
-	phydev->advertising = cmd->advertising;
+	pdata->phy.autoneg = cmd->autoneg;
+	pdata->phy.speed = speed;
+	pdata->phy.duplex = cmd->duplex;
+	pdata->phy.advertising = cmd->advertising;
 
 	if (cmd->autoneg == AUTONEG_ENABLE)
-		phydev->advertising |= ADVERTISED_Autoneg;
+		pdata->phy.advertising |= ADVERTISED_Autoneg;
 	else
-		phydev->advertising &= ~ADVERTISED_Autoneg;
+		pdata->phy.advertising &= ~ADVERTISED_Autoneg;
 
 	if (netif_running(netdev))
-		ret = phy_start_aneg(phydev);
+		ret = pdata->phy_if.phy_config_aneg(pdata);
 
 	DBGPR("<--xgbe_set_settings\n");
 
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index ae869d4..0c219b3 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -124,9 +124,11 @@
 #include <linux/of.h>
 #include <linux/of_net.h>
 #include <linux/of_address.h>
+#include <linux/of_platform.h>
 #include <linux/clk.h>
 #include <linux/property.h>
 #include <linux/acpi.h>
+#include <linux/mdio.h>
 
 #include "xgbe.h"
 #include "xgbe-common.h"
@@ -143,6 +145,42 @@ MODULE_PARM_DESC(debug, " Network interface message level setting");
 static const u32 default_msg_level = (NETIF_MSG_LINK | NETIF_MSG_IFDOWN |
 				      NETIF_MSG_IFUP);
 
+static const u32 xgbe_serdes_blwc[] = {
+	XGBE_SPEED_1000_BLWC,
+	XGBE_SPEED_2500_BLWC,
+	XGBE_SPEED_10000_BLWC,
+};
+
+static const u32 xgbe_serdes_cdr_rate[] = {
+	XGBE_SPEED_1000_CDR,
+	XGBE_SPEED_2500_CDR,
+	XGBE_SPEED_10000_CDR,
+};
+
+static const u32 xgbe_serdes_pq_skew[] = {
+	XGBE_SPEED_1000_PQ,
+	XGBE_SPEED_2500_PQ,
+	XGBE_SPEED_10000_PQ,
+};
+
+static const u32 xgbe_serdes_tx_amp[] = {
+	XGBE_SPEED_1000_TXAMP,
+	XGBE_SPEED_2500_TXAMP,
+	XGBE_SPEED_10000_TXAMP,
+};
+
+static const u32 xgbe_serdes_dfe_tap_cfg[] = {
+	XGBE_SPEED_1000_DFE_TAP_CONFIG,
+	XGBE_SPEED_2500_DFE_TAP_CONFIG,
+	XGBE_SPEED_10000_DFE_TAP_CONFIG,
+};
+
+static const u32 xgbe_serdes_dfe_tap_ena[] = {
+	XGBE_SPEED_1000_DFE_TAP_ENABLE,
+	XGBE_SPEED_2500_DFE_TAP_ENABLE,
+	XGBE_SPEED_10000_DFE_TAP_ENABLE,
+};
+
 static void xgbe_default_config(struct xgbe_prv_data *pdata)
 {
 	DBGPR("-->xgbe_default_config\n");
@@ -160,8 +198,6 @@ static void xgbe_default_config(struct xgbe_prv_data *pdata)
 	pdata->rx_pause = 1;
 	pdata->phy_speed = SPEED_UNKNOWN;
 	pdata->power_down = 0;
-	pdata->default_autoneg = AUTONEG_ENABLE;
-	pdata->default_speed = SPEED_10000;
 
 	DBGPR("<--xgbe_default_config\n");
 }
@@ -169,6 +205,7 @@ static void xgbe_default_config(struct xgbe_prv_data *pdata)
 static void xgbe_init_all_fptrs(struct xgbe_prv_data *pdata)
 {
 	xgbe_init_function_ptrs_dev(&pdata->hw_if);
+	xgbe_init_function_ptrs_phy(&pdata->phy_if);
 	xgbe_init_function_ptrs_desc(&pdata->desc_if);
 }
 
@@ -255,23 +292,75 @@ static int xgbe_of_support(struct xgbe_prv_data *pdata)
 
 	return 0;
 }
+
+static struct platform_device *xgbe_of_get_phy_pdev(struct xgbe_prv_data *pdata)
+{
+	struct device *dev = pdata->dev;
+	struct device_node *phy_node;
+	struct platform_device *phy_pdev;
+
+	phy_node = of_parse_phandle(dev->of_node, "phy-handle", 0);
+	if (!phy_node) {
+		dev_err(dev, "unable to locate phy device\n");
+		return NULL;
+	}
+
+	phy_pdev = of_find_device_by_node(phy_node);
+	of_node_put(phy_node);
+
+	return phy_pdev;
+}
 #else   /* CONFIG_OF */
 static int xgbe_of_support(struct xgbe_prv_data *pdata)
 {
 	return -EINVAL;
 }
-#endif  /*CONFIG_OF */
+
+static struct platform_device *xgbe_of_get_phy_pdev(struct xgbe_prv_data *pdata)
+{
+	return NULL;
+}
+#endif  /* CONFIG_OF */
+
+static unsigned int xgbe_resource_count(struct platform_device *pdev,
+					unsigned int type)
+{
+	unsigned int count;
+	int i;
+
+	for (i = 0, count = 0; i < pdev->num_resources; i++) {
+		struct resource *res = &pdev->resource[i];
+
+		if (type == resource_type(res))
+			count++;
+	}
+
+	return count;
+}
+
+static struct platform_device *xgbe_get_phy_pdev(struct xgbe_prv_data *pdata)
+{
+	struct platform_device *phy_pdev;
+
+	if (pdata->use_acpi) {
+		get_device(pdata->dev);
+		phy_pdev = pdata->pdev;
+	} else {
+		phy_pdev = xgbe_of_get_phy_pdev(pdata);
+	}
+
+	return phy_pdev;
+}
 
 static int xgbe_probe(struct platform_device *pdev)
 {
 	struct xgbe_prv_data *pdata;
-	struct xgbe_hw_if *hw_if;
-	struct xgbe_desc_if *desc_if;
 	struct net_device *netdev;
-	struct device *dev = &pdev->dev;
+	struct device *dev = &pdev->dev, *phy_dev;
+	struct platform_device *phy_pdev;
 	struct resource *res;
 	const char *phy_mode;
-	unsigned int i;
+	unsigned int i, phy_memnum, phy_irqnum;
 	int ret;
 
 	DBGPR("--> xgbe_probe\n");
@@ -298,9 +387,34 @@ static int xgbe_probe(struct platform_device *pdev)
 
 	pdata->msg_enable = netif_msg_init(debug, default_msg_level);
 
+	set_bit(XGBE_DOWN, &pdata->dev_state);
+
 	/* Check if we should use ACPI or DT */
 	pdata->use_acpi = (!pdata->adev || acpi_disabled) ? 0 : 1;
 
+	phy_pdev = xgbe_get_phy_pdev(pdata);
+	if (!phy_pdev) {
+		dev_err(dev, "unable to obtain phy device\n");
+		ret = -EINVAL;
+		goto err_phydev;
+	}
+	phy_dev = &phy_pdev->dev;
+
+	if (pdev == phy_pdev) {
+		/* ACPI:
+		 *   The XGBE and PHY resources are grouped together with
+		 *   the PHY resources listed last
+		 */
+		phy_memnum = xgbe_resource_count(pdev, IORESOURCE_MEM) - 3;
+		phy_irqnum = xgbe_resource_count(pdev, IORESOURCE_IRQ) - 1;
+	} else {
+		/* Device tree:
+		 *   The XGBE and PHY resources are separate
+		 */
+		phy_memnum = 0;
+		phy_irqnum = 0;
+	}
+
 	/* Set and validate the number of descriptors for a ring */
 	BUILD_BUG_ON_NOT_POWER_OF_2(XGBE_TX_DESC_CNT);
 	pdata->tx_desc_count = XGBE_TX_DESC_CNT;
@@ -340,6 +454,36 @@ static int xgbe_probe(struct platform_device *pdev)
 	if (netif_msg_probe(pdata))
 		dev_dbg(dev, "xpcs_regs  = %p\n", pdata->xpcs_regs);
 
+	res = platform_get_resource(phy_pdev, IORESOURCE_MEM, phy_memnum++);
+	pdata->rxtx_regs = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pdata->rxtx_regs)) {
+		dev_err(dev, "rxtx ioremap failed\n");
+		ret = PTR_ERR(pdata->rxtx_regs);
+		goto err_io;
+	}
+	if (netif_msg_probe(pdata))
+		dev_dbg(dev, "rxtx_regs  = %p\n", pdata->rxtx_regs);
+
+	res = platform_get_resource(phy_pdev, IORESOURCE_MEM, phy_memnum++);
+	pdata->sir0_regs = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pdata->sir0_regs)) {
+		dev_err(dev, "sir0 ioremap failed\n");
+		ret = PTR_ERR(pdata->sir0_regs);
+		goto err_io;
+	}
+	if (netif_msg_probe(pdata))
+		dev_dbg(dev, "sir0_regs  = %p\n", pdata->sir0_regs);
+
+	res = platform_get_resource(phy_pdev, IORESOURCE_MEM, phy_memnum++);
+	pdata->sir1_regs = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pdata->sir1_regs)) {
+		dev_err(dev, "sir1 ioremap failed\n");
+		ret = PTR_ERR(pdata->sir1_regs);
+		goto err_io;
+	}
+	if (netif_msg_probe(pdata))
+		dev_dbg(dev, "sir1_regs  = %p\n", pdata->sir1_regs);
+
 	/* Retrieve the MAC address */
 	ret = device_property_read_u8_array(dev, XGBE_MAC_ADDR_PROPERTY,
 					    pdata->mac_addr,
@@ -366,6 +510,115 @@ static int xgbe_probe(struct platform_device *pdev)
 	if (device_property_present(dev, XGBE_DMA_IRQS_PROPERTY))
 		pdata->per_channel_irq = 1;
 
+	/* Retrieve the PHY speedset */
+	ret = device_property_read_u32(phy_dev, XGBE_SPEEDSET_PROPERTY,
+				       &pdata->speed_set);
+	if (ret) {
+		dev_err(dev, "invalid %s property\n", XGBE_SPEEDSET_PROPERTY);
+		goto err_io;
+	}
+
+	switch (pdata->speed_set) {
+	case XGBE_SPEEDSET_1000_10000:
+	case XGBE_SPEEDSET_2500_10000:
+		break;
+	default:
+		dev_err(dev, "invalid %s property\n", XGBE_SPEEDSET_PROPERTY);
+		ret = -EINVAL;
+		goto err_io;
+	}
+
+	/* Retrieve the PHY configuration properties */
+	if (device_property_present(phy_dev, XGBE_BLWC_PROPERTY)) {
+		ret = device_property_read_u32_array(phy_dev,
+						     XGBE_BLWC_PROPERTY,
+						     pdata->serdes_blwc,
+						     XGBE_SPEEDS);
+		if (ret) {
+			dev_err(dev, "invalid %s property\n",
+				XGBE_BLWC_PROPERTY);
+			goto err_io;
+		}
+	} else {
+		memcpy(pdata->serdes_blwc, xgbe_serdes_blwc,
+		       sizeof(pdata->serdes_blwc));
+	}
+
+	if (device_property_present(phy_dev, XGBE_CDR_RATE_PROPERTY)) {
+		ret = device_property_read_u32_array(phy_dev,
+						     XGBE_CDR_RATE_PROPERTY,
+						     pdata->serdes_cdr_rate,
+						     XGBE_SPEEDS);
+		if (ret) {
+			dev_err(dev, "invalid %s property\n",
+				XGBE_CDR_RATE_PROPERTY);
+			goto err_io;
+		}
+	} else {
+		memcpy(pdata->serdes_cdr_rate, xgbe_serdes_cdr_rate,
+		       sizeof(pdata->serdes_cdr_rate));
+	}
+
+	if (device_property_present(phy_dev, XGBE_PQ_SKEW_PROPERTY)) {
+		ret = device_property_read_u32_array(phy_dev,
+						     XGBE_PQ_SKEW_PROPERTY,
+						     pdata->serdes_pq_skew,
+						     XGBE_SPEEDS);
+		if (ret) {
+			dev_err(dev, "invalid %s property\n",
+				XGBE_PQ_SKEW_PROPERTY);
+			goto err_io;
+		}
+	} else {
+		memcpy(pdata->serdes_pq_skew, xgbe_serdes_pq_skew,
+		       sizeof(pdata->serdes_pq_skew));
+	}
+
+	if (device_property_present(phy_dev, XGBE_TX_AMP_PROPERTY)) {
+		ret = device_property_read_u32_array(phy_dev,
+						     XGBE_TX_AMP_PROPERTY,
+						     pdata->serdes_tx_amp,
+						     XGBE_SPEEDS);
+		if (ret) {
+			dev_err(dev, "invalid %s property\n",
+				XGBE_TX_AMP_PROPERTY);
+			goto err_io;
+		}
+	} else {
+		memcpy(pdata->serdes_tx_amp, xgbe_serdes_tx_amp,
+		       sizeof(pdata->serdes_tx_amp));
+	}
+
+	if (device_property_present(phy_dev, XGBE_DFE_CFG_PROPERTY)) {
+		ret = device_property_read_u32_array(phy_dev,
+						     XGBE_DFE_CFG_PROPERTY,
+						     pdata->serdes_dfe_tap_cfg,
+						     XGBE_SPEEDS);
+		if (ret) {
+			dev_err(dev, "invalid %s property\n",
+				XGBE_DFE_CFG_PROPERTY);
+			goto err_io;
+		}
+	} else {
+		memcpy(pdata->serdes_dfe_tap_cfg, xgbe_serdes_dfe_tap_cfg,
+		       sizeof(pdata->serdes_dfe_tap_cfg));
+	}
+
+	if (device_property_present(phy_dev, XGBE_DFE_ENA_PROPERTY)) {
+		ret = device_property_read_u32_array(phy_dev,
+						     XGBE_DFE_ENA_PROPERTY,
+						     pdata->serdes_dfe_tap_ena,
+						     XGBE_SPEEDS);
+		if (ret) {
+			dev_err(dev, "invalid %s property\n",
+				XGBE_DFE_ENA_PROPERTY);
+			goto err_io;
+		}
+	} else {
+		memcpy(pdata->serdes_dfe_tap_ena, xgbe_serdes_dfe_tap_ena,
+		       sizeof(pdata->serdes_dfe_tap_ena));
+	}
+
 	/* Obtain device settings unique to ACPI/OF */
 	if (pdata->use_acpi)
 		ret = xgbe_acpi_support(pdata);
@@ -393,17 +646,23 @@ static int xgbe_probe(struct platform_device *pdev)
 	}
 	pdata->dev_irq = ret;
 
+	/* Get the auto-negotiation interrupt */
+	ret = platform_get_irq(phy_pdev, phy_irqnum++);
+	if (ret < 0) {
+		dev_err(dev, "platform_get_irq phy 0 failed\n");
+		goto err_io;
+	}
+	pdata->an_irq = ret;
+
 	netdev->irq = pdata->dev_irq;
 	netdev->base_addr = (unsigned long)pdata->xgmac_regs;
 	memcpy(netdev->dev_addr, pdata->mac_addr, netdev->addr_len);
 
 	/* Set all the function pointers */
 	xgbe_init_all_fptrs(pdata);
-	hw_if = &pdata->hw_if;
-	desc_if = &pdata->desc_if;
 
 	/* Issue software reset to device */
-	hw_if->exit(pdata);
+	pdata->hw_if.exit(pdata);
 
 	/* Populate the hardware features */
 	xgbe_get_all_hw_features(pdata);
@@ -458,16 +717,8 @@ static int xgbe_probe(struct platform_device *pdev)
 	XGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
 	XGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 
-	/* Prepare to regsiter with MDIO */
-	pdata->mii_bus_id = kasprintf(GFP_KERNEL, "%s", pdev->name);
-	if (!pdata->mii_bus_id) {
-		dev_err(dev, "failed to allocate mii bus id\n");
-		ret = -ENOMEM;
-		goto err_io;
-	}
-	ret = xgbe_mdio_register(pdata);
-	if (ret)
-		goto err_bus_id;
+	/* Call MDIO/PHY initialization routine */
+	pdata->phy_if.phy_init(pdata);
 
 	/* Set device operations */
 	netdev->netdev_ops = xgbe_get_netdev_ops();
@@ -512,26 +763,52 @@ static int xgbe_probe(struct platform_device *pdev)
 	ret = register_netdev(netdev);
 	if (ret) {
 		dev_err(dev, "net device registration failed\n");
-		goto err_reg_netdev;
+		goto err_io;
+	}
+
+	/* Create the PHY/ANEG name based on netdev name */
+	snprintf(pdata->an_name, sizeof(pdata->an_name) - 1, "%s-pcs",
+		 netdev_name(netdev));
+
+	/* Create workqueues */
+	pdata->dev_workqueue =
+		create_singlethread_workqueue(netdev_name(netdev));
+	if (!pdata->dev_workqueue) {
+		netdev_err(netdev, "device workqueue creation failed\n");
+		ret = -ENOMEM;
+		goto err_netdev;
+	}
+
+	pdata->an_workqueue =
+		create_singlethread_workqueue(pdata->an_name);
+	if (!pdata->an_workqueue) {
+		netdev_err(netdev, "phy workqueue creation failed\n");
+		ret = -ENOMEM;
+		goto err_wq;
 	}
 
 	xgbe_ptp_register(pdata);
 
 	xgbe_debugfs_init(pdata);
 
+	platform_device_put(phy_pdev);
+
 	netdev_notice(netdev, "net device enabled\n");
 
 	DBGPR("<-- xgbe_probe\n");
 
 	return 0;
 
-err_reg_netdev:
-	xgbe_mdio_unregister(pdata);
+err_wq:
+	destroy_workqueue(pdata->dev_workqueue);
 
-err_bus_id:
-	kfree(pdata->mii_bus_id);
+err_netdev:
+	unregister_netdev(netdev);
 
 err_io:
+	platform_device_put(phy_pdev);
+
+err_phydev:
 	free_netdev(netdev);
 
 err_alloc:
@@ -551,11 +828,13 @@ static int xgbe_remove(struct platform_device *pdev)
 
 	xgbe_ptp_unregister(pdata);
 
-	unregister_netdev(netdev);
+	flush_workqueue(pdata->an_workqueue);
+	destroy_workqueue(pdata->an_workqueue);
 
-	xgbe_mdio_unregister(pdata);
+	flush_workqueue(pdata->dev_workqueue);
+	destroy_workqueue(pdata->dev_workqueue);
 
-	kfree(pdata->mii_bus_id);
+	unregister_netdev(netdev);
 
 	free_netdev(netdev);
 
@@ -568,16 +847,17 @@ static int xgbe_remove(struct platform_device *pdev)
 static int xgbe_suspend(struct device *dev)
 {
 	struct net_device *netdev = dev_get_drvdata(dev);
-	int ret;
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	int ret = 0;
 
 	DBGPR("-->xgbe_suspend\n");
 
-	if (!netif_running(netdev)) {
-		DBGPR("<--xgbe_dev_suspend\n");
-		return -EINVAL;
-	}
+	if (netif_running(netdev))
+		ret = xgbe_powerdown(netdev, XGMAC_DRIVER_CONTEXT);
 
-	ret = xgbe_powerdown(netdev, XGMAC_DRIVER_CONTEXT);
+	pdata->lpm_ctrl = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL1);
+	pdata->lpm_ctrl |= MDIO_CTRL1_LPOWER;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL1, pdata->lpm_ctrl);
 
 	DBGPR("<--xgbe_suspend\n");
 
@@ -587,16 +867,16 @@ static int xgbe_suspend(struct device *dev)
 static int xgbe_resume(struct device *dev)
 {
 	struct net_device *netdev = dev_get_drvdata(dev);
-	int ret;
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	int ret = 0;
 
 	DBGPR("-->xgbe_resume\n");
 
-	if (!netif_running(netdev)) {
-		DBGPR("<--xgbe_dev_resume\n");
-		return -EINVAL;
-	}
+	pdata->lpm_ctrl &= ~MDIO_CTRL1_LPOWER;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL1, pdata->lpm_ctrl);
 
-	ret = xgbe_powerup(netdev, XGMAC_DRIVER_CONTEXT);
+	if (netif_running(netdev))
+		ret = xgbe_powerup(netdev, XGMAC_DRIVER_CONTEXT);
 
 	DBGPR("<--xgbe_resume\n");
 
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index 532a67f..1ae4bfb 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -119,48 +119,1036 @@
 #include <linux/mdio.h>
 #include <linux/phy.h>
 #include <linux/of.h>
+#include <linux/bitops.h>
+#include <linux/jiffies.h>
 
 #include "xgbe.h"
 #include "xgbe-common.h"
 
-static int xgbe_mdio_read(struct mii_bus *mii, int prtad, int mmd_reg)
+static void xgbe_an_enable_kr_training(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_prv_data *pdata = mii->priv;
-	struct xgbe_hw_if *hw_if = &pdata->hw_if;
-	int mmd_data;
+	unsigned int reg;
 
-	DBGPR_MDIO("-->xgbe_mdio_read: prtad=%#x mmd_reg=%#x\n",
-		   prtad, mmd_reg);
+	reg = XMDIO_READ(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL);
 
-	mmd_data = hw_if->read_mmd_regs(pdata, prtad, mmd_reg);
+	reg |= XGBE_KR_TRAINING_ENABLE;
+	XMDIO_WRITE(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, reg);
+}
+
+static void xgbe_an_disable_kr_training(struct xgbe_prv_data *pdata)
+{
+	unsigned int reg;
+
+	reg = XMDIO_READ(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL);
+
+	reg &= ~XGBE_KR_TRAINING_ENABLE;
+	XMDIO_WRITE(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, reg);
+}
+
+static void xgbe_pcs_power_cycle(struct xgbe_prv_data *pdata)
+{
+	unsigned int reg;
+
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL1);
+
+	reg |= MDIO_CTRL1_LPOWER;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL1, reg);
+
+	usleep_range(75, 100);
+
+	reg &= ~MDIO_CTRL1_LPOWER;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL1, reg);
+}
+
+static void xgbe_serdes_start_ratechange(struct xgbe_prv_data *pdata)
+{
+	/* Assert Rx and Tx ratechange */
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, RATECHANGE, 1);
+}
+
+static void xgbe_serdes_complete_ratechange(struct xgbe_prv_data *pdata)
+{
+	unsigned int wait;
+	u16 status;
+
+	/* Release Rx and Tx ratechange */
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, RATECHANGE, 0);
+
+	/* Wait for Rx and Tx ready */
+	wait = XGBE_RATECHANGE_COUNT;
+	while (wait--) {
+		usleep_range(50, 75);
+
+		status = XSIR0_IOREAD(pdata, SIR0_STATUS);
+		if (XSIR_GET_BITS(status, SIR0_STATUS, RX_READY) &&
+		    XSIR_GET_BITS(status, SIR0_STATUS, TX_READY))
+			goto rx_reset;
+	}
+
+	netdev_dbg(pdata->netdev, "SerDes rx/tx not ready (%#hx)\n",
+		   status);
+
+rx_reset:
+	/* Perform Rx reset for the DFE changes */
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG6, RESETB_RXD, 0);
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG6, RESETB_RXD, 1);
+}
+
+static void xgbe_xgmii_mode(struct xgbe_prv_data *pdata)
+{
+	unsigned int reg;
+
+	/* Enable KR training */
+	xgbe_an_enable_kr_training(pdata);
+
+	/* Set MAC to 10G speed */
+	pdata->hw_if.set_xgmii_speed(pdata);
+
+	/* Set PCS to KR/10G speed */
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL2);
+	reg &= ~MDIO_PCS_CTRL2_TYPE;
+	reg |= MDIO_PCS_CTRL2_10GBR;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL2, reg);
+
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL1);
+	reg &= ~MDIO_CTRL1_SPEEDSEL;
+	reg |= MDIO_CTRL1_SPEED10G;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL1, reg);
+
+	xgbe_pcs_power_cycle(pdata);
+
+	/* Set SerDes to 10G speed */
+	xgbe_serdes_start_ratechange(pdata);
+
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, DATARATE, XGBE_SPEED_10000_RATE);
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, WORDMODE, XGBE_SPEED_10000_WORD);
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, PLLSEL, XGBE_SPEED_10000_PLL);
+
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, CDR_RATE,
+			   pdata->serdes_cdr_rate[XGBE_SPEED_10000]);
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, TXAMP,
+			   pdata->serdes_tx_amp[XGBE_SPEED_10000]);
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG20, BLWC_ENA,
+			   pdata->serdes_blwc[XGBE_SPEED_10000]);
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG114, PQ_REG,
+			   pdata->serdes_pq_skew[XGBE_SPEED_10000]);
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG129, RXDFE_CONFIG,
+			   pdata->serdes_dfe_tap_cfg[XGBE_SPEED_10000]);
+	XRXTX_IOWRITE(pdata, RXTX_REG22,
+		      pdata->serdes_dfe_tap_ena[XGBE_SPEED_10000]);
+
+	xgbe_serdes_complete_ratechange(pdata);
+}
+
+static void xgbe_gmii_2500_mode(struct xgbe_prv_data *pdata)
+{
+	unsigned int reg;
+
+	/* Disable KR training */
+	xgbe_an_disable_kr_training(pdata);
+
+	/* Set MAC to 2.5G speed */
+	pdata->hw_if.set_gmii_2500_speed(pdata);
+
+	/* Set PCS to KX/1G speed */
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL2);
+	reg &= ~MDIO_PCS_CTRL2_TYPE;
+	reg |= MDIO_PCS_CTRL2_10GBX;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL2, reg);
+
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL1);
+	reg &= ~MDIO_CTRL1_SPEEDSEL;
+	reg |= MDIO_CTRL1_SPEED1G;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL1, reg);
+
+	xgbe_pcs_power_cycle(pdata);
+
+	/* Set SerDes to 2.5G speed */
+	xgbe_serdes_start_ratechange(pdata);
+
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, DATARATE, XGBE_SPEED_2500_RATE);
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, WORDMODE, XGBE_SPEED_2500_WORD);
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, PLLSEL, XGBE_SPEED_2500_PLL);
+
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, CDR_RATE,
+			   pdata->serdes_cdr_rate[XGBE_SPEED_2500]);
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, TXAMP,
+			   pdata->serdes_tx_amp[XGBE_SPEED_2500]);
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG20, BLWC_ENA,
+			   pdata->serdes_blwc[XGBE_SPEED_2500]);
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG114, PQ_REG,
+			   pdata->serdes_pq_skew[XGBE_SPEED_2500]);
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG129, RXDFE_CONFIG,
+			   pdata->serdes_dfe_tap_cfg[XGBE_SPEED_2500]);
+	XRXTX_IOWRITE(pdata, RXTX_REG22,
+		      pdata->serdes_dfe_tap_ena[XGBE_SPEED_2500]);
+
+	xgbe_serdes_complete_ratechange(pdata);
+}
+
+static void xgbe_gmii_mode(struct xgbe_prv_data *pdata)
+{
+	unsigned int reg;
+
+	/* Disable KR training */
+	xgbe_an_disable_kr_training(pdata);
+
+	/* Set MAC to 1G speed */
+	pdata->hw_if.set_gmii_speed(pdata);
+
+	/* Set PCS to KX/1G speed */
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL2);
+	reg &= ~MDIO_PCS_CTRL2_TYPE;
+	reg |= MDIO_PCS_CTRL2_10GBX;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL2, reg);
+
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL1);
+	reg &= ~MDIO_CTRL1_SPEEDSEL;
+	reg |= MDIO_CTRL1_SPEED1G;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL1, reg);
+
+	xgbe_pcs_power_cycle(pdata);
+
+	/* Set SerDes to 1G speed */
+	xgbe_serdes_start_ratechange(pdata);
+
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, DATARATE, XGBE_SPEED_1000_RATE);
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, WORDMODE, XGBE_SPEED_1000_WORD);
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, PLLSEL, XGBE_SPEED_1000_PLL);
+
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, CDR_RATE,
+			   pdata->serdes_cdr_rate[XGBE_SPEED_1000]);
+	XSIR1_IOWRITE_BITS(pdata, SIR1_SPEED, TXAMP,
+			   pdata->serdes_tx_amp[XGBE_SPEED_1000]);
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG20, BLWC_ENA,
+			   pdata->serdes_blwc[XGBE_SPEED_1000]);
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG114, PQ_REG,
+			   pdata->serdes_pq_skew[XGBE_SPEED_1000]);
+	XRXTX_IOWRITE_BITS(pdata, RXTX_REG129, RXDFE_CONFIG,
+			   pdata->serdes_dfe_tap_cfg[XGBE_SPEED_1000]);
+	XRXTX_IOWRITE(pdata, RXTX_REG22,
+		      pdata->serdes_dfe_tap_ena[XGBE_SPEED_1000]);
+
+	xgbe_serdes_complete_ratechange(pdata);
+}
+
+static void xgbe_cur_mode(struct xgbe_prv_data *pdata,
+			  enum xgbe_mode *mode)
+{
+	unsigned int reg;
+
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL2);
+	if ((reg & MDIO_PCS_CTRL2_TYPE) == MDIO_PCS_CTRL2_10GBR)
+		*mode = XGBE_MODE_KR;
+	else
+		*mode = XGBE_MODE_KX;
+}
+
+static bool xgbe_in_kr_mode(struct xgbe_prv_data *pdata)
+{
+	enum xgbe_mode mode;
+
+	xgbe_cur_mode(pdata, &mode);
+
+	return (mode == XGBE_MODE_KR);
+}
+
+static void xgbe_switch_mode(struct xgbe_prv_data *pdata)
+{
+	/* If we are in KR switch to KX, and vice-versa */
+	if (xgbe_in_kr_mode(pdata)) {
+		if (pdata->speed_set == XGBE_SPEEDSET_1000_10000)
+			xgbe_gmii_mode(pdata);
+		else
+			xgbe_gmii_2500_mode(pdata);
+	} else {
+		xgbe_xgmii_mode(pdata);
+	}
+}
+
+static void xgbe_set_mode(struct xgbe_prv_data *pdata,
+			  enum xgbe_mode mode)
+{
+	enum xgbe_mode cur_mode;
+
+	xgbe_cur_mode(pdata, &cur_mode);
+	if (mode != cur_mode)
+		xgbe_switch_mode(pdata);
+}
+
+static void xgbe_set_an(struct xgbe_prv_data *pdata, bool enable, bool restart)
+{
+	unsigned int reg;
+
+	reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_CTRL1);
+	reg &= ~MDIO_AN_CTRL1_ENABLE;
+
+	if (enable)
+		reg |= MDIO_AN_CTRL1_ENABLE;
+
+	if (restart)
+		reg |= MDIO_AN_CTRL1_RESTART;
+
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_CTRL1, reg);
+}
+
+static void xgbe_restart_an(struct xgbe_prv_data *pdata)
+{
+	xgbe_set_an(pdata, true, true);
+}
+
+static void xgbe_disable_an(struct xgbe_prv_data *pdata)
+{
+	xgbe_set_an(pdata, false, false);
+}
+
+static enum xgbe_an xgbe_an_tx_training(struct xgbe_prv_data *pdata,
+					enum xgbe_rx *state)
+{
+	unsigned int ad_reg, lp_reg, reg;
+
+	*state = XGBE_RX_COMPLETE;
+
+	/* If we're not in KR mode then we're done */
+	if (!xgbe_in_kr_mode(pdata))
+		return XGBE_AN_PAGE_RECEIVED;
+
+	/* Enable/Disable FEC */
+	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
+	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA + 2);
+
+	reg = XMDIO_READ(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_FECCTRL);
+	reg &= ~(MDIO_PMA_10GBR_FECABLE_ABLE | MDIO_PMA_10GBR_FECABLE_ERRABLE);
+	if ((ad_reg & 0xc000) && (lp_reg & 0xc000))
+		reg |= pdata->fec_ability;
+
+	XMDIO_WRITE(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_FECCTRL, reg);
+
+	/* Start KR training */
+	reg = XMDIO_READ(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL);
+	if (reg & XGBE_KR_TRAINING_ENABLE) {
+		XSIR0_IOWRITE_BITS(pdata, SIR0_KR_RT_1, RESET, 1);
+
+		reg |= XGBE_KR_TRAINING_START;
+		XMDIO_WRITE(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL,
+			    reg);
+
+		XSIR0_IOWRITE_BITS(pdata, SIR0_KR_RT_1, RESET, 0);
+	}
+
+	return XGBE_AN_PAGE_RECEIVED;
+}
+
+static enum xgbe_an xgbe_an_tx_xnp(struct xgbe_prv_data *pdata,
+				   enum xgbe_rx *state)
+{
+	u16 msg;
+
+	*state = XGBE_RX_XNP;
+
+	msg = XGBE_XNP_MCF_NULL_MESSAGE;
+	msg |= XGBE_XNP_MP_FORMATTED;
+
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_XNP + 2, 0);
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_XNP + 1, 0);
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_XNP, msg);
+
+	return XGBE_AN_PAGE_RECEIVED;
+}
+
+static enum xgbe_an xgbe_an_rx_bpa(struct xgbe_prv_data *pdata,
+				   enum xgbe_rx *state)
+{
+	unsigned int link_support;
+	unsigned int reg, ad_reg, lp_reg;
+
+	/* Read Base Ability register 2 first */
+	reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA + 1);
+
+	/* Check for a supported mode, otherwise restart in a different one */
+	link_support = xgbe_in_kr_mode(pdata) ? 0x80 : 0x20;
+	if (!(reg & link_support))
+		return XGBE_AN_INCOMPAT_LINK;
+
+	/* Check Extended Next Page support */
+	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
+	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA);
+
+	return ((ad_reg & XGBE_XNP_NP_EXCHANGE) ||
+		(lp_reg & XGBE_XNP_NP_EXCHANGE))
+	       ? xgbe_an_tx_xnp(pdata, state)
+	       : xgbe_an_tx_training(pdata, state);
+}
+
+static enum xgbe_an xgbe_an_rx_xnp(struct xgbe_prv_data *pdata,
+				   enum xgbe_rx *state)
+{
+	unsigned int ad_reg, lp_reg;
+
+	/* Check Extended Next Page support */
+	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_XNP);
+	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPX);
+
+	return ((ad_reg & XGBE_XNP_NP_EXCHANGE) ||
+		(lp_reg & XGBE_XNP_NP_EXCHANGE))
+	       ? xgbe_an_tx_xnp(pdata, state)
+	       : xgbe_an_tx_training(pdata, state);
+}
+
+static enum xgbe_an xgbe_an_page_received(struct xgbe_prv_data *pdata)
+{
+	enum xgbe_rx *state;
+	unsigned long an_timeout;
+	enum xgbe_an ret;
+
+	if (!pdata->an_start) {
+		pdata->an_start = jiffies;
+	} else {
+		an_timeout = pdata->an_start +
+			     msecs_to_jiffies(XGBE_AN_MS_TIMEOUT);
+		if (time_after(jiffies, an_timeout)) {
+			/* Auto-negotiation timed out, reset state */
+			pdata->kr_state = XGBE_RX_BPA;
+			pdata->kx_state = XGBE_RX_BPA;
+
+			pdata->an_start = jiffies;
+		}
+	}
+
+	state = xgbe_in_kr_mode(pdata) ? &pdata->kr_state
+					   : &pdata->kx_state;
+
+	switch (*state) {
+	case XGBE_RX_BPA:
+		ret = xgbe_an_rx_bpa(pdata, state);
+		break;
+
+	case XGBE_RX_XNP:
+		ret = xgbe_an_rx_xnp(pdata, state);
+		break;
+
+	default:
+		ret = XGBE_AN_ERROR;
+	}
+
+	return ret;
+}
+
+static enum xgbe_an xgbe_an_incompat_link(struct xgbe_prv_data *pdata)
+{
+	/* Be sure we aren't looping trying to negotiate */
+	if (xgbe_in_kr_mode(pdata)) {
+		pdata->kr_state = XGBE_RX_ERROR;
+
+		if (!(pdata->phy.advertising & ADVERTISED_1000baseKX_Full) &&
+		    !(pdata->phy.advertising & ADVERTISED_2500baseX_Full))
+			return XGBE_AN_NO_LINK;
+
+		if (pdata->kx_state != XGBE_RX_BPA)
+			return XGBE_AN_NO_LINK;
+	} else {
+		pdata->kx_state = XGBE_RX_ERROR;
+
+		if (!(pdata->phy.advertising & ADVERTISED_10000baseKR_Full))
+			return XGBE_AN_NO_LINK;
+
+		if (pdata->kr_state != XGBE_RX_BPA)
+			return XGBE_AN_NO_LINK;
+	}
+
+	xgbe_disable_an(pdata);
+
+	xgbe_switch_mode(pdata);
+
+	xgbe_restart_an(pdata);
+
+	return XGBE_AN_INCOMPAT_LINK;
+}
+
+static irqreturn_t xgbe_an_isr(int irq, void *data)
+{
+	struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data;
+
+	/* Interrupt reason must be read and cleared outside of IRQ context */
+	disable_irq_nosync(pdata->an_irq);
+
+	queue_work(pdata->an_workqueue, &pdata->an_irq_work);
+
+	return IRQ_HANDLED;
+}
+
+static void xgbe_an_irq_work(struct work_struct *work)
+{
+	struct xgbe_prv_data *pdata = container_of(work,
+						   struct xgbe_prv_data,
+						   an_irq_work);
+
+	/* Avoid a race between enabling the IRQ and exiting the work by
+	 * waiting for the work to finish and then queueing it
+	 */
+	flush_work(&pdata->an_work);
+	queue_work(pdata->an_workqueue, &pdata->an_work);
+}
+
+static void xgbe_an_state_machine(struct work_struct *work)
+{
+	struct xgbe_prv_data *pdata = container_of(work,
+						   struct xgbe_prv_data,
+						   an_work);
+	enum xgbe_an cur_state = pdata->an_state;
+	unsigned int int_reg, int_mask;
+
+	mutex_lock(&pdata->an_mutex);
+
+	/* Read the interrupt */
+	int_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_INT);
+	if (!int_reg)
+		goto out;
+
+next_int:
+	if (int_reg & XGBE_AN_PG_RCV) {
+		pdata->an_state = XGBE_AN_PAGE_RECEIVED;
+		int_mask = XGBE_AN_PG_RCV;
+	} else if (int_reg & XGBE_AN_INC_LINK) {
+		pdata->an_state = XGBE_AN_INCOMPAT_LINK;
+		int_mask = XGBE_AN_INC_LINK;
+	} else if (int_reg & XGBE_AN_INT_CMPLT) {
+		pdata->an_state = XGBE_AN_COMPLETE;
+		int_mask = XGBE_AN_INT_CMPLT;
+	} else {
+		pdata->an_state = XGBE_AN_ERROR;
+		int_mask = 0;
+	}
+
+	/* Clear the interrupt to be processed */
+	int_reg &= ~int_mask;
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_INT, int_reg);
+
+	pdata->an_result = pdata->an_state;
+
+again:
+	cur_state = pdata->an_state;
+
+	switch (pdata->an_state) {
+	case XGBE_AN_READY:
+		pdata->an_supported = 0;
+		break;
+
+	case XGBE_AN_PAGE_RECEIVED:
+		pdata->an_state = xgbe_an_page_received(pdata);
+		pdata->an_supported++;
+		break;
 
-	DBGPR_MDIO("<--xgbe_mdio_read: mmd_data=%#x\n", mmd_data);
+	case XGBE_AN_INCOMPAT_LINK:
+		pdata->an_supported = 0;
+		pdata->parallel_detect = 0;
+		pdata->an_state = xgbe_an_incompat_link(pdata);
+		break;
 
-	return mmd_data;
+	case XGBE_AN_COMPLETE:
+		pdata->parallel_detect = pdata->an_supported ? 0 : 1;
+		netdev_dbg(pdata->netdev, "%s successful\n",
+			   pdata->an_supported ? "Auto negotiation"
+					       : "Parallel detection");
+		break;
+
+	case XGBE_AN_NO_LINK:
+		break;
+
+	default:
+		pdata->an_state = XGBE_AN_ERROR;
+	}
+
+	if (pdata->an_state == XGBE_AN_NO_LINK) {
+		int_reg = 0;
+		XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_INT, 0);
+	} else if (pdata->an_state == XGBE_AN_ERROR) {
+		netdev_err(pdata->netdev,
+			   "error during auto-negotiation, state=%u\n",
+			   cur_state);
+
+		int_reg = 0;
+		XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_INT, 0);
+	}
+
+	if (pdata->an_state >= XGBE_AN_COMPLETE) {
+		pdata->an_result = pdata->an_state;
+		pdata->an_state = XGBE_AN_READY;
+		pdata->kr_state = XGBE_RX_BPA;
+		pdata->kx_state = XGBE_RX_BPA;
+		pdata->an_start = 0;
+	}
+
+	if (cur_state != pdata->an_state)
+		goto again;
+
+	if (int_reg)
+		goto next_int;
+
+out:
+	enable_irq(pdata->an_irq);
+
+	mutex_unlock(&pdata->an_mutex);
 }
 
-static int xgbe_mdio_write(struct mii_bus *mii, int prtad, int mmd_reg,
-			   u16 mmd_val)
+static void xgbe_an_init(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_prv_data *pdata = mii->priv;
-	struct xgbe_hw_if *hw_if = &pdata->hw_if;
-	int mmd_data = mmd_val;
+	unsigned int reg;
+
+	/* Set up Advertisement register 3 first */
+	reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
+	if (pdata->phy.advertising & ADVERTISED_10000baseR_FEC)
+		reg |= 0xc000;
+	else
+		reg &= ~0xc000;
+
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2, reg);
+
+	/* Set up Advertisement register 2 next */
+	reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
+	if (pdata->phy.advertising & ADVERTISED_10000baseKR_Full)
+		reg |= 0x80;
+	else
+		reg &= ~0x80;
+
+	if ((pdata->phy.advertising & ADVERTISED_1000baseKX_Full) ||
+	    (pdata->phy.advertising & ADVERTISED_2500baseX_Full))
+		reg |= 0x20;
+	else
+		reg &= ~0x20;
+
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1, reg);
 
-	DBGPR_MDIO("-->xgbe_mdio_write: prtad=%#x mmd_reg=%#x mmd_data=%#x\n",
-		   prtad, mmd_reg, mmd_data);
+	/* Set up Advertisement register 1 last */
+	reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
+	if (pdata->phy.advertising & ADVERTISED_Pause)
+		reg |= 0x400;
+	else
+		reg &= ~0x400;
 
-	hw_if->write_mmd_regs(pdata, prtad, mmd_reg, mmd_data);
+	if (pdata->phy.advertising & ADVERTISED_Asym_Pause)
+		reg |= 0x800;
+	else
+		reg &= ~0x800;
+
+	/* We don't intend to perform XNP */
+	reg &= ~XGBE_XNP_NP_EXCHANGE;
+
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE, reg);
+}
+
+static const char *xgbe_phy_speed_string(int speed)
+{
+	switch (speed) {
+	case SPEED_1000:
+		return "1Gbps";
+	case SPEED_2500:
+		return "2.5Gbps";
+	case SPEED_10000:
+		return "10Gbps";
+	case SPEED_UNKNOWN:
+		return "Unknown";
+	default:
+		return "Unsupported";
+	}
+}
+
+static void xgbe_phy_print_status(struct xgbe_prv_data *pdata)
+{
+	if (pdata->phy.link)
+		netdev_info(pdata->netdev,
+			    "Link is Up - %s/%s - flow control %s\n",
+			    xgbe_phy_speed_string(pdata->phy.speed),
+			    pdata->phy.duplex == DUPLEX_FULL ? "Full" : "Half",
+			    pdata->phy.pause ? "rx/tx" : "off");
+	else
+		netdev_info(pdata->netdev, "Link is Down\n");
+}
+
+static void xgbe_phy_adjust_link(struct xgbe_prv_data *pdata)
+{
+	int new_state = 0;
+
+	if (pdata->phy.link) {
+		/* Flow control support */
+		if (pdata->pause_autoneg) {
+			if (pdata->phy.pause || pdata->phy.asym_pause) {
+				pdata->tx_pause = 1;
+				pdata->rx_pause = 1;
+			} else {
+				pdata->tx_pause = 0;
+				pdata->rx_pause = 0;
+			}
+		}
+
+		if (pdata->tx_pause != pdata->phy_tx_pause) {
+			pdata->hw_if.config_tx_flow_control(pdata);
+			pdata->phy_tx_pause = pdata->tx_pause;
+		}
+
+		if (pdata->rx_pause != pdata->phy_rx_pause) {
+			pdata->hw_if.config_rx_flow_control(pdata);
+			pdata->phy_rx_pause = pdata->rx_pause;
+		}
+
+		/* Speed support */
+		if (pdata->phy_speed != pdata->phy.speed) {
+			new_state = 1;
+			pdata->phy_speed = pdata->phy.speed;
+		}
+
+		if (pdata->phy_link != pdata->phy.link) {
+			new_state = 1;
+			pdata->phy_link = pdata->phy.link;
+		}
+	} else if (pdata->phy_link) {
+		new_state = 1;
+		pdata->phy_link = 0;
+		pdata->phy_speed = SPEED_UNKNOWN;
+	}
+
+	if (new_state && netif_msg_link(pdata))
+		xgbe_phy_print_status(pdata);
+}
+
+static int xgbe_phy_config_fixed(struct xgbe_prv_data *pdata)
+{
+	/* Disable auto-negotiation */
+	xgbe_disable_an(pdata);
+
+	/* Validate/Set specified speed */
+	switch (pdata->phy.speed) {
+	case SPEED_10000:
+		xgbe_set_mode(pdata, XGBE_MODE_KR);
+		break;
+
+	case SPEED_2500:
+	case SPEED_1000:
+		xgbe_set_mode(pdata, XGBE_MODE_KX);
+		break;
+
+	default:
+		return -EINVAL;
+	}
 
-	DBGPR_MDIO("<--xgbe_mdio_write\n");
+	/* Validate duplex mode */
+	if (pdata->phy.duplex != DUPLEX_FULL)
+		return -EINVAL;
+
+	pdata->phy.pause = 0;
+	pdata->phy.asym_pause = 0;
 
 	return 0;
 }
 
-void xgbe_dump_phy_registers(struct xgbe_prv_data *pdata)
+static int __xgbe_phy_config_aneg(struct xgbe_prv_data *pdata)
+{
+	set_bit(XGBE_LINK_INIT, &pdata->dev_state);
+	pdata->link_check = jiffies;
+
+	if (pdata->phy.autoneg != AUTONEG_ENABLE)
+		return xgbe_phy_config_fixed(pdata);
+
+	/* Disable auto-negotiation interrupt */
+	disable_irq(pdata->an_irq);
+
+	/* Start auto-negotiation in a supported mode */
+	if (pdata->phy.advertising & ADVERTISED_10000baseKR_Full) {
+		xgbe_set_mode(pdata, XGBE_MODE_KR);
+	} else if ((pdata->phy.advertising & ADVERTISED_1000baseKX_Full) ||
+		   (pdata->phy.advertising & ADVERTISED_2500baseX_Full)) {
+		xgbe_set_mode(pdata, XGBE_MODE_KX);
+	} else {
+		enable_irq(pdata->an_irq);
+		return -EINVAL;
+	}
+
+	/* Disable and stop any in progress auto-negotiation */
+	xgbe_disable_an(pdata);
+
+	/* Clear any auto-negotiation interrupts */
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_INT, 0);
+
+	pdata->an_result = XGBE_AN_READY;
+	pdata->an_state = XGBE_AN_READY;
+	pdata->kr_state = XGBE_RX_BPA;
+	pdata->kx_state = XGBE_RX_BPA;
+
+	/* Re-enable auto-negotiation interrupt */
+	enable_irq(pdata->an_irq);
+
+	/* Set up advertisement registers based on current settings */
+	xgbe_an_init(pdata);
+
+	/* Enable and start auto-negotiation */
+	xgbe_restart_an(pdata);
+
+	return 0;
+}
+
+static int xgbe_phy_config_aneg(struct xgbe_prv_data *pdata)
+{
+	int ret;
+
+	mutex_lock(&pdata->an_mutex);
+
+	ret = __xgbe_phy_config_aneg(pdata);
+	if (ret)
+		set_bit(XGBE_LINK_ERR, &pdata->dev_state);
+	else
+		clear_bit(XGBE_LINK_ERR, &pdata->dev_state);
+
+	mutex_unlock(&pdata->an_mutex);
+
+	return ret;
+}
+
+static bool xgbe_phy_aneg_done(struct xgbe_prv_data *pdata)
+{
+	return (pdata->an_result == XGBE_AN_COMPLETE);
+}
+
+static void xgbe_check_link_timeout(struct xgbe_prv_data *pdata)
+{
+	unsigned long link_timeout;
+
+	link_timeout = pdata->link_check + (XGBE_LINK_TIMEOUT * HZ);
+	if (time_after(jiffies, link_timeout))
+		xgbe_phy_config_aneg(pdata);
+}
+
+static void xgbe_phy_status_force(struct xgbe_prv_data *pdata)
+{
+	if (xgbe_in_kr_mode(pdata)) {
+		pdata->phy.speed = SPEED_10000;
+	} else {
+		switch (pdata->speed_set) {
+		case XGBE_SPEEDSET_1000_10000:
+			pdata->phy.speed = SPEED_1000;
+			break;
+
+		case XGBE_SPEEDSET_2500_10000:
+			pdata->phy.speed = SPEED_2500;
+			break;
+		}
+	}
+	pdata->phy.duplex = DUPLEX_FULL;
+	pdata->phy.pause = 0;
+	pdata->phy.asym_pause = 0;
+}
+
+static void xgbe_phy_status_aneg(struct xgbe_prv_data *pdata)
+{
+	unsigned int ad_reg, lp_reg;
+
+	pdata->phy.lp_advertising = 0;
+
+	if ((pdata->phy.autoneg != AUTONEG_ENABLE) || pdata->parallel_detect)
+		return xgbe_phy_status_force(pdata);
+
+	pdata->phy.lp_advertising |= ADVERTISED_Autoneg;
+	pdata->phy.lp_advertising |= ADVERTISED_Backplane;
+
+	/* Compare Advertisement and Link Partner register 1 */
+	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
+	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA);
+	if (lp_reg & 0x400)
+		pdata->phy.lp_advertising |= ADVERTISED_Pause;
+	if (lp_reg & 0x800)
+		pdata->phy.lp_advertising |= ADVERTISED_Asym_Pause;
+
+	ad_reg &= lp_reg;
+	pdata->phy.pause = (ad_reg & 0x400) ? 1 : 0;
+	pdata->phy.asym_pause = (ad_reg & 0x800) ? 1 : 0;
+
+	/* Compare Advertisement and Link Partner register 2 */
+	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
+	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA + 1);
+	if (lp_reg & 0x80)
+		pdata->phy.lp_advertising |= ADVERTISED_10000baseKR_Full;
+	if (lp_reg & 0x20) {
+		switch (pdata->speed_set) {
+		case XGBE_SPEEDSET_1000_10000:
+			pdata->phy.lp_advertising |= ADVERTISED_1000baseKX_Full;
+			break;
+		case XGBE_SPEEDSET_2500_10000:
+			pdata->phy.lp_advertising |= ADVERTISED_2500baseX_Full;
+			break;
+		}
+	}
+
+	ad_reg &= lp_reg;
+	if (ad_reg & 0x80) {
+		pdata->phy.speed = SPEED_10000;
+		xgbe_set_mode(pdata, XGBE_MODE_KR);
+	} else if (ad_reg & 0x20) {
+		switch (pdata->speed_set) {
+		case XGBE_SPEEDSET_1000_10000:
+			pdata->phy.speed = SPEED_1000;
+			break;
+
+		case XGBE_SPEEDSET_2500_10000:
+			pdata->phy.speed = SPEED_2500;
+			break;
+		}
+
+		xgbe_set_mode(pdata, XGBE_MODE_KX);
+	} else {
+		pdata->phy.speed = SPEED_UNKNOWN;
+	}
+
+	/* Compare Advertisement and Link Partner register 3 */
+	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
+	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA + 2);
+	if (lp_reg & 0xc000)
+		pdata->phy.lp_advertising |= ADVERTISED_10000baseR_FEC;
+
+	pdata->phy.duplex = DUPLEX_FULL;
+}
+
+static void xgbe_phy_status(struct xgbe_prv_data *pdata)
+{
+	unsigned int reg, link_aneg;
+
+	if (test_bit(XGBE_LINK_ERR, &pdata->dev_state)) {
+		if (test_and_clear_bit(XGBE_LINK, &pdata->dev_state))
+			netif_carrier_off(pdata->netdev);
+
+		pdata->phy.link = 0;
+		goto adjust_link;
+	}
+
+	link_aneg = (pdata->phy.autoneg == AUTONEG_ENABLE);
+
+	/* Get the link status. Link status is latched low, so read
+	 * once to clear and then read again to get current state
+	 */
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
+	pdata->phy.link = (reg & MDIO_STAT1_LSTATUS) ? 1 : 0;
+
+	if (pdata->phy.link) {
+		if (link_aneg && !xgbe_phy_aneg_done(pdata)) {
+			xgbe_check_link_timeout(pdata);
+			return;
+		}
+
+		xgbe_phy_status_aneg(pdata);
+
+		if (test_bit(XGBE_LINK_INIT, &pdata->dev_state))
+			clear_bit(XGBE_LINK_INIT, &pdata->dev_state);
+
+		if (!test_bit(XGBE_LINK, &pdata->dev_state)) {
+			set_bit(XGBE_LINK, &pdata->dev_state);
+			netif_carrier_on(pdata->netdev);
+		}
+	} else {
+		if (test_bit(XGBE_LINK_INIT, &pdata->dev_state)) {
+			xgbe_check_link_timeout(pdata);
+
+			if (link_aneg)
+				return;
+		}
+
+		xgbe_phy_status_aneg(pdata);
+
+		if (test_bit(XGBE_LINK, &pdata->dev_state)) {
+			clear_bit(XGBE_LINK, &pdata->dev_state);
+			netif_carrier_off(pdata->netdev);
+		}
+	}
+
+adjust_link:
+	xgbe_phy_adjust_link(pdata);
+}
+
+static void xgbe_phy_stop(struct xgbe_prv_data *pdata)
+{
+	/* Disable auto-negotiation */
+	xgbe_disable_an(pdata);
+
+	/* Disable auto-negotiation interrupts */
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_INTMASK, 0);
+
+	devm_free_irq(pdata->dev, pdata->an_irq, pdata);
+
+	pdata->phy.link = 0;
+	if (test_and_clear_bit(XGBE_LINK, &pdata->dev_state))
+		netif_carrier_off(pdata->netdev);
+
+	xgbe_phy_adjust_link(pdata);
+}
+
+static int xgbe_phy_start(struct xgbe_prv_data *pdata)
+{
+	struct net_device *netdev = pdata->netdev;
+	int ret;
+
+	ret = devm_request_irq(pdata->dev, pdata->an_irq,
+			       xgbe_an_isr, 0, pdata->an_name,
+			       pdata);
+	if (ret) {
+		netdev_err(netdev, "phy irq request failed\n");
+		return ret;
+	}
+
+	/* Set initial mode - call the mode setting routines
+	 * directly to ensure we are properly configured
+	 */
+	if (pdata->phy.advertising & ADVERTISED_10000baseKR_Full) {
+		xgbe_xgmii_mode(pdata);
+	} else if (pdata->phy.advertising & ADVERTISED_1000baseKX_Full) {
+		xgbe_gmii_mode(pdata);
+	} else if (pdata->phy.advertising & ADVERTISED_2500baseX_Full) {
+		xgbe_gmii_2500_mode(pdata);
+	} else {
+		ret = -EINVAL;
+		goto err_irq;
+	}
+
+	/* Set up advertisement registers based on current settings */
+	xgbe_an_init(pdata);
+
+	/* Enable auto-negotiation interrupts */
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_INTMASK, 0x07);
+
+	return xgbe_phy_config_aneg(pdata);
+
+err_irq:
+	devm_free_irq(pdata->dev, pdata->an_irq, pdata);
+
+	return ret;
+}
+
+static int xgbe_phy_reset(struct xgbe_prv_data *pdata)
+{
+	unsigned int count, reg;
+
+	reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL1);
+	reg |= MDIO_CTRL1_RESET;
+	XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL1, reg);
+
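+	/* Poll for the self-clearing reset bit, up to ~1 second (50 * 20ms) */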
+	count = 50;
+	do {
+		msleep(20);
+		reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL1);
+	} while ((reg & MDIO_CTRL1_RESET) && --count);
+
+	if (reg & MDIO_CTRL1_RESET)
+		return -ETIMEDOUT;
+
+	/* Disable auto-negotiation for now */
+	xgbe_disable_an(pdata);
+
+	/* Clear auto-negotiation interrupts */
+	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_INT, 0);
+
+	return 0;
+}
+
+static void xgbe_dump_phy_registers(struct xgbe_prv_data *pdata)
 {
 	struct device *dev = pdata->dev;
-	struct phy_device *phydev = pdata->phydev;
-	int i;
 
 	dev_dbg(dev, "\n************* PHY Reg dump **********************\n");
 
@@ -194,122 +1182,59 @@ void xgbe_dump_phy_registers(struct xgbe_prv_data *pdata)
 		MDIO_AN_COMP_STAT,
 		XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_COMP_STAT));
 
-	dev_dbg(dev, "MMD Device Mask = %#x\n",
-		phydev->c45_ids.devices_in_package);
-	for (i = 0; i < ARRAY_SIZE(phydev->c45_ids.device_ids); i++)
-		dev_dbg(dev, "  MMD %d: ID = %#08x\n", i,
-			phydev->c45_ids.device_ids[i]);
-
 	dev_dbg(dev, "\n*************************************************\n");
 }
 
-int xgbe_mdio_register(struct xgbe_prv_data *pdata)
+static void xgbe_phy_init(struct xgbe_prv_data *pdata)
 {
-	struct mii_bus *mii;
-	struct phy_device *phydev;
-	int ret = 0;
-
-	DBGPR("-->xgbe_mdio_register\n");
-
-	mii = mdiobus_alloc();
-	if (!mii) {
-		dev_err(pdata->dev, "mdiobus_alloc failed\n");
-		return -ENOMEM;
-	}
-
-	/* Register on the MDIO bus (don't probe any PHYs) */
-	mii->name = XGBE_PHY_NAME;
-	mii->read = xgbe_mdio_read;
-	mii->write = xgbe_mdio_write;
-	snprintf(mii->id, sizeof(mii->id), "%s", pdata->mii_bus_id);
-	mii->priv = pdata;
-	mii->phy_mask = ~0;
-	mii->parent = pdata->dev;
-	ret = mdiobus_register(mii);
-	if (ret) {
-		dev_err(pdata->dev, "mdiobus_register failed\n");
-		goto err_mdiobus_alloc;
-	}
-	if (netif_msg_drv(pdata))
-		dev_dbg(pdata->dev, "mdiobus_register succeeded for %s\n",
-			pdata->mii_bus_id);
-
-	/* Probe the PCS using Clause 45 */
-	phydev = get_phy_device(mii, XGBE_PRTAD, true);
-	if (IS_ERR(phydev) || !phydev ||
-	    !phydev->c45_ids.device_ids[MDIO_MMD_PCS]) {
-		dev_err(pdata->dev, "get_phy_device failed\n");
-		ret = phydev ? PTR_ERR(phydev) : -ENOLINK;
-		goto err_mdiobus_register;
-	}
-	request_module(MDIO_MODULE_PREFIX MDIO_ID_FMT,
-		       MDIO_ID_ARGS(phydev->c45_ids.device_ids[MDIO_MMD_PCS]));
+	mutex_init(&pdata->an_mutex);
+	INIT_WORK(&pdata->an_irq_work, xgbe_an_irq_work);
+	INIT_WORK(&pdata->an_work, xgbe_an_state_machine);
+	pdata->mdio_mmd = MDIO_MMD_PCS;
 
-	ret = phy_device_register(phydev);
-	if (ret) {
-		dev_err(pdata->dev, "phy_device_register failed\n");
-		goto err_phy_device;
-	}
-	if (!phydev->dev.driver) {
-		dev_err(pdata->dev, "phy driver probe failed\n");
-		ret = -EIO;
-		goto err_phy_device;
+	/* Initialize supported features */
+	pdata->phy.supported = SUPPORTED_Autoneg;
+	pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
+	pdata->phy.supported |= SUPPORTED_Backplane;
+	pdata->phy.supported |= SUPPORTED_10000baseKR_Full;
+	switch (pdata->speed_set) {
+	case XGBE_SPEEDSET_1000_10000:
+		pdata->phy.supported |= SUPPORTED_1000baseKX_Full;
+		break;
+	case XGBE_SPEEDSET_2500_10000:
+		pdata->phy.supported |= SUPPORTED_2500baseX_Full;
+		break;
 	}
 
-	/* Add a reference to the PHY driver so it can't be unloaded */
-	pdata->phy_module = phydev->dev.driver->owner;
-	if (!try_module_get(pdata->phy_module)) {
-		dev_err(pdata->dev, "try_module_get failed\n");
-		ret = -EIO;
-		goto err_phy_device;
-	}
+	pdata->fec_ability = XMDIO_READ(pdata, MDIO_MMD_PMAPMD,
+					MDIO_PMA_10GBR_FECABLE);
+	pdata->fec_ability &= (MDIO_PMA_10GBR_FECABLE_ABLE |
+			       MDIO_PMA_10GBR_FECABLE_ERRABLE);
+	if (pdata->fec_ability & MDIO_PMA_10GBR_FECABLE_ABLE)
+		pdata->phy.supported |= SUPPORTED_10000baseR_FEC;
 
-	pdata->mii = mii;
-	pdata->mdio_mmd = MDIO_MMD_PCS;
+	pdata->phy.advertising = pdata->phy.supported;
 
-	phydev->autoneg = pdata->default_autoneg;
-	if (phydev->autoneg == AUTONEG_DISABLE) {
-		phydev->speed = pdata->default_speed;
-		phydev->duplex = DUPLEX_FULL;
+	pdata->phy.address = 0;
 
-		phydev->advertising &= ~ADVERTISED_Autoneg;
-	}
+	pdata->phy.autoneg = AUTONEG_ENABLE;
+	pdata->phy.speed = SPEED_UNKNOWN;
+	pdata->phy.duplex = DUPLEX_UNKNOWN;
 
-	pdata->phydev = phydev;
+	pdata->phy.link = 0;
 
 	if (netif_msg_drv(pdata))
 		xgbe_dump_phy_registers(pdata);
-
-	DBGPR("<--xgbe_mdio_register\n");
-
-	return 0;
-
-err_phy_device:
-	phy_device_free(phydev);
-
-err_mdiobus_register:
-	mdiobus_unregister(mii);
-
-err_mdiobus_alloc:
-	mdiobus_free(mii);
-
-	return ret;
 }
 
-void xgbe_mdio_unregister(struct xgbe_prv_data *pdata)
+void xgbe_init_function_ptrs_phy(struct xgbe_phy_if *phy_if)
 {
-	DBGPR("-->xgbe_mdio_unregister\n");
-
-	pdata->phydev = NULL;
-
-	module_put(pdata->phy_module);
-	pdata->phy_module = NULL;
-
-	mdiobus_unregister(pdata->mii);
-	pdata->mii->priv = NULL;
+	phy_if->phy_init        = xgbe_phy_init;
 
-	mdiobus_free(pdata->mii);
-	pdata->mii = NULL;
+	phy_if->phy_reset       = xgbe_phy_reset;
+	phy_if->phy_start       = xgbe_phy_start;
+	phy_if->phy_stop        = xgbe_phy_stop;
 
-	DBGPR("<--xgbe_mdio_unregister\n");
+	phy_if->phy_status      = xgbe_phy_status;
+	phy_if->phy_config_aneg = xgbe_phy_config_aneg;
 }
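
(For reference: the new xgbe_phy_if hooks registered above follow the same
ops-table pattern the driver already uses for hw_if/desc_if. A minimal
stand-alone sketch of that pattern is below -- names are simplified, and the
probe/open/service-task call sites are assumed from the cover letter rather
than shown in this hunk:)

	#include <stdio.h>

	struct prv_data;			/* stand-in for struct xgbe_prv_data */

	struct phy_if {				/* mirrors struct xgbe_phy_if */
		void (*phy_init)(struct prv_data *);
		int  (*phy_start)(struct prv_data *);
		void (*phy_status)(struct prv_data *);
	};

	struct prv_data {
		struct phy_if phy_if;		/* embedded ops table */
		int link;
	};

	static void demo_phy_init(struct prv_data *pdata)  { pdata->link = 0; }
	static int  demo_phy_start(struct prv_data *pdata) { pdata->link = 1; return 0; }
	static void demo_phy_status(struct prv_data *pdata)
	{
		printf("link %s\n", pdata->link ? "up" : "down");
	}

	/* analogous to xgbe_init_function_ptrs_phy() */
	static void init_function_ptrs_phy(struct phy_if *phy_if)
	{
		phy_if->phy_init   = demo_phy_init;
		phy_if->phy_start  = demo_phy_start;
		phy_if->phy_status = demo_phy_status;
	}

	int main(void)
	{
		struct prv_data pdata;

		init_function_ptrs_phy(&pdata.phy_if);	/* filled once, presumably at probe */
		pdata.phy_if.phy_init(&pdata);		/* initial PHY setup */
		pdata.phy_if.phy_start(&pdata);		/* bringing the device up */
		pdata.phy_if.phy_status(&pdata);	/* polled while the device is up */
		return 0;
	}

(In the real driver the rest of the code only ever calls through
pdata->phy_if, so the PHY implementation can change without touching the
callers.)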
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index e182b25..b6aecc9 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -129,7 +129,7 @@
 #include <net/dcbnl.h>
 
 #define XGBE_DRV_NAME		"amd-xgbe"
-#define XGBE_DRV_VERSION	"1.0.0-a"
+#define XGBE_DRV_VERSION	"1.0.1"
 #define XGBE_DRV_DESC		"AMD 10 Gigabit Ethernet Driver"
 
 /* Descriptor related defines */
@@ -178,14 +178,17 @@
 #define XGMAC_JUMBO_PACKET_MTU	9000
 #define XGMAC_MAX_JUMBO_PACKET	9018
 
-/* MDIO bus phy name */
-#define XGBE_PHY_NAME		"amd_xgbe_phy"
-#define XGBE_PRTAD		0
-
 /* Common property names */
 #define XGBE_MAC_ADDR_PROPERTY	"mac-address"
 #define XGBE_PHY_MODE_PROPERTY	"phy-mode"
 #define XGBE_DMA_IRQS_PROPERTY	"amd,per-channel-interrupt"
+#define XGBE_SPEEDSET_PROPERTY	"amd,speed-set"
+#define XGBE_BLWC_PROPERTY	"amd,serdes-blwc"
+#define XGBE_CDR_RATE_PROPERTY	"amd,serdes-cdr-rate"
+#define XGBE_PQ_SKEW_PROPERTY	"amd,serdes-pq-skew"
+#define XGBE_TX_AMP_PROPERTY	"amd,serdes-tx-amp"
+#define XGBE_DFE_CFG_PROPERTY	"amd,serdes-dfe-tap-config"
+#define XGBE_DFE_ENA_PROPERTY	"amd,serdes-dfe-tap-enable"
 
 /* Device-tree clock names */
 #define XGBE_DMA_CLOCK		"dma_clk"
@@ -241,6 +244,49 @@
 #define XGBE_RSS_LOOKUP_TABLE_TYPE	0
 #define XGBE_RSS_HASH_KEY_TYPE		1
 
+/* Auto-negotiation */
+#define XGBE_AN_MS_TIMEOUT		500
+#define XGBE_LINK_TIMEOUT		10
+
+#define XGBE_AN_INT_CMPLT		0x01
+#define XGBE_AN_INC_LINK		0x02
+#define XGBE_AN_PG_RCV			0x04
+#define XGBE_AN_INT_MASK		0x07
+
+/* Rate-change complete wait/retry count */
+#define XGBE_RATECHANGE_COUNT		500
+
+/* Default SerDes settings */
+#define XGBE_SPEED_10000_BLWC		0
+#define XGBE_SPEED_10000_CDR		0x7
+#define XGBE_SPEED_10000_PLL		0x1
+#define XGBE_SPEED_10000_PQ		0x12
+#define XGBE_SPEED_10000_RATE		0x0
+#define XGBE_SPEED_10000_TXAMP		0xa
+#define XGBE_SPEED_10000_WORD		0x7
+#define XGBE_SPEED_10000_DFE_TAP_CONFIG	0x1
+#define XGBE_SPEED_10000_DFE_TAP_ENABLE	0x7f
+
+#define XGBE_SPEED_2500_BLWC		1
+#define XGBE_SPEED_2500_CDR		0x2
+#define XGBE_SPEED_2500_PLL		0x0
+#define XGBE_SPEED_2500_PQ		0xa
+#define XGBE_SPEED_2500_RATE		0x1
+#define XGBE_SPEED_2500_TXAMP		0xf
+#define XGBE_SPEED_2500_WORD		0x1
+#define XGBE_SPEED_2500_DFE_TAP_CONFIG	0x3
+#define XGBE_SPEED_2500_DFE_TAP_ENABLE	0x0
+
+#define XGBE_SPEED_1000_BLWC		1
+#define XGBE_SPEED_1000_CDR		0x2
+#define XGBE_SPEED_1000_PLL		0x0
+#define XGBE_SPEED_1000_PQ		0xa
+#define XGBE_SPEED_1000_RATE		0x3
+#define XGBE_SPEED_1000_TXAMP		0xf
+#define XGBE_SPEED_1000_WORD		0x1
+#define XGBE_SPEED_1000_DFE_TAP_CONFIG	0x3
+#define XGBE_SPEED_1000_DFE_TAP_ENABLE	0x0
+
 struct xgbe_prv_data;
 
 struct xgbe_packet_data {
@@ -412,6 +458,13 @@ struct xgbe_channel {
 	struct xgbe_ring *rx_ring;
 } ____cacheline_aligned;
 
+enum xgbe_state {
+	XGBE_DOWN,
+	XGBE_LINK,
+	XGBE_LINK_INIT,
+	XGBE_LINK_ERR,
+};
+
 enum xgbe_int {
 	XGMAC_INT_DMA_CH_SR_TI,
 	XGMAC_INT_DMA_CH_SR_TPS,
@@ -443,6 +496,55 @@ enum xgbe_mtl_fifo_size {
 	XGMAC_MTL_FIFO_SIZE_256K = 0x3ff,
 };
 
+enum xgbe_speed {
+	XGBE_SPEED_1000 = 0,
+	XGBE_SPEED_2500,
+	XGBE_SPEED_10000,
+	XGBE_SPEEDS,
+};
+
+enum xgbe_an {
+	XGBE_AN_READY = 0,
+	XGBE_AN_PAGE_RECEIVED,
+	XGBE_AN_INCOMPAT_LINK,
+	XGBE_AN_COMPLETE,
+	XGBE_AN_NO_LINK,
+	XGBE_AN_ERROR,
+};
+
+enum xgbe_rx {
+	XGBE_RX_BPA = 0,
+	XGBE_RX_XNP,
+	XGBE_RX_COMPLETE,
+	XGBE_RX_ERROR,
+};
+
+enum xgbe_mode {
+	XGBE_MODE_KR = 0,
+	XGBE_MODE_KX,
+};
+
+enum xgbe_speedset {
+	XGBE_SPEEDSET_1000_10000 = 0,
+	XGBE_SPEEDSET_2500_10000,
+};
+
+struct xgbe_phy {
+	u32 supported;
+	u32 advertising;
+	u32 lp_advertising;
+
+	int address;
+
+	int autoneg;
+	int speed;
+	int duplex;
+	int pause;
+	int asym_pause;
+
+	int link;
+};
+
 struct xgbe_mmc_stats {
 	/* Tx Stats */
 	u64 txoctetcount_gb;
@@ -594,6 +696,20 @@ struct xgbe_hw_if {
 	int (*set_rss_lookup_table)(struct xgbe_prv_data *, const u32 *);
 };
 
+struct xgbe_phy_if {
+	/* For initial PHY setup */
+	void (*phy_init)(struct xgbe_prv_data *);
+
+	/* For PHY support when setting device up/down */
+	int (*phy_reset)(struct xgbe_prv_data *);
+	int (*phy_start)(struct xgbe_prv_data *);
+	void (*phy_stop)(struct xgbe_prv_data *);
+
+	/* For PHY support while device is up */
+	void (*phy_status)(struct xgbe_prv_data *);
+	int (*phy_config_aneg)(struct xgbe_prv_data *);
+};
+
 struct xgbe_desc_if {
 	int (*alloc_ring_resources)(struct xgbe_prv_data *);
 	void (*free_ring_resources)(struct xgbe_prv_data *);
@@ -663,6 +779,9 @@ struct xgbe_prv_data {
 	/* XGMAC/XPCS related mmio registers */
 	void __iomem *xgmac_regs;	/* XGMAC CSRs */
 	void __iomem *xpcs_regs;	/* XPCS MMD registers */
+	void __iomem *rxtx_regs;	/* SerDes Rx/Tx CSRs */
+	void __iomem *sir0_regs;	/* SerDes integration registers (1/2) */
+	void __iomem *sir1_regs;	/* SerDes integration registers (2/2) */
 
 	/* Overall device lock */
 	spinlock_t lock;
@@ -673,10 +792,14 @@ struct xgbe_prv_data {
 	/* RSS addressing mutex */
 	struct mutex rss_mutex;
 
+	/* Flags representing xgbe_state */
+	unsigned long dev_state;
+
 	int dev_irq;
 	unsigned int per_channel_irq;
 
 	struct xgbe_hw_if hw_if;
+	struct xgbe_phy_if phy_if;
 	struct xgbe_desc_if desc_if;
 
 	/* AXI DMA settings */
@@ -685,6 +808,11 @@ struct xgbe_prv_data {
 	unsigned int arcache;
 	unsigned int awcache;
 
+	/* Service routine support */
+	struct workqueue_struct *dev_workqueue;
+	struct work_struct service_work;
+	struct timer_list service_timer;
+
 	/* Rings for Tx/Rx on a DMA channel */
 	struct xgbe_channel *channel;
 	unsigned int channel_count;
@@ -732,22 +860,6 @@ struct xgbe_prv_data {
 	u32 rss_table[XGBE_RSS_MAX_TABLE_SIZE];
 	u32 rss_options;
 
-	/* MDIO settings */
-	struct module *phy_module;
-	char *mii_bus_id;
-	struct mii_bus *mii;
-	int mdio_mmd;
-	struct phy_device *phydev;
-	int default_autoneg;
-	int default_speed;
-
-	/* Current PHY settings */
-	phy_interface_t phy_mode;
-	int phy_link;
-	int phy_speed;
-	unsigned int phy_tx_pause;
-	unsigned int phy_rx_pause;
-
 	/* Netdev related settings */
 	unsigned char mac_addr[ETH_ALEN];
 	netdev_features_t netdev_features;
@@ -794,6 +906,53 @@ struct xgbe_prv_data {
 	/* Network interface message level setting */
 	u32 msg_enable;
 
+	/* Current PHY settings */
+	phy_interface_t phy_mode;
+	int phy_link;
+	int phy_speed;
+	unsigned int phy_tx_pause;
+	unsigned int phy_rx_pause;
+
+	/* MDIO/PHY related settings */
+	struct xgbe_phy phy;
+	int mdio_mmd;
+	unsigned long link_check;
+
+	char an_name[IFNAMSIZ + 32];
+	struct workqueue_struct *an_workqueue;
+
+	int an_irq;
+	struct work_struct an_irq_work;
+
+	unsigned int speed_set;
+
+	/* SerDes UEFI configurable settings.
+	 *   Switching between modes/speeds requires new values for some
+	 *   SerDes settings.  The values can be supplied as device
+	 *   properties in array format.  The first array entry is for
+	 *   1GbE, second for 2.5GbE and third for 10GbE
+	 */
+	u32 serdes_blwc[XGBE_SPEEDS];
+	u32 serdes_cdr_rate[XGBE_SPEEDS];
+	u32 serdes_pq_skew[XGBE_SPEEDS];
+	u32 serdes_tx_amp[XGBE_SPEEDS];
+	u32 serdes_dfe_tap_cfg[XGBE_SPEEDS];
+	u32 serdes_dfe_tap_ena[XGBE_SPEEDS];
+
+	/* Auto-negotiation state machine support */
+	struct mutex an_mutex;
+	enum xgbe_an an_result;
+	enum xgbe_an an_state;
+	enum xgbe_rx kr_state;
+	enum xgbe_rx kx_state;
+	struct work_struct an_work;
+	unsigned int an_supported;
+	unsigned int parallel_detect;
+	unsigned int fec_ability;
+	unsigned long an_start;
+
+	unsigned int lpm_ctrl;		/* CTRL1 for resume */
+
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *xgbe_debugfs;
 
@@ -807,6 +966,7 @@ struct xgbe_prv_data {
 /* Function prototypes*/
 
 void xgbe_init_function_ptrs_dev(struct xgbe_hw_if *);
+void xgbe_init_function_ptrs_phy(struct xgbe_phy_if *);
 void xgbe_init_function_ptrs_desc(struct xgbe_desc_if *);
 struct net_device_ops *xgbe_get_netdev_ops(void);
 struct ethtool_ops *xgbe_get_ethtool_ops(void);
@@ -814,9 +974,6 @@ struct ethtool_ops *xgbe_get_ethtool_ops(void);
 const struct dcbnl_rtnl_ops *xgbe_get_dcbnl_ops(void);
 #endif
 
-int xgbe_mdio_register(struct xgbe_prv_data *);
-void xgbe_mdio_unregister(struct xgbe_prv_data *);
-void xgbe_dump_phy_registers(struct xgbe_prv_data *);
 void xgbe_ptp_register(struct xgbe_prv_data *);
 void xgbe_ptp_unregister(struct xgbe_prv_data *);
 void xgbe_dump_tx_desc(struct xgbe_prv_data *, struct xgbe_ring *,
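
(The per-speed SerDes arrays added to xgbe_prv_data above are indexed in the
1GbE/2.5GbE/10GbE order described in their comment. A tiny stand-alone
illustration of that indexing, using the default TXAMP values from xgbe.h --
device-property parsing and the actual SerDes writes are omitted and the
names here are hypothetical:)

	#include <stdio.h>

	/* Mirrors enum xgbe_speed: first entry 1GbE, second 2.5GbE, third 10GbE */
	enum demo_speed { DEMO_SPEED_1000 = 0, DEMO_SPEED_2500, DEMO_SPEED_10000, DEMO_SPEEDS };

	int main(void)
	{
		/* Defaults from XGBE_SPEED_{1000,2500,10000}_TXAMP in xgbe.h */
		unsigned int serdes_tx_amp[DEMO_SPEEDS] = { 0xf, 0xf, 0xa };

		/* Switching the PHY to KR (10G) would pick the third entry */
		printf("10G tx amplitude setting: %#x\n", serdes_tx_amp[DEMO_SPEED_10000]);

		return 0;
	}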
diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
index 8fadaa1..7c0cb87 100644
--- a/drivers/net/phy/Kconfig
+++ b/drivers/net/phy/Kconfig
@@ -24,12 +24,6 @@ config AMD_PHY
 	---help---
 	  Currently supports the am79c874
 
-config AMD_XGBE_PHY
-	tristate "Driver for the AMD 10GbE (amd-xgbe) PHYs"
-	depends on (OF || ACPI) && HAS_IOMEM
-	---help---
-	  Currently supports the AMD 10GbE PHY
-
 config MARVELL_PHY
 	tristate "Drivers for Marvell PHYs"
 	---help---
diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile
index 501ea76..e97e7f9 100644
--- a/drivers/net/phy/Makefile
+++ b/drivers/net/phy/Makefile
@@ -33,5 +33,4 @@ obj-$(CONFIG_MDIO_BUS_MUX_GPIO)	+= mdio-mux-gpio.o
 obj-$(CONFIG_MDIO_BUS_MUX_MMIOREG) += mdio-mux-mmioreg.o
 obj-$(CONFIG_MDIO_SUN4I)	+= mdio-sun4i.o
 obj-$(CONFIG_MDIO_MOXART)	+= mdio-moxart.o
-obj-$(CONFIG_AMD_XGBE_PHY)	+= amd-xgbe-phy.o
 obj-$(CONFIG_MDIO_BCM_UNIMAC)	+= mdio-bcm-unimac.o
diff --git a/drivers/net/phy/amd-xgbe-phy.c b/drivers/net/phy/amd-xgbe-phy.c
deleted file mode 100644
index fb276f6..0000000
--- a/drivers/net/phy/amd-xgbe-phy.c
+++ /dev/null
@@ -1,1862 +0,0 @@
-/*
- * AMD 10Gb Ethernet PHY driver
- *
- * This file is available to you under your choice of the following two
- * licenses:
- *
- * License 1: GPLv2
- *
- * Copyright (c) 2014 Advanced Micro Devices, Inc.
- *
- * This file is free software; you may copy, redistribute and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 2 of the License, or (at
- * your option) any later version.
- *
- * This file is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- *
- *
- * License 2: Modified BSD
- *
- * Copyright (c) 2014 Advanced Micro Devices, Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Advanced Micro Devices, Inc. nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
- * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <linux/kernel.h>
-#include <linux/device.h>
-#include <linux/platform_device.h>
-#include <linux/string.h>
-#include <linux/errno.h>
-#include <linux/unistd.h>
-#include <linux/slab.h>
-#include <linux/interrupt.h>
-#include <linux/init.h>
-#include <linux/delay.h>
-#include <linux/workqueue.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
-#include <linux/mm.h>
-#include <linux/module.h>
-#include <linux/mii.h>
-#include <linux/ethtool.h>
-#include <linux/phy.h>
-#include <linux/mdio.h>
-#include <linux/io.h>
-#include <linux/of.h>
-#include <linux/of_platform.h>
-#include <linux/of_device.h>
-#include <linux/uaccess.h>
-#include <linux/bitops.h>
-#include <linux/property.h>
-#include <linux/acpi.h>
-#include <linux/jiffies.h>
-
-MODULE_AUTHOR("Tom Lendacky <thomas.lendacky@amd.com>");
-MODULE_LICENSE("Dual BSD/GPL");
-MODULE_VERSION("1.0.0-a");
-MODULE_DESCRIPTION("AMD 10GbE (amd-xgbe) PHY driver");
-
-#define XGBE_PHY_ID	0x000162d0
-#define XGBE_PHY_MASK	0xfffffff0
-
-#define XGBE_PHY_SPEEDSET_PROPERTY	"amd,speed-set"
-#define XGBE_PHY_BLWC_PROPERTY		"amd,serdes-blwc"
-#define XGBE_PHY_CDR_RATE_PROPERTY	"amd,serdes-cdr-rate"
-#define XGBE_PHY_PQ_SKEW_PROPERTY	"amd,serdes-pq-skew"
-#define XGBE_PHY_TX_AMP_PROPERTY	"amd,serdes-tx-amp"
-#define XGBE_PHY_DFE_CFG_PROPERTY	"amd,serdes-dfe-tap-config"
-#define XGBE_PHY_DFE_ENA_PROPERTY	"amd,serdes-dfe-tap-enable"
-
-#define XGBE_PHY_SPEEDS			3
-#define XGBE_PHY_SPEED_1000		0
-#define XGBE_PHY_SPEED_2500		1
-#define XGBE_PHY_SPEED_10000		2
-
-#define XGBE_AN_MS_TIMEOUT		500
-
-#define XGBE_AN_INT_CMPLT		0x01
-#define XGBE_AN_INC_LINK		0x02
-#define XGBE_AN_PG_RCV			0x04
-#define XGBE_AN_INT_MASK		0x07
-
-#define XNP_MCF_NULL_MESSAGE		0x001
-#define XNP_ACK_PROCESSED		BIT(12)
-#define XNP_MP_FORMATTED		BIT(13)
-#define XNP_NP_EXCHANGE			BIT(15)
-
-#define XGBE_PHY_RATECHANGE_COUNT	500
-
-#define XGBE_PHY_KR_TRAINING_START	0x01
-#define XGBE_PHY_KR_TRAINING_ENABLE	0x02
-
-#define XGBE_PHY_FEC_ENABLE		0x01
-#define XGBE_PHY_FEC_FORWARD		0x02
-#define XGBE_PHY_FEC_MASK		0x03
-
-#ifndef MDIO_PMA_10GBR_PMD_CTRL
-#define MDIO_PMA_10GBR_PMD_CTRL		0x0096
-#endif
-
-#ifndef MDIO_PMA_10GBR_FEC_ABILITY
-#define MDIO_PMA_10GBR_FEC_ABILITY	0x00aa
-#endif
-
-#ifndef MDIO_PMA_10GBR_FEC_CTRL
-#define MDIO_PMA_10GBR_FEC_CTRL		0x00ab
-#endif
-
-#ifndef MDIO_AN_XNP
-#define MDIO_AN_XNP			0x0016
-#endif
-
-#ifndef MDIO_AN_LPX
-#define MDIO_AN_LPX			0x0019
-#endif
-
-#ifndef MDIO_AN_INTMASK
-#define MDIO_AN_INTMASK			0x8001
-#endif
-
-#ifndef MDIO_AN_INT
-#define MDIO_AN_INT			0x8002
-#endif
-
-#ifndef MDIO_CTRL1_SPEED1G
-#define MDIO_CTRL1_SPEED1G		(MDIO_CTRL1_SPEED10G & ~BMCR_SPEED100)
-#endif
-
-/* SerDes integration register offsets */
-#define SIR0_KR_RT_1			0x002c
-#define SIR0_STATUS			0x0040
-#define SIR1_SPEED			0x0000
-
-/* SerDes integration register entry bit positions and sizes */
-#define SIR0_KR_RT_1_RESET_INDEX	11
-#define SIR0_KR_RT_1_RESET_WIDTH	1
-#define SIR0_STATUS_RX_READY_INDEX	0
-#define SIR0_STATUS_RX_READY_WIDTH	1
-#define SIR0_STATUS_TX_READY_INDEX	8
-#define SIR0_STATUS_TX_READY_WIDTH	1
-#define SIR1_SPEED_CDR_RATE_INDEX	12
-#define SIR1_SPEED_CDR_RATE_WIDTH	4
-#define SIR1_SPEED_DATARATE_INDEX	4
-#define SIR1_SPEED_DATARATE_WIDTH	2
-#define SIR1_SPEED_PLLSEL_INDEX		3
-#define SIR1_SPEED_PLLSEL_WIDTH		1
-#define SIR1_SPEED_RATECHANGE_INDEX	6
-#define SIR1_SPEED_RATECHANGE_WIDTH	1
-#define SIR1_SPEED_TXAMP_INDEX		8
-#define SIR1_SPEED_TXAMP_WIDTH		4
-#define SIR1_SPEED_WORDMODE_INDEX	0
-#define SIR1_SPEED_WORDMODE_WIDTH	3
-
-#define SPEED_10000_BLWC		0
-#define SPEED_10000_CDR			0x7
-#define SPEED_10000_PLL			0x1
-#define SPEED_10000_PQ			0x12
-#define SPEED_10000_RATE		0x0
-#define SPEED_10000_TXAMP		0xa
-#define SPEED_10000_WORD		0x7
-#define SPEED_10000_DFE_TAP_CONFIG	0x1
-#define SPEED_10000_DFE_TAP_ENABLE	0x7f
-
-#define SPEED_2500_BLWC			1
-#define SPEED_2500_CDR			0x2
-#define SPEED_2500_PLL			0x0
-#define SPEED_2500_PQ			0xa
-#define SPEED_2500_RATE			0x1
-#define SPEED_2500_TXAMP		0xf
-#define SPEED_2500_WORD			0x1
-#define SPEED_2500_DFE_TAP_CONFIG	0x3
-#define SPEED_2500_DFE_TAP_ENABLE	0x0
-
-#define SPEED_1000_BLWC			1
-#define SPEED_1000_CDR			0x2
-#define SPEED_1000_PLL			0x0
-#define SPEED_1000_PQ			0xa
-#define SPEED_1000_RATE			0x3
-#define SPEED_1000_TXAMP		0xf
-#define SPEED_1000_WORD			0x1
-#define SPEED_1000_DFE_TAP_CONFIG	0x3
-#define SPEED_1000_DFE_TAP_ENABLE	0x0
-
-/* SerDes RxTx register offsets */
-#define RXTX_REG6			0x0018
-#define RXTX_REG20			0x0050
-#define RXTX_REG22			0x0058
-#define RXTX_REG114			0x01c8
-#define RXTX_REG129			0x0204
-
-/* SerDes RxTx register entry bit positions and sizes */
-#define RXTX_REG6_RESETB_RXD_INDEX	8
-#define RXTX_REG6_RESETB_RXD_WIDTH	1
-#define RXTX_REG20_BLWC_ENA_INDEX	2
-#define RXTX_REG20_BLWC_ENA_WIDTH	1
-#define RXTX_REG114_PQ_REG_INDEX	9
-#define RXTX_REG114_PQ_REG_WIDTH	7
-#define RXTX_REG129_RXDFE_CONFIG_INDEX	14
-#define RXTX_REG129_RXDFE_CONFIG_WIDTH	2
-
-/* Bit setting and getting macros
- *  The get macro will extract the current bit field value from within
- *  the variable
- *
- *  The set macro will clear the current bit field value within the
- *  variable and then set the bit field of the variable to the
- *  specified value
- */
-#define GET_BITS(_var, _index, _width)					\
-	(((_var) >> (_index)) & ((0x1 << (_width)) - 1))
-
-#define SET_BITS(_var, _index, _width, _val)				\
-do {									\
-	(_var) &= ~(((0x1 << (_width)) - 1) << (_index));		\
-	(_var) |= (((_val) & ((0x1 << (_width)) - 1)) << (_index));	\
-} while (0)
-
-#define XSIR_GET_BITS(_var, _prefix, _field)				\
-	GET_BITS((_var),						\
-		 _prefix##_##_field##_INDEX,				\
-		 _prefix##_##_field##_WIDTH)
-
-#define XSIR_SET_BITS(_var, _prefix, _field, _val)			\
-	SET_BITS((_var),						\
-		 _prefix##_##_field##_INDEX,				\
-		 _prefix##_##_field##_WIDTH, (_val))
-
-/* Macros for reading or writing SerDes integration registers
- *  The ioread macros will get bit fields or full values using the
- *  register definitions formed using the input names
- *
- *  The iowrite macros will set bit fields or full values using the
- *  register definitions formed using the input names
- */
-#define XSIR0_IOREAD(_priv, _reg)					\
-	ioread16((_priv)->sir0_regs + _reg)
-
-#define XSIR0_IOREAD_BITS(_priv, _reg, _field)				\
-	GET_BITS(XSIR0_IOREAD((_priv), _reg),				\
-		 _reg##_##_field##_INDEX,				\
-		 _reg##_##_field##_WIDTH)
-
-#define XSIR0_IOWRITE(_priv, _reg, _val)				\
-	iowrite16((_val), (_priv)->sir0_regs + _reg)
-
-#define XSIR0_IOWRITE_BITS(_priv, _reg, _field, _val)			\
-do {									\
-	u16 reg_val = XSIR0_IOREAD((_priv), _reg);			\
-	SET_BITS(reg_val,						\
-		 _reg##_##_field##_INDEX,				\
-		 _reg##_##_field##_WIDTH, (_val));			\
-	XSIR0_IOWRITE((_priv), _reg, reg_val);				\
-} while (0)
-
-#define XSIR1_IOREAD(_priv, _reg)					\
-	ioread16((_priv)->sir1_regs + _reg)
-
-#define XSIR1_IOREAD_BITS(_priv, _reg, _field)				\
-	GET_BITS(XSIR1_IOREAD((_priv), _reg),				\
-		 _reg##_##_field##_INDEX,				\
-		 _reg##_##_field##_WIDTH)
-
-#define XSIR1_IOWRITE(_priv, _reg, _val)				\
-	iowrite16((_val), (_priv)->sir1_regs + _reg)
-
-#define XSIR1_IOWRITE_BITS(_priv, _reg, _field, _val)			\
-do {									\
-	u16 reg_val = XSIR1_IOREAD((_priv), _reg);			\
-	SET_BITS(reg_val,						\
-		 _reg##_##_field##_INDEX,				\
-		 _reg##_##_field##_WIDTH, (_val));			\
-	XSIR1_IOWRITE((_priv), _reg, reg_val);				\
-} while (0)
-
-/* Macros for reading or writing SerDes RxTx registers
- *  The ioread macros will get bit fields or full values using the
- *  register definitions formed using the input names
- *
- *  The iowrite macros will set bit fields or full values using the
- *  register definitions formed using the input names
- */
-#define XRXTX_IOREAD(_priv, _reg)					\
-	ioread16((_priv)->rxtx_regs + _reg)
-
-#define XRXTX_IOREAD_BITS(_priv, _reg, _field)				\
-	GET_BITS(XRXTX_IOREAD((_priv), _reg),				\
-		 _reg##_##_field##_INDEX,				\
-		 _reg##_##_field##_WIDTH)
-
-#define XRXTX_IOWRITE(_priv, _reg, _val)				\
-	iowrite16((_val), (_priv)->rxtx_regs + _reg)
-
-#define XRXTX_IOWRITE_BITS(_priv, _reg, _field, _val)			\
-do {									\
-	u16 reg_val = XRXTX_IOREAD((_priv), _reg);			\
-	SET_BITS(reg_val,						\
-		 _reg##_##_field##_INDEX,				\
-		 _reg##_##_field##_WIDTH, (_val));			\
-	XRXTX_IOWRITE((_priv), _reg, reg_val);				\
-} while (0)
-
-static const u32 amd_xgbe_phy_serdes_blwc[] = {
-	SPEED_1000_BLWC,
-	SPEED_2500_BLWC,
-	SPEED_10000_BLWC,
-};
-
-static const u32 amd_xgbe_phy_serdes_cdr_rate[] = {
-	SPEED_1000_CDR,
-	SPEED_2500_CDR,
-	SPEED_10000_CDR,
-};
-
-static const u32 amd_xgbe_phy_serdes_pq_skew[] = {
-	SPEED_1000_PQ,
-	SPEED_2500_PQ,
-	SPEED_10000_PQ,
-};
-
-static const u32 amd_xgbe_phy_serdes_tx_amp[] = {
-	SPEED_1000_TXAMP,
-	SPEED_2500_TXAMP,
-	SPEED_10000_TXAMP,
-};
-
-static const u32 amd_xgbe_phy_serdes_dfe_tap_cfg[] = {
-	SPEED_1000_DFE_TAP_CONFIG,
-	SPEED_2500_DFE_TAP_CONFIG,
-	SPEED_10000_DFE_TAP_CONFIG,
-};
-
-static const u32 amd_xgbe_phy_serdes_dfe_tap_ena[] = {
-	SPEED_1000_DFE_TAP_ENABLE,
-	SPEED_2500_DFE_TAP_ENABLE,
-	SPEED_10000_DFE_TAP_ENABLE,
-};
-
-enum amd_xgbe_phy_an {
-	AMD_XGBE_AN_READY = 0,
-	AMD_XGBE_AN_PAGE_RECEIVED,
-	AMD_XGBE_AN_INCOMPAT_LINK,
-	AMD_XGBE_AN_COMPLETE,
-	AMD_XGBE_AN_NO_LINK,
-	AMD_XGBE_AN_ERROR,
-};
-
-enum amd_xgbe_phy_rx {
-	AMD_XGBE_RX_BPA = 0,
-	AMD_XGBE_RX_XNP,
-	AMD_XGBE_RX_COMPLETE,
-	AMD_XGBE_RX_ERROR,
-};
-
-enum amd_xgbe_phy_mode {
-	AMD_XGBE_MODE_KR,
-	AMD_XGBE_MODE_KX,
-};
-
-enum amd_xgbe_phy_speedset {
-	AMD_XGBE_PHY_SPEEDSET_1000_10000 = 0,
-	AMD_XGBE_PHY_SPEEDSET_2500_10000,
-};
-
-struct amd_xgbe_phy_priv {
-	struct platform_device *pdev;
-	struct acpi_device *adev;
-	struct device *dev;
-
-	struct phy_device *phydev;
-
-	/* SerDes related mmio resources */
-	struct resource *rxtx_res;
-	struct resource *sir0_res;
-	struct resource *sir1_res;
-
-	/* SerDes related mmio registers */
-	void __iomem *rxtx_regs;	/* SerDes Rx/Tx CSRs */
-	void __iomem *sir0_regs;	/* SerDes integration registers (1/2) */
-	void __iomem *sir1_regs;	/* SerDes integration registers (2/2) */
-
-	int an_irq;
-	char an_irq_name[IFNAMSIZ + 32];
-	struct work_struct an_irq_work;
-	unsigned int an_irq_allocated;
-
-	unsigned int speed_set;
-
-	/* SerDes UEFI configurable settings.
-	 *   Switching between modes/speeds requires new values for some
-	 *   SerDes settings.  The values can be supplied as device
-	 *   properties in array format.  The first array entry is for
-	 *   1GbE, second for 2.5GbE and third for 10GbE
-	 */
-	u32 serdes_blwc[XGBE_PHY_SPEEDS];
-	u32 serdes_cdr_rate[XGBE_PHY_SPEEDS];
-	u32 serdes_pq_skew[XGBE_PHY_SPEEDS];
-	u32 serdes_tx_amp[XGBE_PHY_SPEEDS];
-	u32 serdes_dfe_tap_cfg[XGBE_PHY_SPEEDS];
-	u32 serdes_dfe_tap_ena[XGBE_PHY_SPEEDS];
-
-	/* Auto-negotiation state machine support */
-	struct mutex an_mutex;
-	enum amd_xgbe_phy_an an_result;
-	enum amd_xgbe_phy_an an_state;
-	enum amd_xgbe_phy_rx kr_state;
-	enum amd_xgbe_phy_rx kx_state;
-	struct work_struct an_work;
-	struct workqueue_struct *an_workqueue;
-	unsigned int an_supported;
-	unsigned int parallel_detect;
-	unsigned int fec_ability;
-	unsigned long an_start;
-
-	unsigned int lpm_ctrl;		/* CTRL1 for resume */
-};
-
-static int amd_xgbe_an_enable_kr_training(struct phy_device *phydev)
-{
-	int ret;
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL);
-	if (ret < 0)
-		return ret;
-
-	ret |= XGBE_PHY_KR_TRAINING_ENABLE;
-	phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, ret);
-
-	return 0;
-}
-
-static int amd_xgbe_an_disable_kr_training(struct phy_device *phydev)
-{
-	int ret;
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL);
-	if (ret < 0)
-		return ret;
-
-	ret &= ~XGBE_PHY_KR_TRAINING_ENABLE;
-	phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, ret);
-
-	return 0;
-}
-
-static int amd_xgbe_phy_pcs_power_cycle(struct phy_device *phydev)
-{
-	int ret;
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
-	if (ret < 0)
-		return ret;
-
-	ret |= MDIO_CTRL1_LPOWER;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
-
-	usleep_range(75, 100);
-
-	ret &= ~MDIO_CTRL1_LPOWER;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
-
-	return 0;
-}
-
-static void amd_xgbe_phy_serdes_start_ratechange(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-
-	/* Assert Rx and Tx ratechange */
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, RATECHANGE, 1);
-}
-
-static void amd_xgbe_phy_serdes_complete_ratechange(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	unsigned int wait;
-	u16 status;
-
-	/* Release Rx and Tx ratechange */
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, RATECHANGE, 0);
-
-	/* Wait for Rx and Tx ready */
-	wait = XGBE_PHY_RATECHANGE_COUNT;
-	while (wait--) {
-		usleep_range(50, 75);
-
-		status = XSIR0_IOREAD(priv, SIR0_STATUS);
-		if (XSIR_GET_BITS(status, SIR0_STATUS, RX_READY) &&
-		    XSIR_GET_BITS(status, SIR0_STATUS, TX_READY))
-			goto rx_reset;
-	}
-
-	netdev_dbg(phydev->attached_dev, "SerDes rx/tx not ready (%#hx)\n",
-		   status);
-
-rx_reset:
-	/* Perform Rx reset for the DFE changes */
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RESETB_RXD, 0);
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RESETB_RXD, 1);
-}
-
-static int amd_xgbe_phy_xgmii_mode(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	int ret;
-
-	/* Enable KR training */
-	ret = amd_xgbe_an_enable_kr_training(phydev);
-	if (ret < 0)
-		return ret;
-
-	/* Set PCS to KR/10G speed */
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2);
-	if (ret < 0)
-		return ret;
-
-	ret &= ~MDIO_PCS_CTRL2_TYPE;
-	ret |= MDIO_PCS_CTRL2_10GBR;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2, ret);
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
-	if (ret < 0)
-		return ret;
-
-	ret &= ~MDIO_CTRL1_SPEEDSEL;
-	ret |= MDIO_CTRL1_SPEED10G;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
-
-	ret = amd_xgbe_phy_pcs_power_cycle(phydev);
-	if (ret < 0)
-		return ret;
-
-	/* Set SerDes to 10G speed */
-	amd_xgbe_phy_serdes_start_ratechange(phydev);
-
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, DATARATE, SPEED_10000_RATE);
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, WORDMODE, SPEED_10000_WORD);
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, PLLSEL, SPEED_10000_PLL);
-
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, CDR_RATE,
-			   priv->serdes_cdr_rate[XGBE_PHY_SPEED_10000]);
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, TXAMP,
-			   priv->serdes_tx_amp[XGBE_PHY_SPEED_10000]);
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA,
-			   priv->serdes_blwc[XGBE_PHY_SPEED_10000]);
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG,
-			   priv->serdes_pq_skew[XGBE_PHY_SPEED_10000]);
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG129, RXDFE_CONFIG,
-			   priv->serdes_dfe_tap_cfg[XGBE_PHY_SPEED_10000]);
-	XRXTX_IOWRITE(priv, RXTX_REG22,
-		      priv->serdes_dfe_tap_ena[XGBE_PHY_SPEED_10000]);
-
-	amd_xgbe_phy_serdes_complete_ratechange(phydev);
-
-	return 0;
-}
-
-static int amd_xgbe_phy_gmii_2500_mode(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	int ret;
-
-	/* Disable KR training */
-	ret = amd_xgbe_an_disable_kr_training(phydev);
-	if (ret < 0)
-		return ret;
-
-	/* Set PCS to KX/1G speed */
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2);
-	if (ret < 0)
-		return ret;
-
-	ret &= ~MDIO_PCS_CTRL2_TYPE;
-	ret |= MDIO_PCS_CTRL2_10GBX;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2, ret);
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
-	if (ret < 0)
-		return ret;
-
-	ret &= ~MDIO_CTRL1_SPEEDSEL;
-	ret |= MDIO_CTRL1_SPEED1G;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
-
-	ret = amd_xgbe_phy_pcs_power_cycle(phydev);
-	if (ret < 0)
-		return ret;
-
-	/* Set SerDes to 2.5G speed */
-	amd_xgbe_phy_serdes_start_ratechange(phydev);
-
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, DATARATE, SPEED_2500_RATE);
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, WORDMODE, SPEED_2500_WORD);
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, PLLSEL, SPEED_2500_PLL);
-
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, CDR_RATE,
-			   priv->serdes_cdr_rate[XGBE_PHY_SPEED_2500]);
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, TXAMP,
-			   priv->serdes_tx_amp[XGBE_PHY_SPEED_2500]);
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA,
-			   priv->serdes_blwc[XGBE_PHY_SPEED_2500]);
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG,
-			   priv->serdes_pq_skew[XGBE_PHY_SPEED_2500]);
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG129, RXDFE_CONFIG,
-			   priv->serdes_dfe_tap_cfg[XGBE_PHY_SPEED_2500]);
-	XRXTX_IOWRITE(priv, RXTX_REG22,
-		      priv->serdes_dfe_tap_ena[XGBE_PHY_SPEED_2500]);
-
-	amd_xgbe_phy_serdes_complete_ratechange(phydev);
-
-	return 0;
-}
-
-static int amd_xgbe_phy_gmii_mode(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	int ret;
-
-	/* Disable KR training */
-	ret = amd_xgbe_an_disable_kr_training(phydev);
-	if (ret < 0)
-		return ret;
-
-	/* Set PCS to KX/1G speed */
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2);
-	if (ret < 0)
-		return ret;
-
-	ret &= ~MDIO_PCS_CTRL2_TYPE;
-	ret |= MDIO_PCS_CTRL2_10GBX;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2, ret);
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
-	if (ret < 0)
-		return ret;
-
-	ret &= ~MDIO_CTRL1_SPEEDSEL;
-	ret |= MDIO_CTRL1_SPEED1G;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
-
-	ret = amd_xgbe_phy_pcs_power_cycle(phydev);
-	if (ret < 0)
-		return ret;
-
-	/* Set SerDes to 1G speed */
-	amd_xgbe_phy_serdes_start_ratechange(phydev);
-
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, DATARATE, SPEED_1000_RATE);
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, WORDMODE, SPEED_1000_WORD);
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, PLLSEL, SPEED_1000_PLL);
-
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, CDR_RATE,
-			   priv->serdes_cdr_rate[XGBE_PHY_SPEED_1000]);
-	XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, TXAMP,
-			   priv->serdes_tx_amp[XGBE_PHY_SPEED_1000]);
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA,
-			   priv->serdes_blwc[XGBE_PHY_SPEED_1000]);
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG,
-			   priv->serdes_pq_skew[XGBE_PHY_SPEED_1000]);
-	XRXTX_IOWRITE_BITS(priv, RXTX_REG129, RXDFE_CONFIG,
-			   priv->serdes_dfe_tap_cfg[XGBE_PHY_SPEED_1000]);
-	XRXTX_IOWRITE(priv, RXTX_REG22,
-		      priv->serdes_dfe_tap_ena[XGBE_PHY_SPEED_1000]);
-
-	amd_xgbe_phy_serdes_complete_ratechange(phydev);
-
-	return 0;
-}
-
-static int amd_xgbe_phy_cur_mode(struct phy_device *phydev,
-				 enum amd_xgbe_phy_mode *mode)
-{
-	int ret;
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2);
-	if (ret < 0)
-		return ret;
-
-	if ((ret & MDIO_PCS_CTRL2_TYPE) == MDIO_PCS_CTRL2_10GBR)
-		*mode = AMD_XGBE_MODE_KR;
-	else
-		*mode = AMD_XGBE_MODE_KX;
-
-	return 0;
-}
-
-static bool amd_xgbe_phy_in_kr_mode(struct phy_device *phydev)
-{
-	enum amd_xgbe_phy_mode mode;
-
-	if (amd_xgbe_phy_cur_mode(phydev, &mode))
-		return false;
-
-	return (mode == AMD_XGBE_MODE_KR);
-}
-
-static int amd_xgbe_phy_switch_mode(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	int ret;
-
-	/* If we are in KR switch to KX, and vice-versa */
-	if (amd_xgbe_phy_in_kr_mode(phydev)) {
-		if (priv->speed_set == AMD_XGBE_PHY_SPEEDSET_1000_10000)
-			ret = amd_xgbe_phy_gmii_mode(phydev);
-		else
-			ret = amd_xgbe_phy_gmii_2500_mode(phydev);
-	} else {
-		ret = amd_xgbe_phy_xgmii_mode(phydev);
-	}
-
-	return ret;
-}
-
-static int amd_xgbe_phy_set_mode(struct phy_device *phydev,
-				 enum amd_xgbe_phy_mode mode)
-{
-	enum amd_xgbe_phy_mode cur_mode;
-	int ret;
-
-	ret = amd_xgbe_phy_cur_mode(phydev, &cur_mode);
-	if (ret)
-		return ret;
-
-	if (mode != cur_mode)
-		ret = amd_xgbe_phy_switch_mode(phydev);
-
-	return ret;
-}
-
-static int amd_xgbe_phy_set_an(struct phy_device *phydev, bool enable,
-			       bool restart)
-{
-	int ret;
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1);
-	if (ret < 0)
-		return ret;
-
-	ret &= ~MDIO_AN_CTRL1_ENABLE;
-
-	if (enable)
-		ret |= MDIO_AN_CTRL1_ENABLE;
-
-	if (restart)
-		ret |= MDIO_AN_CTRL1_RESTART;
-
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1, ret);
-
-	return 0;
-}
-
-static int amd_xgbe_phy_restart_an(struct phy_device *phydev)
-{
-	return amd_xgbe_phy_set_an(phydev, true, true);
-}
-
-static int amd_xgbe_phy_disable_an(struct phy_device *phydev)
-{
-	return amd_xgbe_phy_set_an(phydev, false, false);
-}
-
-static enum amd_xgbe_phy_an amd_xgbe_an_tx_training(struct phy_device *phydev,
-						    enum amd_xgbe_phy_rx *state)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	int ad_reg, lp_reg, ret;
-
-	*state = AMD_XGBE_RX_COMPLETE;
-
-	/* If we're not in KR mode then we're done */
-	if (!amd_xgbe_phy_in_kr_mode(phydev))
-		return AMD_XGBE_AN_PAGE_RECEIVED;
-
-	/* Enable/Disable FEC */
-	ad_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
-	if (ad_reg < 0)
-		return AMD_XGBE_AN_ERROR;
-
-	lp_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA + 2);
-	if (lp_reg < 0)
-		return AMD_XGBE_AN_ERROR;
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_FEC_CTRL);
-	if (ret < 0)
-		return AMD_XGBE_AN_ERROR;
-
-	ret &= ~XGBE_PHY_FEC_MASK;
-	if ((ad_reg & 0xc000) && (lp_reg & 0xc000))
-		ret |= priv->fec_ability;
-
-	phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_FEC_CTRL, ret);
-
-	/* Start KR training */
-	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL);
-	if (ret < 0)
-		return AMD_XGBE_AN_ERROR;
-
-	if (ret & XGBE_PHY_KR_TRAINING_ENABLE) {
-		XSIR0_IOWRITE_BITS(priv, SIR0_KR_RT_1, RESET, 1);
-
-		ret |= XGBE_PHY_KR_TRAINING_START;
-		phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL,
-			      ret);
-
-		XSIR0_IOWRITE_BITS(priv, SIR0_KR_RT_1, RESET, 0);
-	}
-
-	return AMD_XGBE_AN_PAGE_RECEIVED;
-}
-
-static enum amd_xgbe_phy_an amd_xgbe_an_tx_xnp(struct phy_device *phydev,
-					       enum amd_xgbe_phy_rx *state)
-{
-	u16 msg;
-
-	*state = AMD_XGBE_RX_XNP;
-
-	msg = XNP_MCF_NULL_MESSAGE;
-	msg |= XNP_MP_FORMATTED;
-
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP + 2, 0);
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP + 1, 0);
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP, msg);
-
-	return AMD_XGBE_AN_PAGE_RECEIVED;
-}
-
-static enum amd_xgbe_phy_an amd_xgbe_an_rx_bpa(struct phy_device *phydev,
-					       enum amd_xgbe_phy_rx *state)
-{
-	unsigned int link_support;
-	int ret, ad_reg, lp_reg;
-
-	/* Read Base Ability register 2 first */
-	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA + 1);
-	if (ret < 0)
-		return AMD_XGBE_AN_ERROR;
-
-	/* Check for a supported mode, otherwise restart in a different one */
-	link_support = amd_xgbe_phy_in_kr_mode(phydev) ? 0x80 : 0x20;
-	if (!(ret & link_support))
-		return AMD_XGBE_AN_INCOMPAT_LINK;
-
-	/* Check Extended Next Page support */
-	ad_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
-	if (ad_reg < 0)
-		return AMD_XGBE_AN_ERROR;
-
-	lp_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA);
-	if (lp_reg < 0)
-		return AMD_XGBE_AN_ERROR;
-
-	return ((ad_reg & XNP_NP_EXCHANGE) || (lp_reg & XNP_NP_EXCHANGE)) ?
-	       amd_xgbe_an_tx_xnp(phydev, state) :
-	       amd_xgbe_an_tx_training(phydev, state);
-}
-
-static enum amd_xgbe_phy_an amd_xgbe_an_rx_xnp(struct phy_device *phydev,
-					       enum amd_xgbe_phy_rx *state)
-{
-	int ad_reg, lp_reg;
-
-	/* Check Extended Next Page support */
-	ad_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP);
-	if (ad_reg < 0)
-		return AMD_XGBE_AN_ERROR;
-
-	lp_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPX);
-	if (lp_reg < 0)
-		return AMD_XGBE_AN_ERROR;
-
-	return ((ad_reg & XNP_NP_EXCHANGE) || (lp_reg & XNP_NP_EXCHANGE)) ?
-	       amd_xgbe_an_tx_xnp(phydev, state) :
-	       amd_xgbe_an_tx_training(phydev, state);
-}
-
-static enum amd_xgbe_phy_an amd_xgbe_an_page_received(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	enum amd_xgbe_phy_rx *state;
-	unsigned long an_timeout;
-	int ret;
-
-	if (!priv->an_start) {
-		priv->an_start = jiffies;
-	} else {
-		an_timeout = priv->an_start +
-			     msecs_to_jiffies(XGBE_AN_MS_TIMEOUT);
-		if (time_after(jiffies, an_timeout)) {
-			/* Auto-negotiation timed out, reset state */
-			priv->kr_state = AMD_XGBE_RX_BPA;
-			priv->kx_state = AMD_XGBE_RX_BPA;
-
-			priv->an_start = jiffies;
-		}
-	}
-
-	state = amd_xgbe_phy_in_kr_mode(phydev) ? &priv->kr_state
-						: &priv->kx_state;
-
-	switch (*state) {
-	case AMD_XGBE_RX_BPA:
-		ret = amd_xgbe_an_rx_bpa(phydev, state);
-		break;
-
-	case AMD_XGBE_RX_XNP:
-		ret = amd_xgbe_an_rx_xnp(phydev, state);
-		break;
-
-	default:
-		ret = AMD_XGBE_AN_ERROR;
-	}
-
-	return ret;
-}
-
-static enum amd_xgbe_phy_an amd_xgbe_an_incompat_link(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	int ret;
-
-	/* Be sure we aren't looping trying to negotiate */
-	if (amd_xgbe_phy_in_kr_mode(phydev)) {
-		priv->kr_state = AMD_XGBE_RX_ERROR;
-
-		if (!(phydev->advertising & SUPPORTED_1000baseKX_Full) &&
-		    !(phydev->advertising & SUPPORTED_2500baseX_Full))
-			return AMD_XGBE_AN_NO_LINK;
-
-		if (priv->kx_state != AMD_XGBE_RX_BPA)
-			return AMD_XGBE_AN_NO_LINK;
-	} else {
-		priv->kx_state = AMD_XGBE_RX_ERROR;
-
-		if (!(phydev->advertising & SUPPORTED_10000baseKR_Full))
-			return AMD_XGBE_AN_NO_LINK;
-
-		if (priv->kr_state != AMD_XGBE_RX_BPA)
-			return AMD_XGBE_AN_NO_LINK;
-	}
-
-	ret = amd_xgbe_phy_disable_an(phydev);
-	if (ret)
-		return AMD_XGBE_AN_ERROR;
-
-	ret = amd_xgbe_phy_switch_mode(phydev);
-	if (ret)
-		return AMD_XGBE_AN_ERROR;
-
-	ret = amd_xgbe_phy_restart_an(phydev);
-	if (ret)
-		return AMD_XGBE_AN_ERROR;
-
-	return AMD_XGBE_AN_INCOMPAT_LINK;
-}
-
-static irqreturn_t amd_xgbe_an_isr(int irq, void *data)
-{
-	struct amd_xgbe_phy_priv *priv = (struct amd_xgbe_phy_priv *)data;
-
-	/* Interrupt reason must be read and cleared outside of IRQ context */
-	disable_irq_nosync(priv->an_irq);
-
-	queue_work(priv->an_workqueue, &priv->an_irq_work);
-
-	return IRQ_HANDLED;
-}
-
-static void amd_xgbe_an_irq_work(struct work_struct *work)
-{
-	struct amd_xgbe_phy_priv *priv = container_of(work,
-						      struct amd_xgbe_phy_priv,
-						      an_irq_work);
-
-	/* Avoid a race between enabling the IRQ and exiting the work by
-	 * waiting for the work to finish and then queueing it
-	 */
-	flush_work(&priv->an_work);
-	queue_work(priv->an_workqueue, &priv->an_work);
-}
-
-static void amd_xgbe_an_state_machine(struct work_struct *work)
-{
-	struct amd_xgbe_phy_priv *priv = container_of(work,
-						      struct amd_xgbe_phy_priv,
-						      an_work);
-	struct phy_device *phydev = priv->phydev;
-	enum amd_xgbe_phy_an cur_state = priv->an_state;
-	int int_reg, int_mask;
-
-	mutex_lock(&priv->an_mutex);
-
-	/* Read the interrupt */
-	int_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT);
-	if (!int_reg)
-		goto out;
-
-next_int:
-	if (int_reg < 0) {
-		priv->an_state = AMD_XGBE_AN_ERROR;
-		int_mask = XGBE_AN_INT_MASK;
-	} else if (int_reg & XGBE_AN_PG_RCV) {
-		priv->an_state = AMD_XGBE_AN_PAGE_RECEIVED;
-		int_mask = XGBE_AN_PG_RCV;
-	} else if (int_reg & XGBE_AN_INC_LINK) {
-		priv->an_state = AMD_XGBE_AN_INCOMPAT_LINK;
-		int_mask = XGBE_AN_INC_LINK;
-	} else if (int_reg & XGBE_AN_INT_CMPLT) {
-		priv->an_state = AMD_XGBE_AN_COMPLETE;
-		int_mask = XGBE_AN_INT_CMPLT;
-	} else {
-		priv->an_state = AMD_XGBE_AN_ERROR;
-		int_mask = 0;
-	}
-
-	/* Clear the interrupt to be processed */
-	int_reg &= ~int_mask;
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, int_reg);
-
-	priv->an_result = priv->an_state;
-
-again:
-	cur_state = priv->an_state;
-
-	switch (priv->an_state) {
-	case AMD_XGBE_AN_READY:
-		priv->an_supported = 0;
-		break;
-
-	case AMD_XGBE_AN_PAGE_RECEIVED:
-		priv->an_state = amd_xgbe_an_page_received(phydev);
-		priv->an_supported++;
-		break;
-
-	case AMD_XGBE_AN_INCOMPAT_LINK:
-		priv->an_supported = 0;
-		priv->parallel_detect = 0;
-		priv->an_state = amd_xgbe_an_incompat_link(phydev);
-		break;
-
-	case AMD_XGBE_AN_COMPLETE:
-		priv->parallel_detect = priv->an_supported ? 0 : 1;
-		netdev_dbg(phydev->attached_dev, "%s successful\n",
-			   priv->an_supported ? "Auto negotiation"
-					      : "Parallel detection");
-		break;
-
-	case AMD_XGBE_AN_NO_LINK:
-		break;
-
-	default:
-		priv->an_state = AMD_XGBE_AN_ERROR;
-	}
-
-	if (priv->an_state == AMD_XGBE_AN_NO_LINK) {
-		int_reg = 0;
-		phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
-	} else if (priv->an_state == AMD_XGBE_AN_ERROR) {
-		netdev_err(phydev->attached_dev,
-			   "error during auto-negotiation, state=%u\n",
-			   cur_state);
-
-		int_reg = 0;
-		phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
-	}
-
-	if (priv->an_state >= AMD_XGBE_AN_COMPLETE) {
-		priv->an_result = priv->an_state;
-		priv->an_state = AMD_XGBE_AN_READY;
-		priv->kr_state = AMD_XGBE_RX_BPA;
-		priv->kx_state = AMD_XGBE_RX_BPA;
-		priv->an_start = 0;
-	}
-
-	if (cur_state != priv->an_state)
-		goto again;
-
-	if (int_reg)
-		goto next_int;
-
-out:
-	enable_irq(priv->an_irq);
-
-	mutex_unlock(&priv->an_mutex);
-}
-
-static int amd_xgbe_an_init(struct phy_device *phydev)
-{
-	int ret;
-
-	/* Set up Advertisement register 3 first */
-	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
-	if (ret < 0)
-		return ret;
-
-	if (phydev->advertising & SUPPORTED_10000baseR_FEC)
-		ret |= 0xc000;
-	else
-		ret &= ~0xc000;
-
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2, ret);
-
-	/* Set up Advertisement register 2 next */
-	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
-	if (ret < 0)
-		return ret;
-
-	if (phydev->advertising & SUPPORTED_10000baseKR_Full)
-		ret |= 0x80;
-	else
-		ret &= ~0x80;
-
-	if ((phydev->advertising & SUPPORTED_1000baseKX_Full) ||
-	    (phydev->advertising & SUPPORTED_2500baseX_Full))
-		ret |= 0x20;
-	else
-		ret &= ~0x20;
-
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1, ret);
-
-	/* Set up Advertisement register 1 last */
-	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
-	if (ret < 0)
-		return ret;
-
-	if (phydev->advertising & SUPPORTED_Pause)
-		ret |= 0x400;
-	else
-		ret &= ~0x400;
-
-	if (phydev->advertising & SUPPORTED_Asym_Pause)
-		ret |= 0x800;
-	else
-		ret &= ~0x800;
-
-	/* We don't intend to perform XNP */
-	ret &= ~XNP_NP_EXCHANGE;
-
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE, ret);
-
-	return 0;
-}
-
-static int amd_xgbe_phy_soft_reset(struct phy_device *phydev)
-{
-	int count, ret;
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
-	if (ret < 0)
-		return ret;
-
-	ret |= MDIO_CTRL1_RESET;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
-
-	count = 50;
-	do {
-		msleep(20);
-		ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
-		if (ret < 0)
-			return ret;
-	} while ((ret & MDIO_CTRL1_RESET) && --count);
-
-	if (ret & MDIO_CTRL1_RESET)
-		return -ETIMEDOUT;
-
-	/* Disable auto-negotiation for now */
-	ret = amd_xgbe_phy_disable_an(phydev);
-	if (ret < 0)
-		return ret;
-
-	/* Clear auto-negotiation interrupts */
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
-
-	return 0;
-}
-
-static int amd_xgbe_phy_config_init(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	struct net_device *netdev = phydev->attached_dev;
-	int ret;
-
-	if (!priv->an_irq_allocated) {
-		/* Allocate the auto-negotiation workqueue and interrupt */
-		snprintf(priv->an_irq_name, sizeof(priv->an_irq_name) - 1,
-			 "%s-pcs", netdev_name(netdev));
-
-		priv->an_workqueue =
-			create_singlethread_workqueue(priv->an_irq_name);
-		if (!priv->an_workqueue) {
-			netdev_err(netdev, "phy workqueue creation failed\n");
-			return -ENOMEM;
-		}
-
-		ret = devm_request_irq(priv->dev, priv->an_irq,
-				       amd_xgbe_an_isr, 0, priv->an_irq_name,
-				       priv);
-		if (ret) {
-			netdev_err(netdev, "phy irq request failed\n");
-			destroy_workqueue(priv->an_workqueue);
-			return ret;
-		}
-
-		priv->an_irq_allocated = 1;
-	}
-
-	/* Set initial mode - call the mode setting routines
-	 * directly to insure we are properly configured
-	 */
-	if (phydev->advertising & SUPPORTED_10000baseKR_Full)
-		ret = amd_xgbe_phy_xgmii_mode(phydev);
-	else if (phydev->advertising & SUPPORTED_1000baseKX_Full)
-		ret = amd_xgbe_phy_gmii_mode(phydev);
-	else if (phydev->advertising & SUPPORTED_2500baseX_Full)
-		ret = amd_xgbe_phy_gmii_2500_mode(phydev);
-	else
-		ret = -EINVAL;
-	if (ret < 0)
-		return ret;
-
-	/* Set up advertisement registers based on current settings */
-	ret = amd_xgbe_an_init(phydev);
-	if (ret)
-		return ret;
-
-	/* Enable auto-negotiation interrupts */
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INTMASK, 0x07);
-
-	return 0;
-}
-
-static int amd_xgbe_phy_setup_forced(struct phy_device *phydev)
-{
-	int ret;
-
-	/* Disable auto-negotiation */
-	ret = amd_xgbe_phy_disable_an(phydev);
-	if (ret < 0)
-		return ret;
-
-	/* Validate/Set specified speed */
-	switch (phydev->speed) {
-	case SPEED_10000:
-		ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KR);
-		break;
-
-	case SPEED_2500:
-	case SPEED_1000:
-		ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KX);
-		break;
-
-	default:
-		ret = -EINVAL;
-	}
-
-	if (ret < 0)
-		return ret;
-
-	/* Validate duplex mode */
-	if (phydev->duplex != DUPLEX_FULL)
-		return -EINVAL;
-
-	phydev->pause = 0;
-	phydev->asym_pause = 0;
-
-	return 0;
-}
-
-static int __amd_xgbe_phy_config_aneg(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	u32 mmd_mask = phydev->c45_ids.devices_in_package;
-	int ret;
-
-	if (phydev->autoneg != AUTONEG_ENABLE)
-		return amd_xgbe_phy_setup_forced(phydev);
-
-	/* Make sure we have the AN MMD present */
-	if (!(mmd_mask & MDIO_DEVS_AN))
-		return -EINVAL;
-
-	/* Disable auto-negotiation interrupt */
-	disable_irq(priv->an_irq);
-
-	/* Start auto-negotiation in a supported mode */
-	if (phydev->advertising & SUPPORTED_10000baseKR_Full)
-		ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KR);
-	else if ((phydev->advertising & SUPPORTED_1000baseKX_Full) ||
-		 (phydev->advertising & SUPPORTED_2500baseX_Full))
-		ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KX);
-	else
-		ret = -EINVAL;
-	if (ret < 0) {
-		enable_irq(priv->an_irq);
-		return ret;
-	}
-
-	/* Disable and stop any in progress auto-negotiation */
-	ret = amd_xgbe_phy_disable_an(phydev);
-	if (ret < 0)
-		return ret;
-
-	/* Clear any auto-negotitation interrupts */
-	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
-
-	priv->an_result = AMD_XGBE_AN_READY;
-	priv->an_state = AMD_XGBE_AN_READY;
-	priv->kr_state = AMD_XGBE_RX_BPA;
-	priv->kx_state = AMD_XGBE_RX_BPA;
-
-	/* Re-enable auto-negotiation interrupt */
-	enable_irq(priv->an_irq);
-
-	/* Set up advertisement registers based on current settings */
-	ret = amd_xgbe_an_init(phydev);
-	if (ret)
-		return ret;
-
-	/* Enable and start auto-negotiation */
-	return amd_xgbe_phy_restart_an(phydev);
-}
-
-static int amd_xgbe_phy_config_aneg(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	int ret;
-
-	mutex_lock(&priv->an_mutex);
-
-	ret = __amd_xgbe_phy_config_aneg(phydev);
-
-	mutex_unlock(&priv->an_mutex);
-
-	return ret;
-}
-
-static int amd_xgbe_phy_aneg_done(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-
-	return (priv->an_result == AMD_XGBE_AN_COMPLETE);
-}
-
-static int amd_xgbe_phy_update_link(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	int ret;
-
-	/* If we're doing auto-negotiation don't report link down */
-	if (priv->an_state != AMD_XGBE_AN_READY) {
-		phydev->link = 1;
-		return 0;
-	}
-
-	/* Link status is latched low, so read once to clear
-	 * and then read again to get current state
-	 */
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_STAT1);
-	if (ret < 0)
-		return ret;
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_STAT1);
-	if (ret < 0)
-		return ret;
-
-	phydev->link = (ret & MDIO_STAT1_LSTATUS) ? 1 : 0;
-
-	return 0;
-}
-
-static int amd_xgbe_phy_read_status(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	u32 mmd_mask = phydev->c45_ids.devices_in_package;
-	int ret, ad_ret, lp_ret;
-
-	ret = amd_xgbe_phy_update_link(phydev);
-	if (ret)
-		return ret;
-
-	if ((phydev->autoneg == AUTONEG_ENABLE) &&
-	    !priv->parallel_detect) {
-		if (!(mmd_mask & MDIO_DEVS_AN))
-			return -EINVAL;
-
-		if (!amd_xgbe_phy_aneg_done(phydev))
-			return 0;
-
-		/* Compare Advertisement and Link Partner register 1 */
-		ad_ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
-		if (ad_ret < 0)
-			return ad_ret;
-		lp_ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA);
-		if (lp_ret < 0)
-			return lp_ret;
-
-		ad_ret &= lp_ret;
-		phydev->pause = (ad_ret & 0x400) ? 1 : 0;
-		phydev->asym_pause = (ad_ret & 0x800) ? 1 : 0;
-
-		/* Compare Advertisement and Link Partner register 2 */
-		ad_ret = phy_read_mmd(phydev, MDIO_MMD_AN,
-				      MDIO_AN_ADVERTISE + 1);
-		if (ad_ret < 0)
-			return ad_ret;
-		lp_ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA + 1);
-		if (lp_ret < 0)
-			return lp_ret;
-
-		ad_ret &= lp_ret;
-		if (ad_ret & 0x80) {
-			phydev->speed = SPEED_10000;
-			ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KR);
-			if (ret)
-				return ret;
-		} else {
-			switch (priv->speed_set) {
-			case AMD_XGBE_PHY_SPEEDSET_1000_10000:
-				phydev->speed = SPEED_1000;
-				break;
-
-			case AMD_XGBE_PHY_SPEEDSET_2500_10000:
-				phydev->speed = SPEED_2500;
-				break;
-			}
-
-			ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KX);
-			if (ret)
-				return ret;
-		}
-
-		phydev->duplex = DUPLEX_FULL;
-	} else {
-		if (amd_xgbe_phy_in_kr_mode(phydev)) {
-			phydev->speed = SPEED_10000;
-		} else {
-			switch (priv->speed_set) {
-			case AMD_XGBE_PHY_SPEEDSET_1000_10000:
-				phydev->speed = SPEED_1000;
-				break;
-
-			case AMD_XGBE_PHY_SPEEDSET_2500_10000:
-				phydev->speed = SPEED_2500;
-				break;
-			}
-		}
-		phydev->duplex = DUPLEX_FULL;
-		phydev->pause = 0;
-		phydev->asym_pause = 0;
-	}
-
-	return 0;
-}
-
-static int amd_xgbe_phy_suspend(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	int ret;
-
-	mutex_lock(&phydev->lock);
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
-	if (ret < 0)
-		goto unlock;
-
-	priv->lpm_ctrl = ret;
-
-	ret |= MDIO_CTRL1_LPOWER;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
-
-	ret = 0;
-
-unlock:
-	mutex_unlock(&phydev->lock);
-
-	return ret;
-}
-
-static int amd_xgbe_phy_resume(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-
-	mutex_lock(&phydev->lock);
-
-	priv->lpm_ctrl &= ~MDIO_CTRL1_LPOWER;
-	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, priv->lpm_ctrl);
-
-	mutex_unlock(&phydev->lock);
-
-	return 0;
-}
-
-static unsigned int amd_xgbe_phy_resource_count(struct platform_device *pdev,
-						unsigned int type)
-{
-	unsigned int count;
-	int i;
-
-	for (i = 0, count = 0; i < pdev->num_resources; i++) {
-		struct resource *r = &pdev->resource[i];
-
-		if (type == resource_type(r))
-			count++;
-	}
-
-	return count;
-}
-
-static int amd_xgbe_phy_probe(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv;
-	struct platform_device *phy_pdev;
-	struct device *dev, *phy_dev;
-	unsigned int phy_resnum, phy_irqnum;
-	int ret;
-
-	if (!phydev->bus || !phydev->bus->parent)
-		return -EINVAL;
-
-	dev = phydev->bus->parent;
-
-	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
-	if (!priv)
-		return -ENOMEM;
-
-	priv->pdev = to_platform_device(dev);
-	priv->adev = ACPI_COMPANION(dev);
-	priv->dev = dev;
-	priv->phydev = phydev;
-	mutex_init(&priv->an_mutex);
-	INIT_WORK(&priv->an_irq_work, amd_xgbe_an_irq_work);
-	INIT_WORK(&priv->an_work, amd_xgbe_an_state_machine);
-
-	if (!priv->adev || acpi_disabled) {
-		struct device_node *bus_node;
-		struct device_node *phy_node;
-
-		bus_node = priv->dev->of_node;
-		phy_node = of_parse_phandle(bus_node, "phy-handle", 0);
-		if (!phy_node) {
-			dev_err(dev, "unable to parse phy-handle\n");
-			ret = -EINVAL;
-			goto err_priv;
-		}
-
-		phy_pdev = of_find_device_by_node(phy_node);
-		of_node_put(phy_node);
-
-		if (!phy_pdev) {
-			dev_err(dev, "unable to obtain phy device\n");
-			ret = -EINVAL;
-			goto err_priv;
-		}
-
-		phy_resnum = 0;
-		phy_irqnum = 0;
-	} else {
-		/* In ACPI, the XGBE and PHY resources are the grouped
-		 * together with the PHY resources at the end
-		 */
-		phy_pdev = priv->pdev;
-		phy_resnum = amd_xgbe_phy_resource_count(phy_pdev,
-							 IORESOURCE_MEM) - 3;
-		phy_irqnum = amd_xgbe_phy_resource_count(phy_pdev,
-							 IORESOURCE_IRQ) - 1;
-	}
-	phy_dev = &phy_pdev->dev;
-
-	/* Get the device mmio areas */
-	priv->rxtx_res = platform_get_resource(phy_pdev, IORESOURCE_MEM,
-					       phy_resnum++);
-	priv->rxtx_regs = devm_ioremap_resource(dev, priv->rxtx_res);
-	if (IS_ERR(priv->rxtx_regs)) {
-		dev_err(dev, "rxtx ioremap failed\n");
-		ret = PTR_ERR(priv->rxtx_regs);
-		goto err_put;
-	}
-
-	priv->sir0_res = platform_get_resource(phy_pdev, IORESOURCE_MEM,
-					       phy_resnum++);
-	priv->sir0_regs = devm_ioremap_resource(dev, priv->sir0_res);
-	if (IS_ERR(priv->sir0_regs)) {
-		dev_err(dev, "sir0 ioremap failed\n");
-		ret = PTR_ERR(priv->sir0_regs);
-		goto err_rxtx;
-	}
-
-	priv->sir1_res = platform_get_resource(phy_pdev, IORESOURCE_MEM,
-					       phy_resnum++);
-	priv->sir1_regs = devm_ioremap_resource(dev, priv->sir1_res);
-	if (IS_ERR(priv->sir1_regs)) {
-		dev_err(dev, "sir1 ioremap failed\n");
-		ret = PTR_ERR(priv->sir1_regs);
-		goto err_sir0;
-	}
-
-	/* Get the auto-negotiation interrupt */
-	ret = platform_get_irq(phy_pdev, phy_irqnum);
-	if (ret < 0) {
-		dev_err(dev, "platform_get_irq failed\n");
-		goto err_sir1;
-	}
-	priv->an_irq = ret;
-
-	/* Get the device speed set property */
-	ret = device_property_read_u32(phy_dev, XGBE_PHY_SPEEDSET_PROPERTY,
-				       &priv->speed_set);
-	if (ret) {
-		dev_err(dev, "invalid %s property\n",
-			XGBE_PHY_SPEEDSET_PROPERTY);
-		goto err_sir1;
-	}
-
-	switch (priv->speed_set) {
-	case AMD_XGBE_PHY_SPEEDSET_1000_10000:
-	case AMD_XGBE_PHY_SPEEDSET_2500_10000:
-		break;
-	default:
-		dev_err(dev, "invalid %s property\n",
-			XGBE_PHY_SPEEDSET_PROPERTY);
-		ret = -EINVAL;
-		goto err_sir1;
-	}
-
-	if (device_property_present(phy_dev, XGBE_PHY_BLWC_PROPERTY)) {
-		ret = device_property_read_u32_array(phy_dev,
-						     XGBE_PHY_BLWC_PROPERTY,
-						     priv->serdes_blwc,
-						     XGBE_PHY_SPEEDS);
-		if (ret) {
-			dev_err(dev, "invalid %s property\n",
-				XGBE_PHY_BLWC_PROPERTY);
-			goto err_sir1;
-		}
-	} else {
-		memcpy(priv->serdes_blwc, amd_xgbe_phy_serdes_blwc,
-		       sizeof(priv->serdes_blwc));
-	}
-
-	if (device_property_present(phy_dev, XGBE_PHY_CDR_RATE_PROPERTY)) {
-		ret = device_property_read_u32_array(phy_dev,
-						     XGBE_PHY_CDR_RATE_PROPERTY,
-						     priv->serdes_cdr_rate,
-						     XGBE_PHY_SPEEDS);
-		if (ret) {
-			dev_err(dev, "invalid %s property\n",
-				XGBE_PHY_CDR_RATE_PROPERTY);
-			goto err_sir1;
-		}
-	} else {
-		memcpy(priv->serdes_cdr_rate, amd_xgbe_phy_serdes_cdr_rate,
-		       sizeof(priv->serdes_cdr_rate));
-	}
-
-	if (device_property_present(phy_dev, XGBE_PHY_PQ_SKEW_PROPERTY)) {
-		ret = device_property_read_u32_array(phy_dev,
-						     XGBE_PHY_PQ_SKEW_PROPERTY,
-						     priv->serdes_pq_skew,
-						     XGBE_PHY_SPEEDS);
-		if (ret) {
-			dev_err(dev, "invalid %s property\n",
-				XGBE_PHY_PQ_SKEW_PROPERTY);
-			goto err_sir1;
-		}
-	} else {
-		memcpy(priv->serdes_pq_skew, amd_xgbe_phy_serdes_pq_skew,
-		       sizeof(priv->serdes_pq_skew));
-	}
-
-	if (device_property_present(phy_dev, XGBE_PHY_TX_AMP_PROPERTY)) {
-		ret = device_property_read_u32_array(phy_dev,
-						     XGBE_PHY_TX_AMP_PROPERTY,
-						     priv->serdes_tx_amp,
-						     XGBE_PHY_SPEEDS);
-		if (ret) {
-			dev_err(dev, "invalid %s property\n",
-				XGBE_PHY_TX_AMP_PROPERTY);
-			goto err_sir1;
-		}
-	} else {
-		memcpy(priv->serdes_tx_amp, amd_xgbe_phy_serdes_tx_amp,
-		       sizeof(priv->serdes_tx_amp));
-	}
-
-	if (device_property_present(phy_dev, XGBE_PHY_DFE_CFG_PROPERTY)) {
-		ret = device_property_read_u32_array(phy_dev,
-						     XGBE_PHY_DFE_CFG_PROPERTY,
-						     priv->serdes_dfe_tap_cfg,
-						     XGBE_PHY_SPEEDS);
-		if (ret) {
-			dev_err(dev, "invalid %s property\n",
-				XGBE_PHY_DFE_CFG_PROPERTY);
-			goto err_sir1;
-		}
-	} else {
-		memcpy(priv->serdes_dfe_tap_cfg,
-		       amd_xgbe_phy_serdes_dfe_tap_cfg,
-		       sizeof(priv->serdes_dfe_tap_cfg));
-	}
-
-	if (device_property_present(phy_dev, XGBE_PHY_DFE_ENA_PROPERTY)) {
-		ret = device_property_read_u32_array(phy_dev,
-						     XGBE_PHY_DFE_ENA_PROPERTY,
-						     priv->serdes_dfe_tap_ena,
-						     XGBE_PHY_SPEEDS);
-		if (ret) {
-			dev_err(dev, "invalid %s property\n",
-				XGBE_PHY_DFE_ENA_PROPERTY);
-			goto err_sir1;
-		}
-	} else {
-		memcpy(priv->serdes_dfe_tap_ena,
-		       amd_xgbe_phy_serdes_dfe_tap_ena,
-		       sizeof(priv->serdes_dfe_tap_ena));
-	}
-
-	/* Initialize supported features */
-	phydev->supported = SUPPORTED_Autoneg;
-	phydev->supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-	phydev->supported |= SUPPORTED_Backplane;
-	phydev->supported |= SUPPORTED_10000baseKR_Full;
-	switch (priv->speed_set) {
-	case AMD_XGBE_PHY_SPEEDSET_1000_10000:
-		phydev->supported |= SUPPORTED_1000baseKX_Full;
-		break;
-	case AMD_XGBE_PHY_SPEEDSET_2500_10000:
-		phydev->supported |= SUPPORTED_2500baseX_Full;
-		break;
-	}
-
-	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_FEC_ABILITY);
-	if (ret < 0)
-		return ret;
-	priv->fec_ability = ret & XGBE_PHY_FEC_MASK;
-	if (priv->fec_ability & XGBE_PHY_FEC_ENABLE)
-		phydev->supported |= SUPPORTED_10000baseR_FEC;
-
-	phydev->advertising = phydev->supported;
-
-	phydev->priv = priv;
-
-	if (!priv->adev || acpi_disabled)
-		platform_device_put(phy_pdev);
-
-	return 0;
-
-err_sir1:
-	devm_iounmap(dev, priv->sir1_regs);
-	devm_release_mem_region(dev, priv->sir1_res->start,
-				resource_size(priv->sir1_res));
-
-err_sir0:
-	devm_iounmap(dev, priv->sir0_regs);
-	devm_release_mem_region(dev, priv->sir0_res->start,
-				resource_size(priv->sir0_res));
-
-err_rxtx:
-	devm_iounmap(dev, priv->rxtx_regs);
-	devm_release_mem_region(dev, priv->rxtx_res->start,
-				resource_size(priv->rxtx_res));
-
-err_put:
-	if (!priv->adev || acpi_disabled)
-		platform_device_put(phy_pdev);
-
-err_priv:
-	devm_kfree(dev, priv);
-
-	return ret;
-}
-
-static void amd_xgbe_phy_remove(struct phy_device *phydev)
-{
-	struct amd_xgbe_phy_priv *priv = phydev->priv;
-	struct device *dev = priv->dev;
-
-	if (priv->an_irq_allocated) {
-		devm_free_irq(dev, priv->an_irq, priv);
-
-		flush_workqueue(priv->an_workqueue);
-		destroy_workqueue(priv->an_workqueue);
-	}
-
-	/* Release resources */
-	devm_iounmap(dev, priv->sir1_regs);
-	devm_release_mem_region(dev, priv->sir1_res->start,
-				resource_size(priv->sir1_res));
-
-	devm_iounmap(dev, priv->sir0_regs);
-	devm_release_mem_region(dev, priv->sir0_res->start,
-				resource_size(priv->sir0_res));
-
-	devm_iounmap(dev, priv->rxtx_regs);
-	devm_release_mem_region(dev, priv->rxtx_res->start,
-				resource_size(priv->rxtx_res));
-
-	devm_kfree(dev, priv);
-}
-
-static int amd_xgbe_match_phy_device(struct phy_device *phydev)
-{
-	return phydev->c45_ids.device_ids[MDIO_MMD_PCS] == XGBE_PHY_ID;
-}
-
-static struct phy_driver amd_xgbe_phy_driver[] = {
-	{
-		.phy_id			= XGBE_PHY_ID,
-		.phy_id_mask		= XGBE_PHY_MASK,
-		.name			= "AMD XGBE PHY",
-		.features		= 0,
-		.flags			= PHY_IS_INTERNAL,
-		.probe			= amd_xgbe_phy_probe,
-		.remove			= amd_xgbe_phy_remove,
-		.soft_reset		= amd_xgbe_phy_soft_reset,
-		.config_init		= amd_xgbe_phy_config_init,
-		.suspend		= amd_xgbe_phy_suspend,
-		.resume			= amd_xgbe_phy_resume,
-		.config_aneg		= amd_xgbe_phy_config_aneg,
-		.aneg_done		= amd_xgbe_phy_aneg_done,
-		.read_status		= amd_xgbe_phy_read_status,
-		.match_phy_device	= amd_xgbe_match_phy_device,
-		.driver			= {
-			.owner = THIS_MODULE,
-		},
-	},
-};
-
-module_phy_driver(amd_xgbe_phy_driver);
-
-static struct mdio_device_id __maybe_unused amd_xgbe_phy_ids[] = {
-	{ XGBE_PHY_ID, XGBE_PHY_MASK },
-	{ }
-};
-MODULE_DEVICE_TABLE(mdio, amd_xgbe_phy_ids);

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH net-next v1 5/7] amd-xgbe: Support defining PHY resources in ETH device node
  2015-05-12 19:22 [PATCH net-next v1 0/7] amd-xgbe: AMD XGBE driver updates 2015-05-12 Tom Lendacky
                   ` (3 preceding siblings ...)
  2015-05-12 19:22 ` [PATCH net-next v1 4/7] amd-xgbe: Move the PHY support into amd-xgbe Tom Lendacky
@ 2015-05-12 19:23 ` Tom Lendacky
  2015-05-12 19:23 ` [PATCH net-next v1 6/7] amd-xgbe: Fix flow control setting logic Tom Lendacky
  2015-05-12 19:23 ` [PATCH net-next v1 7/7] amd-xgbe: Remove manual check and set of dma_mask pointer Tom Lendacky
  6 siblings, 0 replies; 12+ messages in thread
From: Tom Lendacky @ 2015-05-12 19:23 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Simplify the device tree support of the amd-xgbe driver by defining
the PHY-related resources within the ethernet device node. Backwards
compatibility with the original binding, which used a separate PHY
device node, is maintained.

Update the driver version to 1.0.2.
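
As an aside (not part of the patch), a minimal standalone sketch of the
lookup logic this patch adds to xgbe_of_get_phy_pdev() in xgbe-main.c;
the function name and includes below are illustrative rather than the
driver's exact symbols:

	#include <linux/device.h>
	#include <linux/of.h>
	#include <linux/of_platform.h>
	#include <linux/platform_device.h>

	/* If "phy-handle" is present, the PHY resources live in a separate
	 * device node (old binding); otherwise they are appended to the
	 * ethernet node's own reg/interrupts lists (new binding) and the
	 * ethernet platform device itself is reused.
	 */
	static struct platform_device *example_get_phy_pdev(struct platform_device *pdev)
	{
		struct device_node *phy_node;
		struct platform_device *phy_pdev;

		phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
		if (phy_node) {
			phy_pdev = of_find_device_by_node(phy_node);
			of_node_put(phy_node);
		} else {
			get_device(&pdev->dev);
			phy_pdev = pdev;
		}

		return phy_pdev;
	}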

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 Documentation/devicetree/bindings/net/amd-xgbe.txt |   41 ++++++--------------
 drivers/net/ethernet/amd/xgbe/xgbe-main.c          |   23 +++++++----
 drivers/net/ethernet/amd/xgbe/xgbe.h               |    2 -
 3 files changed, 29 insertions(+), 37 deletions(-)

diff --git a/Documentation/devicetree/bindings/net/amd-xgbe.txt b/Documentation/devicetree/bindings/net/amd-xgbe.txt
index 5dbc55a..4bb624a 100644
--- a/Documentation/devicetree/bindings/net/amd-xgbe.txt
+++ b/Documentation/devicetree/bindings/net/amd-xgbe.txt
@@ -1,16 +1,20 @@
 * AMD 10GbE driver (amd-xgbe)
 
-Required properties (ethernet device):
+Required properties:
 - compatible: Should be "amd,xgbe-seattle-v1a"
 - reg: Address and length of the register sets for the device
    - MAC registers
    - PCS registers
+   - SerDes Rx/Tx registers
+   - SerDes integration registers (1/2)
+   - SerDes integration registers (2/2)
 - interrupt-parent: Should be the phandle for the interrupt controller
   that services interrupts for this device
 - interrupts: Should contain the amd-xgbe interrupt(s). The first interrupt
   listed is required and is the general device interrupt. If the optional
   amd,per-channel-interrupt property is specified, then one additional
-  interrupt for each DMA channel supported by the device should be specified
+  interrupt for each DMA channel supported by the device should be specified.
+  The last interrupt listed should be the PCS auto-negotiation interrupt.
 - clocks:
    - DMA clock for the amd-xgbe device (used for calculating the
      correct Rx interrupt watchdog timer value on a DMA channel
@@ -19,28 +23,15 @@ Required properties (ethernet device):
 - clock-names: Should be the names of the clocks
    - "dma_clk" for the DMA clock
    - "ptp_clk" for the PTP clock
-- phy-handle: See ethernet.txt file in the same directory
 - phy-mode: See ethernet.txt file in the same directory
 
-Optional properties (ethernet device):
+Optional properties:
 - mac-address: mac address to be assigned to the device. Can be overridden
   by UEFI.
 - dma-coherent: Present if dma operations are coherent
 - amd,per-channel-interrupt: Indicates that Rx and Tx complete will generate
   a unique interrupt for each DMA channel - this requires an additional
   interrupt be configured for each DMA channel
-
-Required properties (phy device):
-- compatible: Should be "amd,xgbe-phy-seattle-v1a"
-- reg: Address and length of the register sets for the device
-   - SerDes Rx/Tx registers
-   - SerDes integration registers (1/2)
-   - SerDes integration registers (2/2)
-- interrupt-parent: Should be the phandle for the interrupt controller
-  that services interrupts for this device
-- interrupts: Should contain the amd-xgbe-phy interrupt.
-
-Optional properties (phy device):
 - amd,speed-set: Speed capabilities of the device
     0 - 1GbE and 10GbE (default)
     1 - 2.5GbE and 10GbE
@@ -63,25 +54,19 @@ Example:
 	xgbe@e0700000 {
 		compatible = "amd,xgbe-seattle-v1a";
 		reg = <0 0xe0700000 0 0x80000>,
-		      <0 0xe0780000 0 0x80000>;
+		      <0 0xe0780000 0 0x80000>,
+		      <0 0xe1240800 0 0x00400>,
+		      <0 0xe1250000 0 0x00060>,
+		      <0 0xe1250080 0 0x00004>;
 		interrupt-parent = <&gic>;
 		interrupts = <0 325 4>,
-			     <0 326 1>, <0 327 1>, <0 328 1>, <0 329 1>;
+			     <0 326 1>, <0 327 1>, <0 328 1>, <0 329 1>,
+			     <0 323 4>;
 		amd,per-channel-interrupt;
 		clocks = <&xgbe_dma_clk>, <&xgbe_ptp_clk>;
 		clock-names = "dma_clk", "ptp_clk";
-		phy-handle = <&xgbe_phy>;
 		phy-mode = "xgmii";
 		mac-address = [ 02 a1 a2 a3 a4 a5 ];
-	};
-
-	xgbe_phy@e1240800 {
-		compatible = "amd,xgbe-phy-seattle-v1a";
-		reg = <0 0xe1240800 0 0x00400>,
-		      <0 0xe1250000 0 0x00060>,
-		      <0 0xe1250080 0 0x00004>;
-		interrupt-parent = <&gic>;
-		interrupts = <0 323 4>;
 		amd,speed-set = <0>;
 		amd,serdes-blwc = <1>, <1>, <0>;
 		amd,serdes-cdr-rate = <2>, <2>, <7>;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index 0c219b3..34c521d 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -300,14 +300,21 @@ static struct platform_device *xgbe_of_get_phy_pdev(struct xgbe_prv_data *pdata)
 	struct platform_device *phy_pdev;
 
 	phy_node = of_parse_phandle(dev->of_node, "phy-handle", 0);
-	if (!phy_node) {
-		dev_err(dev, "unable to locate phy device\n");
-		return NULL;
+	if (phy_node) {
+		/* Old style device tree:
+		 *   The XGBE and PHY resources are separate
+		 */
+		phy_pdev = of_find_device_by_node(phy_node);
+		of_node_put(phy_node);
+	} else {
+		/* New style device tree:
+		 *   The XGBE and PHY resources are grouped together with
+		 *   the PHY resources listed last
+		 */
+		get_device(dev);
+		phy_pdev = pdata->pdev;
 	}
 
-	phy_pdev = of_find_device_by_node(phy_node);
-	of_node_put(phy_node);
-
 	return phy_pdev;
 }
 #else   /* CONFIG_OF */
@@ -401,14 +408,14 @@ static int xgbe_probe(struct platform_device *pdev)
 	phy_dev = &phy_pdev->dev;
 
 	if (pdev == phy_pdev) {
-		/* ACPI:
+		/* New style device tree or ACPI:
 		 *   The XGBE and PHY resources are grouped together with
 		 *   the PHY resources listed last
 		 */
 		phy_memnum = xgbe_resource_count(pdev, IORESOURCE_MEM) - 3;
 		phy_irqnum = xgbe_resource_count(pdev, IORESOURCE_IRQ) - 1;
 	} else {
-		/* Device tree:
+		/* Old style device tree:
 		 *   The XGBE and PHY resources are separate
 		 */
 		phy_memnum = 0;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index b6aecc9..f535d19 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -129,7 +129,7 @@
 #include <net/dcbnl.h>
 
 #define XGBE_DRV_NAME		"amd-xgbe"
-#define XGBE_DRV_VERSION	"1.0.1"
+#define XGBE_DRV_VERSION	"1.0.2"
 #define XGBE_DRV_DESC		"AMD 10 Gigabit Ethernet Driver"
 
 /* Descriptor related defines */

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH net-next v1 6/7] amd-xgbe: Fix flow control setting logic
  2015-05-12 19:22 [PATCH net-next v1 0/7] amd-xgbe: AMD XGBE driver updates 2015-05-12 Tom Lendacky
                   ` (4 preceding siblings ...)
  2015-05-12 19:23 ` [PATCH net-next v1 5/7] amd-xgbe: Support defining PHY resources in ETH device node Tom Lendacky
@ 2015-05-12 19:23 ` Tom Lendacky
  2015-05-12 19:23 ` [PATCH net-next v1 7/7] amd-xgbe: Remove manual check and set of dma_mask pointer Tom Lendacky
  6 siblings, 0 replies; 12+ messages in thread
From: Tom Lendacky @ 2015-05-12 19:23 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

The flow control negotiation logic is flawed and does not properly
advertise the flow control settings during auto-negotiation or process
the negotiated results. Update the flow control support to set the
flow control auto-negotiation advertisement correctly and to process
the results appropriately.
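
As an aside (not part of the patch), a minimal standalone sketch of the
pause resolution introduced in xgbe_phy_status_aneg(); the helper name
is illustrative, and 0x400/0x800 are the Pause and Asym_Pause bits of
the auto-negotiation advertisement and link partner ability words:

	/* Resolve tx/rx pause from the local advertisement (ad) and the
	 * link partner ability (lp) words, honoring asymmetric pause.
	 */
	static void example_resolve_pause(unsigned int ad, unsigned int lp,
					  int *tx_pause, int *rx_pause)
	{
		*tx_pause = 0;
		*rx_pause = 0;

		if (ad & lp & 0x400) {
			/* Both ends advertise symmetric pause */
			*tx_pause = 1;
			*rx_pause = 1;
		} else if (ad & lp & 0x800) {
			/* Both ends advertise asymmetric pause; the
			 * direction depends on which end also
			 * advertises Pause.
			 */
			if (ad & 0x400)
				*rx_pause = 1;
			else if (lp & 0x400)
				*tx_pause = 1;
		}
	}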

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c     |    2 -
 drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c |   29 ++++++----
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c    |   73 ++++++++++++++++++--------
 drivers/net/ethernet/amd/xgbe/xgbe.h         |    8 +--
 4 files changed, 72 insertions(+), 40 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index 637b6c9..99a4c12 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -787,8 +787,6 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 {
 	pdata->phy_link = -1;
 	pdata->phy_speed = SPEED_UNKNOWN;
-	pdata->phy_tx_pause = pdata->tx_pause;
-	pdata->phy_rx_pause = pdata->rx_pause;
 
 	return pdata->phy_if.phy_reset(pdata);
 }
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
index b24a78c3..59e090e 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
@@ -247,9 +247,9 @@ static void xgbe_get_pauseparam(struct net_device *netdev,
 
 	DBGPR("-->xgbe_get_pauseparam\n");
 
-	pause->autoneg = pdata->pause_autoneg;
-	pause->tx_pause = pdata->tx_pause;
-	pause->rx_pause = pdata->rx_pause;
+	pause->autoneg = pdata->phy.pause_autoneg;
+	pause->tx_pause = pdata->phy.tx_pause;
+	pause->rx_pause = pdata->phy.rx_pause;
 
 	DBGPR("<--xgbe_get_pauseparam\n");
 }
@@ -265,19 +265,24 @@ static int xgbe_set_pauseparam(struct net_device *netdev,
 	DBGPR("  autoneg = %d, tx_pause = %d, rx_pause = %d\n",
 	      pause->autoneg, pause->tx_pause, pause->rx_pause);
 
-	pdata->pause_autoneg = pause->autoneg;
-	if (pause->autoneg) {
-		pdata->phy.advertising |= ADVERTISED_Pause;
-		pdata->phy.advertising |= ADVERTISED_Asym_Pause;
+	if (pause->autoneg && (pdata->phy.autoneg != AUTONEG_ENABLE))
+		return -EINVAL;
+
+	pdata->phy.pause_autoneg = pause->autoneg;
+	pdata->phy.tx_pause = pause->tx_pause;
+	pdata->phy.rx_pause = pause->rx_pause;
 
-	} else {
-		pdata->phy.advertising &= ~ADVERTISED_Pause;
-		pdata->phy.advertising &= ~ADVERTISED_Asym_Pause;
+	pdata->phy.advertising &= ~ADVERTISED_Pause;
+	pdata->phy.advertising &= ~ADVERTISED_Asym_Pause;
 
-		pdata->tx_pause = pause->tx_pause;
-		pdata->rx_pause = pause->rx_pause;
+	if (pause->rx_pause) {
+		pdata->phy.advertising |= ADVERTISED_Pause;
+		pdata->phy.advertising |= ADVERTISED_Asym_Pause;
 	}
 
+	if (pause->tx_pause)
+		pdata->phy.advertising ^= ADVERTISED_Asym_Pause;
+
 	if (netif_running(netdev))
 		ret = pdata->phy_if.phy_config_aneg(pdata);
 
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index 1ae4bfb..cea19a3 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -737,6 +737,18 @@ static void xgbe_an_init(struct xgbe_prv_data *pdata)
 	XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE, reg);
 }
 
+static const char *xgbe_phy_fc_string(struct xgbe_prv_data *pdata)
+{
+	if (pdata->tx_pause && pdata->rx_pause)
+		return "rx/tx";
+	else if (pdata->rx_pause)
+		return "rx";
+	else if (pdata->tx_pause)
+		return "tx";
+	else
+		return "off";
+}
+
 static const char *xgbe_phy_speed_string(int speed)
 {
 	switch (speed) {
@@ -760,7 +772,7 @@ static void xgbe_phy_print_status(struct xgbe_prv_data *pdata)
 			    "Link is Up - %s/%s - flow control %s\n",
 			    xgbe_phy_speed_string(pdata->phy.speed),
 			    pdata->phy.duplex == DUPLEX_FULL ? "Full" : "Half",
-			    pdata->phy.pause ? "rx/tx" : "off");
+			    xgbe_phy_fc_string(pdata));
 	else
 		netdev_info(pdata->netdev, "Link is Down\n");
 }
@@ -771,24 +783,18 @@ static void xgbe_phy_adjust_link(struct xgbe_prv_data *pdata)
 
 	if (pdata->phy.link) {
 		/* Flow control support */
-		if (pdata->pause_autoneg) {
-			if (pdata->phy.pause || pdata->phy.asym_pause) {
-				pdata->tx_pause = 1;
-				pdata->rx_pause = 1;
-			} else {
-				pdata->tx_pause = 0;
-				pdata->rx_pause = 0;
-			}
-		}
+		pdata->pause_autoneg = pdata->phy.pause_autoneg;
 
-		if (pdata->tx_pause != pdata->phy_tx_pause) {
+		if (pdata->tx_pause != pdata->phy.tx_pause) {
+			new_state = 1;
 			pdata->hw_if.config_tx_flow_control(pdata);
-			pdata->phy_tx_pause = pdata->tx_pause;
+			pdata->tx_pause = pdata->phy.tx_pause;
 		}
 
-		if (pdata->rx_pause != pdata->phy_rx_pause) {
+		if (pdata->rx_pause != pdata->phy.rx_pause) {
+			new_state = 1;
 			pdata->hw_if.config_rx_flow_control(pdata);
-			pdata->phy_rx_pause = pdata->rx_pause;
+			pdata->rx_pause = pdata->phy.rx_pause;
 		}
 
 		/* Speed support */
@@ -835,9 +841,6 @@ static int xgbe_phy_config_fixed(struct xgbe_prv_data *pdata)
 	if (pdata->phy.duplex != DUPLEX_FULL)
 		return -EINVAL;
 
-	pdata->phy.pause = 0;
-	pdata->phy.asym_pause = 0;
-
 	return 0;
 }
 
@@ -933,8 +936,6 @@ static void xgbe_phy_status_force(struct xgbe_prv_data *pdata)
 		}
 	}
 	pdata->phy.duplex = DUPLEX_FULL;
-	pdata->phy.pause = 0;
-	pdata->phy.asym_pause = 0;
 }
 
 static void xgbe_phy_status_aneg(struct xgbe_prv_data *pdata)
@@ -957,9 +958,21 @@ static void xgbe_phy_status_aneg(struct xgbe_prv_data *pdata)
 	if (lp_reg & 0x800)
 		pdata->phy.lp_advertising |= ADVERTISED_Asym_Pause;
 
-	ad_reg &= lp_reg;
-	pdata->phy.pause = (ad_reg & 0x400) ? 1 : 0;
-	pdata->phy.asym_pause = (ad_reg & 0x800) ? 1 : 0;
+	if (pdata->phy.pause_autoneg) {
+		/* Set flow control based on auto-negotiation result */
+		pdata->phy.tx_pause = 0;
+		pdata->phy.rx_pause = 0;
+
+		if (ad_reg & lp_reg & 0x400) {
+			pdata->phy.tx_pause = 1;
+			pdata->phy.rx_pause = 1;
+		} else if (ad_reg & lp_reg & 0x800) {
+			if (ad_reg & 0x400)
+				pdata->phy.rx_pause = 1;
+			else if (lp_reg & 0x400)
+				pdata->phy.tx_pause = 1;
+		}
+	}
 
 	/* Compare Advertisement and Link Partner register 2 */
 	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
@@ -1223,6 +1236,22 @@ static void xgbe_phy_init(struct xgbe_prv_data *pdata)
 
 	pdata->phy.link = 0;
 
+	pdata->phy.pause_autoneg = pdata->pause_autoneg;
+	pdata->phy.tx_pause = pdata->tx_pause;
+	pdata->phy.rx_pause = pdata->rx_pause;
+
+	/* Fix up Flow Control advertising */
+	pdata->phy.advertising &= ~ADVERTISED_Pause;
+	pdata->phy.advertising &= ~ADVERTISED_Asym_Pause;
+
+	if (pdata->rx_pause) {
+		pdata->phy.advertising |= ADVERTISED_Pause;
+		pdata->phy.advertising |= ADVERTISED_Asym_Pause;
+	}
+
+	if (pdata->tx_pause)
+		pdata->phy.advertising ^= ADVERTISED_Asym_Pause;
+
 	if (netif_msg_drv(pdata))
 		xgbe_dump_phy_registers(pdata);
 }
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index f535d19..63d72a1 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -539,10 +539,12 @@ struct xgbe_phy {
 	int autoneg;
 	int speed;
 	int duplex;
-	int pause;
-	int asym_pause;
 
 	int link;
+
+	int pause_autoneg;
+	int tx_pause;
+	int rx_pause;
 };
 
 struct xgbe_mmc_stats {
@@ -910,8 +912,6 @@ struct xgbe_prv_data {
 	phy_interface_t phy_mode;
 	int phy_link;
 	int phy_speed;
-	unsigned int phy_tx_pause;
-	unsigned int phy_rx_pause;
 
 	/* MDIO/PHY related settings */
 	struct xgbe_phy phy;

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH net-next v1 7/7] amd-xgbe: Remove manual check and set of dma_mask pointer
  2015-05-12 19:22 [PATCH net-next v1 0/7] amd-xgbe: AMD XGBE driver updates 2015-05-12 Tom Lendacky
                   ` (5 preceding siblings ...)
  2015-05-12 19:23 ` [PATCH net-next v1 6/7] amd-xgbe: Fix flow control setting logic Tom Lendacky
@ 2015-05-12 19:23 ` Tom Lendacky
  6 siblings, 0 replies; 12+ messages in thread
From: Tom Lendacky @ 2015-05-12 19:23 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

The underlying device support will set the device dma_mask pointer
if DMA is set up properly for the device. Remove the manual check for
and assignment of dma_mask when it is null; instead, just error out
when dma_set_mask_and_coherent() fails because dma_mask is null.
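
As an aside (not part of the patch), the resulting probe sequence can
be sketched as below; the error handling and message text are
illustrative rather than the driver's exact code.
dma_set_mask_and_coherent() returns -EIO on its own when dev->dma_mask
was never set up, so no manual fixup is needed:

	ret = dma_set_mask_and_coherent(dev,
					DMA_BIT_MASK(pdata->hw_feat.dma_width));
	if (ret) {
		dev_err(dev, "dma_set_mask_and_coherent failed\n");
		return ret;
	}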

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-main.c |    2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index 34c521d..fb7c961 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -678,8 +678,6 @@ static int xgbe_probe(struct platform_device *pdev)
 	xgbe_default_config(pdata);
 
 	/* Set the DMA mask */
-	if (!dev->dma_mask)
-		dev->dma_mask = &dev->coherent_dma_mask;
 	ret = dma_set_mask_and_coherent(dev,
 					DMA_BIT_MASK(pdata->hw_feat.dma_width));
 	if (ret) {

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH net-next v1 2/7] amd-xgbe: Add netif_msg_* support for driver messages
  2015-05-12 19:22 ` [PATCH net-next v1 2/7] amd-xgbe: Add netif_msg_* support for driver messages Tom Lendacky
@ 2015-05-12 19:44   ` Joe Perches
  2015-05-12 20:03     ` Tom Lendacky
  0 siblings, 1 reply; 12+ messages in thread
From: Joe Perches @ 2015-05-12 19:44 UTC (permalink / raw)
  To: Tom Lendacky; +Cc: netdev, David Miller

On Tue, 2015-05-12 at 14:22 -0500, Tom Lendacky wrote:
> Add support for the network interface message level settings for
> determining whether to issue some of the driver messages.

[]

> diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c b/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c
[]
> @@ -150,9 +150,14 @@ static int xgbe_dcb_ieee_setets(struct net_device *netdev,
>  	tc_ets = 0;
>  	tc_ets_weight = 0;
>  	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
> -		DBGPR("  TC%u: tx_bw=%hhu, rx_bw=%hhu, tsa=%hhu\n", i,
> -		      ets->tc_tx_bw[i], ets->tc_rx_bw[i], ets->tc_tsa[i]);
> -		DBGPR("  PRIO%u: TC=%hhu\n", i, ets->prio_tc[i]);
> +		if (netif_msg_drv(pdata)) {
> +			netdev_dbg(netdev,
> +				   "TC%u: tx_bw=%hhu, rx_bw=%hhu, tsa=%hhu\n",
> +				   i, ets->tc_tx_bw[i], ets->tc_rx_bw[i],
> +				   ets->tc_tsa[i]);

These might be more concise and work a bit easier
with dynamic_debug using

	netif_dbg(priv, type, netdev, fmt, ...)

like:

		netif_dbg(pdata, drv, netdev,
			  "TC%u: tx_bw=%hhu, rx_bw=%hhu, tsa=%hhu\n",
			  i, ets->tc_tx_bw[i], ets->tc_rx_bw[i], ets->tc_tsa[i]);

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH net-next v1 2/7] amd-xgbe: Add netif_msg_* support for driver messages
  2015-05-12 19:44   ` Joe Perches
@ 2015-05-12 20:03     ` Tom Lendacky
  0 siblings, 0 replies; 12+ messages in thread
From: Tom Lendacky @ 2015-05-12 20:03 UTC (permalink / raw)
  To: Joe Perches; +Cc: netdev, David Miller

On 05/12/2015 02:44 PM, Joe Perches wrote:
> On Tue, 2015-05-12 at 14:22 -0500, Tom Lendacky wrote:
>> Add support for the network interface message level settings for
>> determining whether to issue some of the driver messages.
>
> []
>
>> diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c b/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c
> []
>> @@ -150,9 +150,14 @@ static int xgbe_dcb_ieee_setets(struct net_device *netdev,
>>   	tc_ets = 0;
>>   	tc_ets_weight = 0;
>>   	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
>> -		DBGPR("  TC%u: tx_bw=%hhu, rx_bw=%hhu, tsa=%hhu\n", i,
>> -		      ets->tc_tx_bw[i], ets->tc_rx_bw[i], ets->tc_tsa[i]);
>> -		DBGPR("  PRIO%u: TC=%hhu\n", i, ets->prio_tc[i]);
>> +		if (netif_msg_drv(pdata)) {
>> +			netdev_dbg(netdev,
>> +				   "TC%u: tx_bw=%hhu, rx_bw=%hhu, tsa=%hhu\n",
>> +				   i, ets->tc_tx_bw[i], ets->tc_rx_bw[i],
>> +				   ets->tc_tsa[i]);
>
> These might be more concise and work a bit easier
> with dynamic_debug using
>
> 	netif_dbg(priv, type, netdev, fmt, ...)
>
> like:
>
> 		netif_dbg(pdata, drv, netdev,
> 			  "TC%u: tx_bw=%hhu, rx_bw=%hhu, tsa=%hhu\n",
> 			  i, ets->tc_tx_bw[i], ets->tc_rx_bw[i], ets->tc_tsa[i]);
>
>

I didn't realize that interface was there.  I'll convert them over
to the netif_ format.  Thanks for the pointer.

Tom

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH net-next v1 4/7] amd-xgbe: Move the PHY support into amd-xgbe
  2015-05-12 19:22 ` [PATCH net-next v1 4/7] amd-xgbe: Move the PHY support into amd-xgbe Tom Lendacky
@ 2015-05-12 22:19   ` Florian Fainelli
  2015-05-13 14:01     ` Tom Lendacky
  0 siblings, 1 reply; 12+ messages in thread
From: Florian Fainelli @ 2015-05-12 22:19 UTC (permalink / raw)
  To: Tom Lendacky, netdev; +Cc: David Miller

On 12/05/15 12:22, Tom Lendacky wrote:
> The AMD XGBE device is intended to work with a specific integrated PHY
> and that PHY is not meant to be a standalone PHY for use by other
> devices. As such this patch removes the phylib driver and implements
> the PHY support in the amd-xgbe driver (the majority of the logic from
> the phylib driver is moved into the amd-xgbe driver).

Didn't you submit a similar patch a while ago, and David asked to keep
the PHY driver separate? Even though the internal PHY driver might not
be reusable on another platform, having your Ethernet driver implement
a PHY library driver seems like a potential layering issue.
-- 
Florian

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH net-next v1 4/7] amd-xgbe: Move the PHY support into amd-xgbe
  2015-05-12 22:19   ` Florian Fainelli
@ 2015-05-13 14:01     ` Tom Lendacky
  0 siblings, 0 replies; 12+ messages in thread
From: Tom Lendacky @ 2015-05-13 14:01 UTC (permalink / raw)
  To: Florian Fainelli, netdev; +Cc: David Miller

On 05/12/2015 05:19 PM, Florian Fainelli wrote:
> On 12/05/15 12:22, Tom Lendacky wrote:
>> The AMD XGBE device is intended to work with a specific integrated PHY
>> and that PHY is not meant to be a standalone PHY for use by other
>> devices. As such this patch removes the phylib driver and implements
>> the PHY support in the amd-xgbe driver (the majority of the logic from
>> the phylib driver is moved into the amd-xgbe driver).
>
> Didn't you submit a similar patch a while ago, and David asked to keep
> the PHY driver separate? Even though the internal PHY driver might not
> be reusable on another platform, having your Ethernet driver implement
> a PHY library driver seems like a potential layering issue.

My interpretation of what David said was that he asked to keep it
separate because I was still using the phy_device structure and calling
into the phy library API. This patch does not use the phy library API.
I believe what I'm doing now is no different from what other NIC
drivers do when they manage their PHY(s) on integrated cards (e.g.
ixgbe).
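
As an aside, a rough sketch of the shape this takes: the MAC driver
invokes its own ops table directly (modelled on the phy_reset and
phy_config_aneg ops visible in this series) instead of registering a
phylib driver; all names below are illustrative:

	struct example_prv_data;

	/* MAC-private PHY ops, called directly by the ethernet driver;
	 * no phy_device or phylib registration is involved.
	 */
	struct example_phy_if {
		int (*phy_reset)(struct example_prv_data *);
		int (*phy_config_aneg)(struct example_prv_data *);
	};

	struct example_prv_data {
		struct example_phy_if phy_if;
		/* ... MAC state ... */
	};

	static int example_phy_start(struct example_prv_data *pdata)
	{
		int ret;

		ret = pdata->phy_if.phy_reset(pdata);
		if (ret)
			return ret;

		return pdata->phy_if.phy_config_aneg(pdata);
	}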

Thanks,
Tom

>

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2015-05-13 14:01 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-12 19:22 [PATCH net-next v1 0/7] amd-xgbe: AMD XGBE driver updates 2015-05-12 Tom Lendacky
2015-05-12 19:22 ` [PATCH net-next v1 1/7] amd-xgbe: Add additional stats to be reported via ethtool Tom Lendacky
2015-05-12 19:22 ` [PATCH net-next v1 2/7] amd-xgbe: Add netif_msg_* support for driver messages Tom Lendacky
2015-05-12 19:44   ` Joe Perches
2015-05-12 20:03     ` Tom Lendacky
2015-05-12 19:22 ` [PATCH net-next v1 3/7] amd-xgbe: Rework the Rx path SKB allocation Tom Lendacky
2015-05-12 19:22 ` [PATCH net-next v1 4/7] amd-xgbe: Move the PHY support into amd-xgbe Tom Lendacky
2015-05-12 22:19   ` Florian Fainelli
2015-05-13 14:01     ` Tom Lendacky
2015-05-12 19:23 ` [PATCH net-next v1 5/7] amd-xgbe: Support defining PHY resources in ETH device node Tom Lendacky
2015-05-12 19:23 ` [PATCH net-next v1 6/7] amd-xgbe: Fix flow control setting logic Tom Lendacky
2015-05-12 19:23 ` [PATCH net-next v1 7/7] amd-xgbe: Remove manual check and set of dma_mask pointer Tom Lendacky
